query_id (string, 32 chars) | query (string, 6–3.9k chars) | positive_passages (list, 1–21 items) | negative_passages (list, 10–100 items) | subset (string, 7 classes) |
---|---|---|---|---|
738dbe44da2eeb389ff42e8d9c0eb050
|
Gated Recursive Neural Network for Chinese Word Segmentation
|
[
{
"docid": "1af1ab4da0fe4368b1ad97801c4eb015",
"text": "Standard approaches to Chinese word segmentation treat the problem as a tagging task, assigning labels to the characters in the sequence indicating whether the character marks a word boundary. Discriminatively trained models based on local character features are used to make the tagging decisions, with Viterbi decoding finding the highest scoring segmentation. In this paper we propose an alternative, word-based segmentor, which uses features based on complete words and word sequences. The generalized perceptron algorithm is used for discriminative training, and we use a beamsearch decoder. Closed tests on the first and secondSIGHAN bakeoffs show that our system is competitive with the best in the literature, achieving the highest reported F-scores for a number of corpora.",
"title": ""
}
] |
[
{
"docid": "e2dbcae54c48a88f840e09112c55fa86",
"text": "This paper aims to improve the throughput of a broadcasting system that supports the transmission of multiple services with differentiated minimum signal-to-noise ratios (SNRs) required for successful receptions simultaneously. We propose a novel multiplexing method called bit division multiplexing (BDM), which outperforms the conventional time division multiplexing (TDM) counterpart by extending the multiplexing from symbol level to bit level. Benefiting from multiple error protection levels of bits within each high-order constellation symbol, BDM can provide so-called nonlinear allocation of the channel resources. Both average mutual information (AMI) analysis and simulation results demonstrate that, compared with TDM, BDM can significantly improve the overall transmission rate of multiple services subject to the differentiated minimum SNRs required for successful receptions, or decrease the minimum SNRs required for successful receptions subject to the transmission rate requirements of multiple services.",
"title": ""
},
{
"docid": "b17a8e121f865b7143bc2e38fa367b07",
"text": "Radio frequency (r.f.) has been investigated as a means of externally powering miniature and long term implant telemetry systems. Optimum power transfer from the transmitter to the receiving coil is desired for total system efficiency. A seven step design procedure for the transmitting and receiving coils is described based on r.f., coil diameter, coil spacing, load and the number of turns of the coil. An inductance tapping circuit and a voltage doubler circuit have been built in accordance with the design procedure. Experimental results were within the desired total system efficiency ranges of 18% and 23%, respectively. On a étudié la fréquence radio (f.r.) en tant que source extérieure permettant de faire fonctionner les systèmes télémétriques d'implants miniatures à long terme. Afin d'assurer une efficacité totale au système, il est nécessaire d'obtenir un transfert de puissance optimum de l'émetteur à la bobine réceptrice. On donne la description d'une technique de conception en sept temps, fondée sur la fréquence radio, le diamètre de la bobine, l'espacement des spires, la charge et le nombre de tours de la bobine. Un circuit de captage de tension par induction et un circuit doubleur de tension ont été construits conformément à la méthode de conception. Les résultats expérimentaux étaient compris dans les limites d'efficacité totale souhaitable pour le système, soit 18% à 23%, respectivement. Hochfrequenz wurde als Mittel zur externen Energieversorgung von Miniatur und langfristigen Implantat-Telemetriesystemen untersucht. Zur Verwirklichung der höchsten Leistungsfähigkeit braucht das System optimale Energieübertragung von Sendegerät zu Empfangsspule. Ein auf Hochfrequenz beruhendes siebenstufiges Konstruktionssystem für Sende- und Empfangsspulen wird beschrieben, mit Hinweisen über Spulendurchmesser, Spulenanordnung, Ladung und die Anzahl der Wicklungen. Ein Induktionsanzapfstromkreis und ein Spannungsverdoppler wurden dem Konstruktionsverfahren entsprechend gebaut. Versuchsergebnisse lagen im Bereich des gewünschten Systemleistungsgrades von 18% und 23%.",
"title": ""
},
{
"docid": "28ab07763d682ae367b5c9ebd9c9ef13",
"text": "Nowadays, the teaching-learning processes are constantly changing, one of the latest modifications promises to strengthen the development of digital skills and thinking in the participants, from an early age. In this sense, the present article shows the advances of a study oriented to the formation of programming abilities, computational thinking and collaborative learning in an initial education context. As part of the study it was initially proposed to conduct a training day for teachers who will participate in the experimental phase of the research, considering this human resource as a link of great importance to achieve maximum use of students in the development of curricular themes of the level, using ICT resources and programmable educational robots. The criterion and the positive acceptance expressed by the teaching group after the evaluation applied at the end of the session, constitute a good starting point for the development of the following activities that make up the research in progress.",
"title": ""
},
{
"docid": "7e92b2c7f39b7200dd8b9330676294b9",
"text": "Realizing the democratic promise of nanopore sequencing requires the development of new bioinformatics approaches to deal with its specific error characteristics. Here we present GraphMap, a mapping algorithm designed to analyse nanopore sequencing reads, which progressively refines candidate alignments to robustly handle potentially high-error rates and a fast graph traversal to align long reads with speed and high precision (>95%). Evaluation on MinION sequencing data sets against short- and long-read mappers indicates that GraphMap increases mapping sensitivity by 10-80% and maps >95% of bases. GraphMap alignments enabled single-nucleotide variant calling on the human genome with increased sensitivity (15%) over the next best mapper, precise detection of structural variants from length 100 bp to 4 kbp, and species and strain-specific identification of pathogens using MinION reads. GraphMap is available open source under the MIT license at https://github.com/isovic/graphmap.",
"title": ""
},
{
"docid": "4fbd13e1bcbb78bac456addce272cbe6",
"text": "Musical memory is considered to be partly independent from other memory systems. In Alzheimer's disease and different types of dementia, musical memory is surprisingly robust, and likewise for brain lesions affecting other kinds of memory. However, the mechanisms and neural substrates of musical memory remain poorly understood. In a group of 32 normal young human subjects (16 male and 16 female, mean age of 28.0 ± 2.2 years), we performed a 7 T functional magnetic resonance imaging study of brain responses to music excerpts that were unknown, recently known (heard an hour before scanning), and long-known. We used multivariate pattern classification to identify brain regions that encode long-term musical memory. The results showed a crucial role for the caudal anterior cingulate and the ventral pre-supplementary motor area in the neural encoding of long-known as compared with recently known and unknown music. In the second part of the study, we analysed data of three essential Alzheimer's disease biomarkers in a region of interest derived from our musical memory findings (caudal anterior cingulate cortex and ventral pre-supplementary motor area) in 20 patients with Alzheimer's disease (10 male and 10 female, mean age of 68.9 ± 9.0 years) and 34 healthy control subjects (14 male and 20 female, mean age of 68.1 ± 7.2 years). Interestingly, the regions identified to encode musical memory corresponded to areas that showed substantially minimal cortical atrophy (as measured with magnetic resonance imaging), and minimal disruption of glucose-metabolism (as measured with (18)F-fluorodeoxyglucose positron emission tomography), as compared to the rest of the brain. However, amyloid-β deposition (as measured with (18)F-flobetapir positron emission tomography) within the currently observed regions of interest was not substantially less than in the rest of the brain, which suggests that the regions of interest were still in a very early stage of the expected course of biomarker development in these regions (amyloid accumulation → hypometabolism → cortical atrophy) and therefore relatively well preserved. Given the observed overlap of musical memory regions with areas that are relatively spared in Alzheimer's disease, the current findings may thus explain the surprising preservation of musical memory in this neurodegenerative disease.",
"title": ""
},
{
"docid": "8673422c05f762241bf1017df0c02199",
"text": "Despite the voluminous literatures on testing effects and lag effects, surprisingly few studies have examined whether testing and lag effects interact, and no prior research has directly investigated why this might be the case. To this end, in the present research we evaluated the elaborative retrieval hypothesis (ERH) as a possible explanation for why testing effects depend on lag. Elaborative retrieval involves the activation of cue-related information during the long-term memory search for the target. If the target is successfully retrieved, this additional information is encoded with the cue-target pair to yield a more elaborated memory trace that enhances target access on a later memory test. The ERH states that the degree of elaborative retrieval during practice is greater when testing takes place after a long rather than a short lag (whereas elaborative retrieval during restudy is minimal at either lag). Across two experiments, final-test performance was greater following practice testing than following restudy only, and this memorial advantage was greater with long-lag than with short-lag practice. The final test also included novel cue conditions used to diagnose the degree of elaborative retrieval during practice. The overall pattern of performance in these conditions provided consistent evidence for the ERH, with more extensive elaborative retrieval during long- than during short-lag practice testing.",
"title": ""
},
{
"docid": "d71016d17677eeefb7bdfb66e6077885",
"text": "Meaningless computer generated scientific texts can be used in several ways. For example, they have allowed Ike Antkare to become one of the most highly cited scientists of the modern world. Such fake publications are also appearing in real scientific conferences and, as a result, in the bibliographic services (Scopus, ISI-Web of Knowledge, Google Scholar,...). Recently, more than 120 papers have been withdrawn from subscription databases of two high-profile publishers, IEEE and Springer, because they were computer generated thanks to the SCIgen software. This software, based on a Probabilistic Context Free Grammar (PCFG), was designed to randomly generate computer science research papers. Together with PCFG, Markov Chains (MC) are the mains ways to generated Meaning-less texts. This paper presents the mains characteristic of texts generated by PCFG and MC. For the time being, PCFG generators are quite easy to spot by an automatic way, using intertextual distance combined with automatic clustering, because these generators are behaving like authors with specifics features such as a very low vocabulary richness and unusual sentence structures. This shows that quantitative tools are effective to characterize originality (or banality) of authors’ language. Cyril Labbé Univ. Grenoble Alpes, LIG, F-38000 Grenoble, France, e-mail: [email protected] CNRS, LIG, F-38000 Grenoble, France Dominique Labbé Univ. Grenoble Alpes, PACTE, F-38000 Grenoble, France, e-mail: [email protected] CNRS, PACTE, F-38000 Grenoble, France François Portet Univ. Grenoble Alpes, LIG, F-38000 Grenoble, France, e-mail: [email protected] CNRS, LIG, F-38000 Grenoble, France",
"title": ""
},
{
"docid": "3869380cc8c8fa32ebe9f26be5275a32",
"text": "A coupled 2D-3D finite element model developed with COMSOL Multiphysics software platform is proposed to design novel acoustic particle velocity sensors. The device consists of four silicided polysilicon wires arranged in a Wheatstone full-bridge configuration. Each wire has been divided into three segments placed over suspended silicon dioxide membranes. The dependence of the device sensitivity and frequency response on geometric dimensions has been investigated.",
"title": ""
},
{
"docid": "da3650998a4bd6ea31467daa631d0e05",
"text": "Consideration of facial muscle dynamics is underappreciated among clinicians who provide injectable filler treatment. Injectable fillers are customarily used to fill static wrinkles, folds, and localized areas of volume loss, whereas neuromodulators are used to address excessive muscle movement. However, a more comprehensive understanding of the role of muscle function in facial appearance, taking into account biomechanical concepts such as the balance of activity among synergistic and antagonistic muscle groups, is critical to restoring facial appearance to that of a typical youthful individual with facial esthetic treatments. Failure to fully understand the effects of loss of support (due to aging or congenital structural deficiency) on muscle stability and interaction can result in inadequate or inappropriate treatment, producing an unnatural appearance. This article outlines these concepts to provide an innovative framework for an understanding of the role of muscle movement on facial appearance and presents cases that illustrate how modulation of muscle movement with injectable fillers can address structural deficiencies, rebalance abnormal muscle activity, and restore facial appearance. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.",
"title": ""
},
{
"docid": "077e4307caf9ac3c1f9185f0eaf58524",
"text": "Many text mining tools cannot be applied directly to documents available on web pages. There are tools for fetching and preprocessing of textual data, but combining them in one working tool chain can be time consuming. The preprocessing task is even more labor-intensive if documents are located on multiple remote sources with different storage formats. In this paper we propose the simplification of data preparation process for cases when data come from wide range of web resources. We developed an open-sourced tool, called Kayur, that greatly minimizes time and effort required for routine data preprocessing steps, allowing to quickly proceed to the main task of data analysis. The datasets generated by the tool are ready to be loaded into a data mining workbench, such as WEKA or Carrot2, to perform classification, feature prediction, and other data mining tasks.",
"title": ""
},
{
"docid": "8dec8f3fd456174bb460e24161eb6903",
"text": "Developments in pervasive computing introduced a new world of computing where networked processors embedded and distributed in everyday objects communicating with each other over wireless links. Computers in such environments work in the background while establishing connections among them dynamically and hence will be less visible and intrusive. Such a vision raises questions about how to manage issues like privacy, trust and identity in those environments. In this paper, we review the technical challenges that face pervasive computing environments in relation to each of these issues. We then present a number of security related considerations and use them as a basis for comparison between pervasive and traditional computing. We will argue that these considerations pose particular concerns and challenges to the design and implementation of pervasive environments which are different to those usually found in traditional computing environments. To address these concerns and challenges, further research is needed. We will present a number of directions and topics for possible future research with respect to each of the three issues.",
"title": ""
},
{
"docid": "a7f1565d548359c9f19bed304c2fbba6",
"text": "Handwritten character generation is a popular research topic with various applications. Various methods have been proposed in the literatures which are based on methods such as pattern recognition, machine learning, deep learning or others. However, seldom method could generate realistic and natural handwritten characters with a built-in determination mechanism to enhance the quality of generated image and make the observers unable to tell whether they are written by a person. To address these problems, in this paper, we proposed a novel generative adversarial network, multi-scale multi-class generative adversarial network (MSMC-CGAN). It is a neural network based on conditional generative adversarial network (CGAN), and it is designed for realistic multi-scale character generation. MSMC-CGAN combines the global and partial image information as condition, and the condition can also help us to generate multi-class handwritten characters. Our model is designed with unique neural network structures, image features and training method. To validate the performance of our model, we utilized it in Chinese handwriting generation, and an evaluation method called mean opinion score (MOS) was used. The MOS results show that MSMC-CGAN achieved good performance.",
"title": ""
},
{
"docid": "454d4211d068fb5009cced3a3dca774b",
"text": "The occurrence of ferroresonance oscillations of high voltage inductive (electromagnetic) voltage transformers (VT) has been recorded and reported in a number of papers and reports. Because of its non-linear characteristic, inductive voltage transformer has the possibility of causing ferroresonance with capacitances present in the transmission network, if initiated by a transient occurrence such as switching operation or fault. One of the solutions for ferroresonance mitigation is introducing an air gap into voltage transformer core magnetic path, thus linearizing its magnetizing characteristic and decreasing the possibility of a ferroresonance occurrence. This paper presents results of numerical ATP-EMTP simulation of typical ferroresonance situation involving inductive voltage transformers in high voltage networks with circuit breaker opening operation after which the voltage transformer remains energized through the circuit breaker grading capacitance. Main variable in calculating to the ferroresonance occurrence probability was the magnetizing characteristic change caused by the introduction of an air gap to the VT core, and separate diagrams are presented for VTs with different air gap length, including the paramount gapped transformer design – open core voltage transformers.",
"title": ""
},
{
"docid": "12357019e2805e88b2bd47bfb331ffd7",
"text": "This paper presents a deep neural solver to automatically solve math word problems. In contrast to previous statistical learning approaches, we directly translate math word problems to equation templates using a recurrent neural network (RNN) model, without sophisticated feature engineering. We further design a hybrid model that combines the RNN model and a similarity-based retrieval model to achieve additional performance improvement. Experiments conducted on a large dataset show that the RNN model and the hybrid model significantly outperform stateof-the-art statistical learning methods for math word problem solving.",
"title": ""
},
{
"docid": "3ae3e7f38be2f2d989dde298a64d9ba4",
"text": "A number of compilers exploit the following strategy: translate a term to continuation-passing style (CPS) and optimize the resulting term using a sequence of reductions. Recent work suggests that an alternative strategy is superior: optimize directly in an extended source calculus. We suggest that the appropriate relation between the source and target calculi may be captured by a special case of a Galois connection known as a reflection. Previous work has focused on the weaker notion of an equational correspondence, which is based on equality rather than reduction. We show that Moggi's monad translation and Plotkin's CPS translation can both be regarded as reflections, and thereby strengthen a number of results in the literature.",
"title": ""
},
{
"docid": "7ccfa843351f59c3cf618e13bd0233d5",
"text": "Collaborative Filtering (CF) is the most popular method for recommender systems. The principal idea of CF is that users might be interested in items that are favorited by similar users, and most of the existing CF methods measure users' preferences by their behaviours over all the items. However, users might have different interests over different topics, thus might share similar preferences with different groups of users over different sets of items. In this paper, we propose a novel and scalable method CCCF which improves the performance of CF methods via user-item co-clustering. CCCF first clusters users and items into several subgroups, where each subgroup includes a set of like-minded users and a set of items in which these users share their interests. Then, traditional CF methods can be easily applied to each subgroup, and the recommendation results from all the subgroups can be easily aggregated. Compared with previous works, CCCF has several advantages including scalability, flexibility, interpretability and extensibility. Experimental results on four real world data sets demonstrate that the proposed method significantly improves the performance of several state-of-the-art recommendation algorithms.",
"title": ""
},
{
"docid": "876c0be7acfa5d7b9e863da5b7cfefdc",
"text": "In the era of big data, one is often confronted with the problem of high dimensional data for many machine learning or data mining tasks. Feature selection, as a dimension reduction technique, is useful for alleviating the curse of dimensionality while preserving interpretability. In this paper, we focus on unsupervised feature selection, as class labels are usually expensive to obtain. Unsupervised feature selection is typically more challenging than its supervised counterpart due to the lack of guidance from class labels. Recently, regression-based methods with L2,1 norms have gained much popularity as they are able to evaluate features jointly which, however, consider only linear correlations between features and pseudo-labels. In this paper, we propose a novel nonlinear joint unsupervised feature selection method based on kernel alignment. The aim is to find a succinct set of features that best aligns with the original features in the kernel space. It can evaluate features jointly in a nonlinear manner and provides a good ‘0/1’ approximation for the selection indicator vector. We formulate it as a constrained optimization problem and develop a Spectral Projected Gradient (SPG) method to solve the optimization problem. Experimental results on several real-world datasets demonstrate that our proposed method outperforms the state-of-the-art approaches significantly.",
"title": ""
},
{
"docid": "a6ddbe0f834c38079282db91599e076d",
"text": "BACKGROUND\nThe efficacy of closure of a patent foramen ovale (PFO) in the prevention of recurrent stroke after cryptogenic stroke is uncertain. We investigated the effect of PFO closure combined with antiplatelet therapy versus antiplatelet therapy alone on the risks of recurrent stroke and new brain infarctions.\n\n\nMETHODS\nIn this multinational trial involving patients with a PFO who had had a cryptogenic stroke, we randomly assigned patients, in a 2:1 ratio, to undergo PFO closure plus antiplatelet therapy (PFO closure group) or to receive antiplatelet therapy alone (antiplatelet-only group). Imaging of the brain was performed at the baseline screening and at 24 months. The coprimary end points were freedom from clinical evidence of ischemic stroke (reported here as the percentage of patients who had a recurrence of stroke) through at least 24 months after randomization and the 24-month incidence of new brain infarction, which was a composite of clinical ischemic stroke or silent brain infarction detected on imaging.\n\n\nRESULTS\nWe enrolled 664 patients (mean age, 45.2 years), of whom 81% had moderate or large interatrial shunts. During a median follow-up of 3.2 years, clinical ischemic stroke occurred in 6 of 441 patients (1.4%) in the PFO closure group and in 12 of 223 patients (5.4%) in the antiplatelet-only group (hazard ratio, 0.23; 95% confidence interval [CI], 0.09 to 0.62; P=0.002). The incidence of new brain infarctions was significantly lower in the PFO closure group than in the antiplatelet-only group (22 patients [5.7%] vs. 20 patients [11.3%]; relative risk, 0.51; 95% CI, 0.29 to 0.91; P=0.04), but the incidence of silent brain infarction did not differ significantly between the study groups (P=0.97). Serious adverse events occurred in 23.1% of the patients in the PFO closure group and in 27.8% of the patients in the antiplatelet-only group (P=0.22). Serious device-related adverse events occurred in 6 patients (1.4%) in the PFO closure group, and atrial fibrillation occurred in 29 patients (6.6%) after PFO closure.\n\n\nCONCLUSIONS\nAmong patients with a PFO who had had a cryptogenic stroke, the risk of subsequent ischemic stroke was lower among those assigned to PFO closure combined with antiplatelet therapy than among those assigned to antiplatelet therapy alone; however, PFO closure was associated with higher rates of device complications and atrial fibrillation. (Funded by W.L. Gore and Associates; Gore REDUCE ClinicalTrials.gov number, NCT00738894 .).",
"title": ""
},
{
"docid": "162f080444935117c5125ae8b7c3d51e",
"text": "The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure. In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter’s loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.1",
"title": ""
},
{
"docid": "c84d41e54b12cca847135dfc2e9e13f8",
"text": "PURPOSE\nBaseline restraint prevalence for surgical step-down unit was 5.08%, and for surgical intensive care unit, it was 25.93%, greater than the National Database of Nursing Quality Indicators (NDNQI) mean. Project goal was sustained restraint reduction below the NDNQI mean and maintaining patient safety.\n\n\nBACKGROUND/RATIONALE\nSoft wrist restraints are utilized for falls reduction and preventing device removal but are not universally effective and may put patients at risk of injury. Decreasing use of restrictive devices enhances patient safety and decreases risk of injury.\n\n\nDESCRIPTION\nPhase 1 consisted of advanced practice nurse-facilitated restraint rounds on each restrained patient including multidisciplinary assessment and critical thinking with bedside clinicians including reevaluation for treatable causes of agitation and restraint indications. Phase 2 evaluated less restrictive mitts, padded belts, and elbow splint devices. Following a 4-month trial, phase 3 expanded the restraint initiative including critical care requiring education and collaboration among advanced practice nurses, physician team members, and nurse champions.\n\n\nEVALUATION AND OUTCOMES\nPhase 1 decreased surgical step-down unit restraint prevalence from 5.08% to 3.57%. Phase 2 decreased restraint prevalence from 3.57% to 1.67%, less than the NDNQI mean. Phase 3 expansion in surgical intensive care units resulted in wrist restraint prevalence from 18.19% to 7.12% within the first year, maintained less than the NDNQI benchmarks while preserving patient safety.\n\n\nINTERPRETATION/CONCLUSION\nThe initiative produced sustained reduction in acute/critical care well below the NDNQI mean without corresponding increase in patient medical device removal.\n\n\nIMPLICATIONS\nBy managing causes of agitation, need for restraints is decreased, protecting patients from injury and increasing patient satisfaction. Follow-up research may explore patient experiences with and without restrictive device use.",
"title": ""
}
] |
scidocsrr
|
582392b3533e5ee78a91edb8079783d1
|
Annotating and Automatically Tagging Constructions of Causal Language
|
[
{
"docid": "7161122eaa9c9766e9914ba0f2ee66ef",
"text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.",
"title": ""
},
{
"docid": "8093101949a96d27082712ce086bf11f",
"text": "Transition-based dependency parsers often need sequences of local shift and reduce operations to produce certain attachments. Correct individual decisions hence require global information about the sentence context and mistakes cause error propagation. This paper proposes a novel transition system, arc-swift, that enables direct attachments between tokens farther apart with a single transition. This allows the parser to leverage lexical information more directly in transition decisions. Hence, arc-swift can achieve significantly better performance with a very small beam size. Our parsers reduce error by 3.7–7.6% relative to those using existing transition systems on the Penn Treebank dependency parsing task and English Universal Dependencies.",
"title": ""
}
] |
[
{
"docid": "3339aada96140d392182281a6c819f93",
"text": "Biometric applications have been used globally in everyday life. However, conventional biometrics is created and optimized for high-security scenarios. Being used in daily life by ordinary untrained people is a new challenge. Facing this challenge, designing a biometric system with prior constraints of ergonomics, we propose ergonomic biometrics design model, which attains the physiological factors, the psychological factors, and the conventional security characteristics. With this model, a novel hand-based biometric system, door knob hand recognition system (DKHRS), is proposed. DKHRS has the identical appearance of a conventional door knob, which is an optimum solution in both physiological factors and psychological factors. In this system, a hand image is captured by door knob imaging scheme, which is a tailored omnivision imaging structure and is optimized for this predetermined door knob appearance. Then features are extracted by local Gabor binary pattern histogram sequence method and classified by projective dictionary pair learning. In the experiment on a large data set including 12 000 images from 200 people, the proposed system achieves competitive recognition performance comparing with conventional biometrics like face and fingerprint recognition systems, with an equal error rate of 0.091%. This paper shows that a biometric system could be built with a reliable recognition performance under the ergonomic constraints.",
"title": ""
},
{
"docid": "866abb0de36960fba889282d67ce9dbd",
"text": "We present our experience with the use of local fasciocutaneous V-Y advancement flaps in the reconstruction of 10 axillae in 6 patients for large defects following wide excision of long-standing Hidradenitis suppurativa of the axilla. The defects were closed with local V-Y subcutaneous island flaps. A single flap from the chest wall was sufficient for moderate defects. However, for larger defects, an additional flap was taken from the medial side of the ipsilateral arm. The donor defects could be closed primarily in all the patients. The local areas of the lateral chest wall and the medial side of the arm have a plentiful supply of cutaneous perforators and the flaps can be designed in a V-Y fashion without resorting to preoperative marking of the perforator. The flaps were freed sufficiently to allow adequate movement for closure of the defects. Although no attempt was made to identify the perforators specifically, many perforators were seen entering the flap. Some perforators can be safely divided to increase reach of the flap. All the flaps survived completely. A follow up of 2.5 years is presented.",
"title": ""
},
{
"docid": "6210d2da6100adbd4db89a983d00419f",
"text": "Many binary code encoding schemes based on hashing have been actively studied recently, since they can provide efficient similarity search, especially nearest neighbor search, and compact data representations suitable for handling large scale image databases in many computer vision problems. Existing hashing techniques encode high-dimensional data points by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. Furthermore, we propose a new binary code distance function, spherical Hamming distance, that is tailored to our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve balanced partitioning of data points for each hash function and independence between hashing functions. Our extensive experiments show that our spherical hashing technique significantly outperforms six state-of-the-art hashing techniques based on hyperplanes across various image benchmarks of sizes ranging from one to 75 million of GIST descriptors. The performance gains are consistent and large, up to 100% improvements. The excellent results confirm the unique merits of the proposed idea in using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.",
"title": ""
},
{
"docid": "5e2fc7744cc438a77373bc7694fc03ac",
"text": "Anisotropic impedance surfaces have been demonstrated to be useful for a variety of applications ranging from antennas, to surface wave guiding, to control of scattering. To increase their anisotropy requires elongated unit cells which have reduced symmetry and thus are not easily arranged into arbitrary patterns. We discuss the limitations of existing patterning techniques, and explore options for generating anisotropic impedance surfaces with arbitrary spatial variation. We present an approach that allows a wide range of anisotropic impedance profiles, based on a point-shifting method combined with a Voronoi cell generation technique. This approach can be used to produce patterns which include highly elongated cells with varying orientation, and cells which can smoothly transition between square, rectangular, hexagonal, and other shapes with a wide range of aspect ratios. We demonstrate a practical implementation of this technique which allows us to define gaps between the cells to generate impedance surfaces, and we use it to implement a simple example of a structure which requires smoothly varying impedance, in the form of a planar Luneberg lens. Simulations of the lens are verified by measurements, validating our pattern generation technique.",
"title": ""
},
{
"docid": "5026e994507ce6858114d86238b042d4",
"text": "The scope of scientific computing continues to grow and now includes diverse application areas such as network analysis, combinatorialcomputing, and knowledge discovery, to name just a few. Large problems in these application areas require HPC resources, but they exhibit computation and communication patterns that are irregular, fine-grained, and non-local, making it difficult to apply traditional HPC approaches to achieve scalable solutions. In this paper we present Active Pebbles, a programming and execution model developed explicitly to enable the development of scalable software for these emerging application areas. Our approach relies on five main techniques--scalable addressing, active routing, message coalescing, message reduction, and termination detection--to separate algorithm expression from communication optimization. Using this approach, algorithms can be expressed in their natural forms, with their natural levels of granularity, while optimizations necessary for scalability can be applied automatically to match the characteristics of particular machines. We implement several example kernels using both Active Pebbles and existing programming models, evaluating both programmability and performance. Our experimental results demonstrate that the Active Pebbles model can succinctly and directly express irregular application kernels, while still achieving performance comparable to MPI-based implementations that are significantly more complex.",
"title": ""
},
{
"docid": "4e0e6ca2f4e145c17743c42944da4cc8",
"text": "We demonstrate that, by using a recently proposed leveled homomorphic encryption scheme, it is possible to delegate the execution of a machine learning algorithm to a computing service while retaining confidentiality of the training and test data. Since the computational complexity of the homomorphic encryption scheme depends primarily on the number of levels of multiplications to be carried out on the encrypted data, we define a new class of machine learning algorithms in which the algorithm’s predictions, viewed as functions of the input data, can be expressed as polynomials of bounded degree. We propose confidential algorithms for binary classification based on polynomial approximations to least-squares solutions obtained by a small number of gradient descent steps. We present experimental validation of the confidential machine learning pipeline and discuss the trade-offs regarding computational complexity, prediction accuracy and cryptographic security.",
"title": ""
},
{
"docid": "5f4330e3ddd6339cf340a72c73d2106b",
"text": "As a new trend for data-intensive computing, real-time stream computing is gaining significant attention in the big data era. In theory, stream computing is an effective way to support big data by providing extremely low-latency processing tools and massively parallel processing architectures in real-time data analysis. However, in most existing stream computing environments, how to efficiently deal with big data stream computing, and how to build efficient big data stream computing systems are posing great challenges to big data computing research. First, the data stream graphs and the system architecture for big data stream computing, and some related key technologies, such as system structure, data transmission, application interfaces, and high availability, are systemically researched. Then, we give a classification of the latest research and depict the development status of some popular big data stream computing systems, including Twitter Storm, Yahoo! S4, Microsoft TimeStream, and Microsoft Naiad. Finally, the potential challenges and future directions of big data stream computing are discussed. 11.",
"title": ""
},
{
"docid": "c496424323fa958e09bbe0f6504f842d",
"text": "In this research a new hybrid prediction algorithm for breast cancer has been made from a breast cancer data set. Many approaches are available in diagnosing the medical diseases like genetic algorithm, ant colony optimization, particle swarm optimization, cuckoo search algorithm, etc., The proposed algorithm uses a ReliefF attribute reduction with entropy based genetic algorithm for breast cancer detection. The hybrid combination of these techniques is used to handle the dataset with high dimension and uncertainties. The data are obtained from the Wisconsin breast cancer dataset; these data have been categorized based on different properties. The performance of the proposed method is evaluated and the results are compared with other well known feature selection methods. The obtained result shows that the proposed method has a remarkable ability to generate reduced-size subset of salient features while yielding significant classification accuracy for large datasets.",
"title": ""
},
{
"docid": "7cce3ad08afe6c35046da014d82fc1ef",
"text": "The developmental histories of 32 players in the Australian Football League (AFL), independently classified as either expert or less skilled in their perceptual and decision-making skills, were collected through a structured interview process and their year-on-year involvement in structured and deliberate play activities retrospectively determined. Despite being drawn from the same elite level of competition, the expert decision-makers differed from the less skilled in having accrued, during their developing years, more hours of experience in structured activities of all types, in structured activities in invasion-type sports, in invasion-type deliberate play, and in invasion activities from sports other than Australian football. Accumulated hours invested in invasion-type activities differentiated between the groups, suggesting that it is the amount of invasion-type activity that is experienced and not necessarily intent (skill development or fun) or specificity that facilitates the development of perceptual and decision-making expertise in this team sport.",
"title": ""
},
{
"docid": "9f01b1e2bbc2d2b940c04f07b05bf5bb",
"text": "Inferior parietal lobule (IPL) neurons were studied when monkeys performed motor acts embedded in different actions and when they observed similar acts done by an experimenter. Most motor IPL neurons coding a specific act (e.g., grasping) showed markedly different activations when this act was part of different actions (e.g., for eating or for placing). Many motor IPL neurons also discharged during the observation of acts done by others. Most responded differentially when the same observed act was embedded in a specific action. These neurons fired during the observation of an act, before the beginning of the subsequent acts specifying the action. Thus, these neurons not only code the observed motor act but also allow the observer to understand the agent's intentions.",
"title": ""
},
{
"docid": "92abe28875dbe72fbc16bdf41b324126",
"text": "We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Further, even if given a model, one still has to decide where to grasp the object. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Given two (or more) images of an object, our algorithm attempts to identify a few points in each image corresponding to good locations at which to grasp the object. This sparse set of points is then triangulated to obtain a 3-d location at which to attempt a grasp. This is in contrast to standard dense stereo, which tries to triangulate every single point in an image (and often fails to return a good 3-d model). Our algorithm for identifying grasp locations from an image is trained via supervised learning, using synthetic images for the training set. We demonstrate this approach on two robotic manipulation platforms. Our algorithm successfully grasps a wide variety of objects, such as plates, tape-rolls, jugs, cellphones, keys, screwdrivers, staplers, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set. We also apply our method to the task of unloading items from dishwashers. 1",
"title": ""
},
{
"docid": "f1cbd60e1bd721e185bbbd12c133ad91",
"text": "Defect prediction models are a well-known technique for identifying defect-prone files or packages such that practitioners can allocate their quality assurance efforts (e.g., testing and code reviews). However, once the critical files or packages have been identified, developers still need to spend considerable time drilling down to the functions or even code snippets that should be reviewed or tested. This makes the approach too time consuming and impractical for large software systems. Instead, we consider defect prediction models that focus on identifying defect-prone (“risky”) software changes instead of files or packages. We refer to this type of quality assurance activity as “Just-In-Time Quality Assurance,” because developers can review and test these risky changes while they are still fresh in their minds (i.e., at check-in time). To build a change risk model, we use a wide range of factors based on the characteristics of a software change, such as the number of added lines, and developer experience. A large-scale study of six open source and five commercial projects from multiple domains shows that our models can predict whether or not a change will lead to a defect with an average accuracy of 68 percent and an average recall of 64 percent. Furthermore, when considering the effort needed to review changes, we find that using only 20 percent of the effort it would take to inspect all changes, we can identify 35 percent of all defect-inducing changes. Our findings indicate that “Just-In-Time Quality Assurance” may provide an effort-reducing way to focus on the most risky changes and thus reduce the costs of developing high-quality software.",
"title": ""
},
{
"docid": "9c2ce030230ccd91fdbfbd9544596604",
"text": "The kind of causal inference seen in natural human thought can be \"algorithmitized\" to help produce human-level machine intelligence.",
"title": ""
},
{
"docid": "ff664eac9ffb8cae9b4db1bc09629935",
"text": "In this paper, we apply sentiment analysis and machine learning principles to find the correlation between ”public sentiment” and ”market sentiment”. We use twitter data to predict public mood and use the predicted mood and previous days’ DJIA values to predict the stock market movements. In order to test our results, we propose a new cross validation method for financial data and obtain 75.56% accuracy using Self Organizing Fuzzy Neural Networks (SOFNN) on the Twitter feeds and DJIA values from the period June 2009 to December 2009. We also implement a naive protfolio management strategy based on our predicted values. Our work is based on Bollen et al’s famous paper which predicted the same with 87% accuracy.",
"title": ""
},
{
"docid": "f7ff118b8f39fa0843c4861306b4910f",
"text": "This article proposes a novel character-aware neural machine translation (NMT) model that views the input sequences as sequences of characters rather than words. On the use of row convolution (Amodei et al., 2015), the encoder of the proposed model composes word-level information from the input sequences of characters automatically. Since our model doesn’t rely on the boundaries between each word (as the whitespace boundaries in English), it is also applied to languages without explicit word segmentations (like Chinese). Experimental results on Chinese-English translation tasks show that the proposed character-aware NMT model can achieve comparable translation performance with the traditional word based NMT models. Despite the target side is still word based, the proposed model is able to generate much less unknown words.",
"title": ""
},
{
"docid": "0c4f09c41c35690de71f106403d14223",
"text": "This paper views Islamist radicals as self-interested political revolutionaries and builds on a general model of political extremism developed in a previous paper (Ferrero, 2002), where extremism is modelled as a production factor whose effect on expected revenue is initially positive and then turns negative, and whose level is optimally chosen by a revolutionary organization. The organization is bound by a free-access constraint and hence uses the degree of extremism as a means of indirectly controlling its level of membership with the aim of maximizing expected per capita income of its members, like a producer co-operative. The gist of the argument is that radicalization may be an optimal reaction to perceived failure (a widespread perception in the Muslim world) when political activists are, at the margin, relatively strongly averse to effort but not so averse to extremism, a configuration that is at odds with secular, Western-style revolutionary politics but seems to capture well the essence of Islamic revolutionary politics, embedded as it is in a doctrinal framework.",
"title": ""
},
{
"docid": "1a834cb0c5d72c6bc58c4898d318cfc2",
"text": "This paper proposes a novel single-stage high-power-factor ac/dc converter with symmetrical topology. The circuit topology is derived from the integration of two buck-boost power-factor-correction (PFC) converters and a full-bridge series resonant dc/dc converter. The switch-utilization factor is improved by using two active switches to serve in the PFC circuits. A high power factor at the input line is assured by operating the buck-boost converters at discontinuous conduction mode. With symmetrical operation and elaborately designed circuit parameters, zero-voltage switching on all the active power switches of the converter can be retained to achieve high circuit efficiency. The operation modes, design equations, and design steps for the circuit parameters are proposed. A prototype circuit designed for a 200-W dc output was built and tested to verify the analytical predictions. Satisfactory performances are obtained from the experimental results.",
"title": ""
},
{
"docid": "f0ced128e23c4f17abc635f88178a6c1",
"text": "This paper explores liquidity risk in a system of interconnected financial institutions when these institutions are subject to regulatory solvency constraints. When the market’s demand for illiquid assets is less than perfectly elastic, sales by distressed institutions depress the market price of such assets. Marking to market of the asset book can induce a further round of endogenously generated sales of assets, depressing prices further and inducing further sales. Contagious failures can result from small shocks. We investigate the theoretical basis for contagious failures and quantify them through simulation exercises. Liquidity requirements on institutions can be as effective as capital requirements in forestalling contagious failures. ∗First version. We thank Andy Haldane and Vicky Saporta for their comments during the preparation of this paper. The opinions expressed in this paper are those of the authors, and do not necessarily reflect those of the Central Bank of Chile, or the Bank of England. Please direct any correspondence to Hyun Song Shin, [email protected].",
"title": ""
},
{
"docid": "923363771ee11cc5b06917385f5832c0",
"text": "This article presents a novel automatic method (AutoSummENG) for the evaluation of summarization systems, based on comparing the character n-gram graphs representation of the extracted summaries and a number of model summaries. The presented approach is language neutral, due to its statistical nature, and appears to hold a level of evaluation performance that matches and even exceeds other contemporary evaluation methods. Within this study, we measure the effectiveness of different representation methods, namely, word and character n-gram graph and histogram, different n-gram neighborhood indication methods as well as different comparison methods between the supplied representations. A theory for the a priori determination of the methods' parameters along with supporting experiments concludes the study to provide a complete alternative to existing methods concerning the automatic summary system evaluation process.",
"title": ""
},
{
"docid": "91b386ef617f75dd480e44708eb5a521",
"text": "The recent rise of interest in Virtual Reality (VR) came with the availability of commodity commercial VR products, such as the Head Mounted Displays (HMD) created by Oculus and other vendors. To accelerate the user adoption of VR headsets, content providers should focus on producing high quality immersive content for these devices. Similarly, multimedia streaming service providers should enable the means to stream 360 VR content on their platforms. In this study, we try to cover different aspects related to VR content representation, streaming, and quality assessment that will help establishing the basic knowledge of how to build a VR streaming system.",
"title": ""
}
] |
scidocsrr
|
db2aeb17fd88294ddef7a72cc4f1a260
|
Data Management Challenges in Production Machine Learning
|
[
{
"docid": "79593cc56da377d834f33528b833641f",
"text": "Machine learning offers a fantastically powerful toolkit f or building complex systems quickly. This paper argues that it is dangerous to think of these quick wins as coming for free. Using the framework of technical debt , we note that it is remarkably easy to incur massive ongoing maintenance costs at the system level when applying machine learning. The goal of this paper is hig hlight several machine learning specific risk factors and design patterns to b e avoided or refactored where possible. These include boundary erosion, entanglem ent, hidden feedback loops, undeclared consumers, data dependencies, changes i n the external world, and a variety of system-level anti-patterns. 1 Machine Learning and Complex Systems Real world software engineers are often faced with the chall enge of moving quickly to ship new products or services, which can lead to a dilemma between spe ed of execution and quality of engineering. The concept of technical debtwas first introduced by Ward Cunningham in 1992 as a way to help quantify the cost of such decisions. Like incurri ng fiscal debt, there are often sound strategic reasons to take on technical debt. Not all debt is n ecessarily bad, but technical debt does tend to compound. Deferring the work to pay it off results in i ncreasing costs, system brittleness, and reduced rates of innovation. Traditional methods of paying off technical debt include re factoring, increasing coverage of unit tests, deleting dead code, reducing dependencies, tighten ng APIs, and improving documentation [4]. The goal of these activities is not to add new functionality, but to make it easier to add future improvements, be cheaper to maintain, and reduce the likeli hood of bugs. One of the basic arguments in this paper is that machine learn ing packages have all the basic code complexity issues as normal code, but also have a larger syst em-level complexity that can create hidden debt. Thus, refactoring these libraries, adding bet ter unit tests, and associated activity is time well spent but does not necessarily address debt at a systems level. In this paper, we focus on the system-level interaction betw e n machine learning code and larger systems as an area where hidden technical debt may rapidly accum ulate. At a system-level, a machine learning model may subtly erode abstraction boundaries. It may be tempting to re-use input signals in ways that create unintended tight coupling of otherw ise disjoint systems. Machine learning packages may often be treated as black boxes, resulting in la rge masses of “glue code” or calibration layers that can lock in assumptions. Changes in the exte rnal world may make models or input signals change behavior in unintended ways, ratcheting up m aintenance cost and the burden of any debt. Even monitoring that the system as a whole is operating s intended may be difficult without careful design.",
"title": ""
},
{
"docid": "2c4e44539ad3f6bd944eb01376ac34bf",
"text": "Closer integration of machine learning (ML) with data processing is a booming area in both the data management industry and academia. Almost all ML toolkits assume that the input is a single table, but many datasets are not stored as single tables due to normalization. Thus, analysts often perform key-foreign key joins to obtain features from all base tables and apply a feature selection method, either explicitly or implicitly, with the aim of improving accuracy. In this work, we show that the features brought in by such joins can often be ignored without affecting ML accuracy significantly, i.e., we can \"avoid joins safely.\" We identify the core technical issue that could cause accuracy to decrease in some cases and analyze this issue theoretically. Using simulations, we validate our analysis and measure the effects of various properties of normalized data on accuracy. We apply our analysis to design easy-to-understand decision rules to predict when it is safe to avoid joins in order to help analysts exploit this runtime-accuracy trade-off. Experiments with multiple real normalized datasets show that our rules are able to accurately predict when joins can be avoided safely, and in some cases, this led to significant reductions in the runtime of some popular feature selection methods.",
"title": ""
},
{
"docid": "9e35b35e679b7344c568c0edbad67a62",
"text": "Ground is an open-source data context service, a system to manage all the information that informs the use of data. Data usage has changed both philosophically and practically in the last decade, creating an opportunity for new data context services to foster further innovation. In this paper we frame the challenges of managing data context with basic ABCs: Applications, Behavior, and Change. We provide motivation and design guidelines, present our initial design of a common metamodel and API, and explore the current state of the storage solutions that could serve the needs of a data context service. Along the way we highlight opportunities for new research and engineering solutions. 1. FROM CRISIS TO OPPORTUNITY Traditional database management systems were developed in an era of risk-averse design. The technology itself was expensive, as was the on-site cost of managing it. Expertise was scarce and concentrated in a handful of computing and consulting firms. Two conservative design patterns emerged that lasted many decades. First, the accepted best practices for deploying databases revolved around tight control of schemas and data ingest in support of general-purpose accounting and compliance use cases. Typical advice from data warehousing leaders held that “There is no point in bringing data . . . into the data warehouse environment without integrating it” [15]. Second, the data management systems designed for these users were often built by a single vendor and deployed as a monolithic stack. A traditional DBMS included a consistent storage engine, a dataflow engine, a language compiler and optimizer, a runtime scheduler, a metadata catalog, and facilities for data ingest and queueing—all designed to work closely together. As computing and data have become orders of magnitude more efficient, changes have emerged for both of these patterns. Usage is changing profoundly, as expertise and control shifts from the central accountancy of an IT department to the domain expertise of “business units” tasked with extracting value from data [12]. The changes in economics and usage brought on the “three Vs” of Big Data: Volume, Velocity and Variety. Resulting best practices focus on open-ended schema-on-use data “lakes” and agile development, This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2017. CIDR ’17 January 8-11, 2017, Chaminade, CA, USA in support of exploratory analytics and innovative application intelligence [26]. Second, while many pieces of systems software that have emerged in this space are familiar, the overriding architecture is profoundly different. In today’s leading open source data management stacks, nearly all of the components of a traditional DBMS are explicitly independent and interchangeable. This architectural decoupling is a critical and under-appreciated aspect of the Big Data movement, enabling more rapid innovation and specialization. 1.1 Crisis: Big Metadata An unfortunate consequence of the disaggregated nature of contemporary data systems is the lack of a standard mechanism to assemble a collective understanding of the origin, scope, and usage of the data they manage. 
In the absence of a better solution to this pressing need, the Hive Metastore is sometimes used, but it only serves simple relational schemas—a dead end for representing a Variety of data. As a result, data lake projects typically lack even the most rudimentary information about the data they contain or how it is being used. For emerging Big Data customers and vendors, this Big Metadata problem is hitting a crisis point. Two significant classes of end-user problems follow directly from the absence of shared metadata services. The first is poor productivity. Analysts are often unable to discover what data exists, much less how it has been previously used by peers. Valuable data is left unused and human effort is routinely duplicated—particularly in a schema-on-use world with raw data that requires preparation. “Tribal knowledge” is a common description for how organizations manage this productivity problem. This is clearly not a systematic solution, and scales very poorly as organizations grow. The second problem stemming from the absence of a system to track metadata is governance risk. Data management necessarily entails tracking or controlling who accesses data, what they do with it, where they put it, and how it gets consumed downstream. In the absence of a standard place to store metadata and answer these questions, it is impossible to enforce policies and/or audit behavior. As a result, many administrators marginalize their Big Data stack as a playpen for non-critical data, and thereby inhibit both the adoption and the potential of new technologies. In our experiences deploying and managing systems in production, we have seen the need for a common service layer to support the capture, publishing and sharing of metadata information in a flexible way. The effort in this paper began by addressing that need. 1.2 Opportunity: Data Context The lack of metadata services in the Big Data stack can be viewed as an opportunity: a clean slate to rethink how we track and leverage modern usage of data. Storage economics and schema-on-use agility suggest that the Data Lake movement could go much farther than Data Warehousing in enabling diverse, widely-used central repositories of data that can adapt to new data formats and rapidly changing organizations. In that spirit, we advocate rethinking traditional metadata in a far more comprehensive sense. More generally, what we should strive to capture is the full context of data. To emphasize the conceptual shifts of this data context, and as a complement to the “three Vs” of Big Data, we introduce three key sources of information—the ABCs of Data Context. Each represents a major change from the simple metadata of traditional enterprise data management. Applications: Application context is the core information that describes how raw bits get interpreted for use. In modern agile scenarios, application context is often relativistic (many schemas for the same data) and complex (with custom code for data interpretation). Application context ranges from basic data descriptions (encodings, schemas, ontologies, tags), to statistical models and parameters, to user annotations. All of the artifacts involved—wrangling scripts, view definitions, model parameters, training sets, etc.—are critical aspects of application context. Behavior: This is information about how data was created and used over time. 
In decoupled systems, behavioral context spans multiple services, applications and formats and often originates from highvolume sources (e.g., machine-generated usage logs). Not only must we track upstream lineage— the data sets and code that led to the creation of a data object—we must also track the downstream lineage, including data products derived from this data object. Aside from data lineage, behavioral context includes logs of usage: the “digital exhaust” left behind by computations on the data. As a result, behavioral context metadata can often be larger than the data itself. Change: This is information about the version history of data, code and associated information, including changes over time to both structure and content. Traditional metadata focused on the present, but historical context is increasingly useful in agile organizations. This context can be a linear sequence of versions, or it can encompass branching and concurrent evolution, along with interactions between co-evolving versions. By tracking the version history of all objects spanning code, data, and entire analytics pipelines, we can simplify debugging and enable auditing and counterfactual analysis. Data context services represent an opportunity for database technology innovation, and an urgent requirement for the field. We are building an open-source data context service we call Ground, to serve as a central model, API and repository for capturing the broad context in which data gets used. Our goal is to address practical problems for the Big Data community in the short term and to open up opportunities for long-term research and innovation. In the remainder of the paper we illustrate the opportunities in this space, design requirements for solutions, and our initial efforts to tackle these challenges in open source. 2. DIVERSE USE CASES To illustrate the potential of the Ground data context service, we describe two concrete scenarios in which Ground can aid in data discovery, facilitate better collaboration, protect confidentiality, help diagnose problems, and ultimately enable new value to be captured from existing data. After presenting these scenarios, we explore the design requirements for a data context service. 2.1 Scenario: Context-Enabled Analytics This scenario represents the kind of usage we see in relatively technical organizations making aggressive use of data for machinelearning driven applications like customer targeting. In these organizations, data analysts make extensive use of flexible tools for data preparation and visualization and often have some SQL skills, while data scientists actively prototype and develop custom software for machine learning applications. Janet is an analyst in the Customer Satisfaction department at a large bank. She suspects that the social network behavior of customers can predict if they are likely to close their accounts (customer churn). Janet has access to a rich context-service-enabled data lake and a wide range of tools that she can use to assess her hypothesis. Janet begins by downloading a free sample of a social media feed. She uses an advanced data catalog application (we’ll call it “Catly”) which connects to Ground, recognizes the co",
"title": ""
}
] |
[
{
"docid": "0036a3053f872277f567ec7e5b94385d",
"text": "The sociotechnical paradigm legitimates our discipline and serves as core identity of IS. In this study, we want to focus on IS-induced human behavior by introducing a process model for nudging in IS. In behavioral economics, the concept of nudging has been proposed, which makes use of human cognitive processes and can direct people to an intended behavior. In computer science, the concept of persuasion has evolved with similar goals. Both concepts, nudging and persuasion, can contribute to IS research and may help to explain and steer user behavior in information systems. We aim for an integration of both concepts into one digital nudging process model, making it usable and accessible. We analyzed literature on nudging and persuasion and derived different steps, requirements, and nudging elements. The developed process model aims at enabling researchers and practitioners to design nudges in e.g. software systems but may also contribute to other areas like IT governance. Though the evaluation part of our study has not yet been completed, we present the current state of the process model enabling more research in this area.",
"title": ""
},
{
"docid": "2bc481a072f59d244eee80bdcc6eafb4",
"text": "This paper presents a soft switching DC/DC converter for high voltage application. The interleaved pulse-width modulation (PWM) scheme is used to reduce the ripple current at the output capacitor and the size of output inductors. Two converter cells are connected in series at the high voltage side to reduce the voltage stresses of the active switches. Thus, the voltage stress of each switch is clamped at one half of the input voltage. On the other hand, the output sides of two converter cells are connected in parallel to achieve the load current sharing and reduce the current stress of output inductors. In each converter cell, a half-bridge converter with the asymmetrical PWM scheme is adopted to control power switches and to regulate the output voltage at a desired voltage level. Based on the resonant behavior by the output capacitance of power switches and the transformer leakage inductance, active switches can be turned on at zero voltage switching (ZVS) during the transition interval. Thus, the switching losses of power MOSFETs are reduced. The current doubler rectifier is used at the secondary side to partially cancel ripple current. Therefore, the root-mean-square (rms) current at output capacitor is reduced. The proposed converter can be applied for high input voltage applications such as a three-phase 380V utility system. Finally, experiments based on a laboratory prototype with 960W (24V/40A) rated power are provided to demonstrate the performance of proposed converter.",
"title": ""
},
{
"docid": "2a77d3750d35fd9fec52514739303812",
"text": "We present a framework for analyzing and computing motion plans for a robot that operates in an environment that both varies over time and is not completely predictable. We rst classify sources of uncertainty in motion planning into four categories, and argue that the problems addressed in this paper belong to a fundamental category that has received little attention. We treat the changing environment in a exible manner by combining traditional connguration space concepts with a Markov process that models the environment. For this context, we then propose the use of a motion strategy, which provides a motion command for the robot for each contingency that it could be confronted with. We allow the speciication of a desired performance criterion, such as time or distance, and determine a motion strategy that is optimal with respect to that criterion. We demonstrate the breadth of our framework by applying it to a variety of motion planning problems. Examples are computed for problems that involve a changing conng-uration space, hazardous regions and shelters, and processing of random service requests. To achieve this, we have exploited the powerful principle of optimality, which leads to a dynamic programming-based algorithm for determining optimal strategies. In addition, we present several extensions to the basic framework that incorporate additional concerns, such as sensing issues or changes in the geometry of the robot.",
"title": ""
},
{
"docid": "e1d8ec65a2917792c186cbc125a99368",
"text": "In recent years, artificial intelligence has made a significant breakthrough and progress in the field of humanmachine conversation. However, how to generate high-quality, emotional and subhuman conversation still a troublesome work. The key factor of man-machine dialogue is whether the chatbot can give a good response in content and emotional level. How to ensure that the robot understands the user’s emotions, and consider the user’s emotions then give a satisfactory response. In this paper, we add the emotional tags to the post and response from the dataset respectively. The emotional tags, as the emotional tags of post and response, represent the emotions expressed by this sentence. The purpose of our emotional tags is to make the chatbot understood the emotion of the input sequence more directly so that it has a recognition of the emotional dimension. In this paper, we apply the mechanism of GAN network on our conversation model. For the generator: We make full use of Encoder-Decoder structure form a seq2seq model, which is used to generate a sentence’s response. For the discriminator: distinguish between the human-generated dialogues and the machine-generated ones.The outputs from the discriminator are used as rewards for the generative model, pushing the system to generate dialogues that mostly resemble human dialogues. We cast our task as an RL(Reinforcement Learning) problem, using a policy gradient method to reward more subhuman conversational sequences, and in addition we have added an emotion tags to represent the response we want to get, which we will use as a rewarding part of it, so that the emotions of real responses can be closer to the emotions we specify. Our experiment shows that through the introduction of emotional intelligence, our model can generate responses appropriate not only in content but also in emotion, which can be used to control and adjust users emotion. Compared with our previous work, we get a better performance on the same data set, and we get less ’’safe’’ response than before, but there will be a certain degree of existence.",
"title": ""
},
{
"docid": "35c18e570a6ab44090c1997e7fe9f1b4",
"text": "Online information maintenance through cloud applications allows users to store, manage, control and share their information with other users as well as Cloud service providers. There have been serious privacy concerns about outsourcing user information to cloud servers. But also due to an increasing number of cloud data security incidents happened in recent years. Proposed system is a privacy-preserving system using Attribute based Multifactor Authentication. Proposed system provides privacy to users data with efficient authentication and store them on cloud servers such that servers do not have access to sensitive user information. Meanwhile users can maintain full control over access to their uploaded ?les and data, by assigning ?ne-grained, attribute-based access privileges to selected files and data, while di?erent users can have access to di?erent parts of the System. This application allows clients to set privileges to different users to access their data.",
"title": ""
},
{
"docid": "8e41777b3af68a6ffc58dee361b07221",
"text": "An a-IGZO thin-film phototransistor incorporating graphene absorption layer was proposed to enhance the responsivity and sensitivity simultaneously for photodetection from ultraviolet to visible regime. The spin-coated graphene dots absorb incident light, transferring electrons to the underlying a-IGZO to establish a photochannel. The 5 A/W responsivity and 1000 photo-to-dark current ratio were achieved for graphene phototransistor at 500 nm. As compared with <;1% absorption, the graphene phototransistor indicates a >2700 transistor gain. The highest responsivity and photo-to-dark current ratio is 897 A/W and 106, respectively, under 340-nm light illumination.",
"title": ""
},
{
"docid": "71da7722f6ce892261134bd60ca93ab7",
"text": "Semantically annotated data, using markup languages like RDFa and Microdata, has become more and more publicly available in the Web, especially in the area of e-commerce. Thus, a large amount of structured product descriptions are freely available and can be used for various applications, such as product search or recommendation. However, little efforts have been made to analyze the categories of the available product descriptions. Although some products have an explicit category assigned, the categorization schemes vary a lot, as the products originate from thousands of different sites. This heterogeneity makes the use of supervised methods, which have been proposed by most previous works, hard to apply. Therefore, in this paper, we explain how distantly supervised approaches can be used to exploit the heterogeneous category information in order to map the products to set of target categories from an existing product catalogue. Our results show that, even though this task is by far not trivial, we can reach almost 56% accuracy for classifying products into 37 categories.",
"title": ""
},
{
"docid": "72fa771855a178d8901d29c72acf5300",
"text": "Aspect extraction identifies relevant features of an entity from a textual description and is typically targeted to product reviews, and other types of short text, as an enabling task for, e.g., opinion mining and information retrieval. Current aspect extraction methods mostly focus on aspect terms, often neglecting associated modifiers or embedding them in the aspect terms without proper distinction. Moreover, flat syntactic structures are often assumed, resulting in inaccurate extractions of complex aspects. This paper studies the problem of structured aspect extraction, a variant of traditional aspect extraction aiming at a fine-grained extraction of complex (i.e., hierarchical) aspects. We propose an unsupervised and scalable method for structured aspect extraction consisting of statistical noun phrase clustering, cPMI-based noun phrase segmentation, and hierarchical pattern induction. Our evaluation shows a substantial improvement over existing methods in terms of both quality and computational efficiency.",
"title": ""
},
{
"docid": "f6cb3ee09942c03bd0f89520a76cac39",
"text": "This paper proposes a high-performance transformerless single-stage high step-up ac-dc matrix converter based on Cockcroft-Walton (CW) voltage multiplier. Deploying a four-bidirectional-switch matrix converter between the ac source and CW circuit, the proposed converter provides high quality of line conditions, adjustable output voltage, and low output ripple. The matrix converter is operated with two independent frequencies. One of which is associated with power factor correction (PFC) control, and the other is used to set the output frequency of the matrix converter. Moreover, the relationship among the latter frequency, line frequency, and output ripple will be discussed. This paper adopts one-cycle control method to achieve PFC, and a commercial control IC associating with a preprogrammed complex programmable logic device is built as the system controller. The operation principle, control strategy, and design considerations of the proposed converter are all detailed in this paper. A 1.2-kV/500-W laboratory prototype of the proposed converter is built for test, measurement, and evaluation. At full-load condition, the measured power factor, the system efficiency, and the output ripple factor are 99.9%, 90.3%, and 0.3%, respectively. The experimental results demonstrate the high performance of the proposed converter and the validity for high step-up ac-dc applications.",
"title": ""
},
{
"docid": "031562142f7a2ffc64156f9d09865604",
"text": "The demand for video content is continuously increasing as video sharing on the Internet is becoming enormously popular recently. This demand, with its high bandwidth requirements, has a considerable impact on the load of the network infrastructure. As more users access videos from their mobile devices, the load on the current wireless infrastructure (which has limited capacity) will be even more significant. Based on observations from many local video sharing scenarios, in this paper, we study the tradeoffs of using Wi-Fi ad-hoc mode versus infrastructure mode for video streaming between adjacent devices. We thus show the potential of direct device-to-device communication as a way to reduce the load on the wireless infrastructure and to improve user experiences. Setting up experiments for WiFi devices connected in ad-hoc mode, we collect measurements for various video streaming scenarios and compare them to the case where the devices are connected through access points. The results show the improvements in latency, jitter and loss rate. More importantly, the results show that the performance in direct device-to-device streaming is much more stable in contrast to the access point case, where different factors affect the performance causing widely unpredictable qualities.",
"title": ""
},
{
"docid": "c89b903e497ebe8e8d89e8d1d931fae1",
"text": "Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all advantages cited for artificial neural networks, their performance for some real time series is not satisfactory. Improving forecasting especially time series forecasting accuracy is an important yet often difficult task facing forecasters. Both theoretical and empirical findings have indicated that integration of different models can be an effective way of improving upon their predictive performance, especially when the models in the ensemble are quite different. In this paper, a novel hybrid model of artificial neural networks is proposed using auto-regressive integrated moving average (ARIMA) models in order to yield a more accurate forecasting model than artificial neural networks. The empirical results with three well-known real data sets indicate that the proposed model can be an effective way to improve forecasting accuracy achieved by artificial neural networks. Therefore, it can be used as an appropriate alternative model for forecasting task, especially when higher forecasting accuracy is needed. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7e5e3309063053636f8cdcb64c70f413",
"text": "The variability in the service level agreements (SLAs) of cloud providers prompted us to ask the question how do the SLAs compare and how should the SLAs be defined for future cloud services. We break down a cloud SLA into easy to understand components and use it to compare SLAs of public cloud providers. Our study indicates that none of the surveyed cloud providers offer any performance guarantees for compute services and leave SLA violation detection to the customer. We then provide guidance on how SLAs should be defined for future cloud services.",
"title": ""
},
{
"docid": "d80a58ef393c1f311a829190d7981853",
"text": "With the increasing numbers of Cloud Service Providers and the migration of the Grids to the Cloud paradigm, it is necessary to be able to leverage these new resources. Moreover, a large class of High Performance Computing (hpc) applications can run these resources without (or with minor) modifications. But using these resources come with the cost of being able to interact with these new resource providers. In this paper we introduce the design of a hpc middleware that is able to use resources coming from an environment that compose of multiple Clouds as well as classical hpc resources. Using the Diet middleware, we are able to deploy a large-scale, distributed hpc platform that spans across a large pool of resources aggregated from different providers. Furthermore, we hide to the end users the difficulty and complexity of selecting and using these new resources even when new Cloud Service Providers are added to the pool. Finally, we validate the architecture concept through cosmological simulation ramses. Thus we give a comparison of 2 well-known Cloud Computing Software: OpenStack and OpenNebula. Key-words: Cloud, IaaS, OpenNebula, Multi-Clouds, DIET, OpenStack, RAMSES, cosmology ∗ ENS de Lyon, France, Email: [email protected] † ENSI de Bourges, France, Email: [email protected] ‡ INRIA, France, Email: [email protected] Comparison on OpenStack and OpenNebula performance to improve multi-Cloud architecture on cosmological simulation use case Résumé : Avec l’augmentation du nombre de fournisseurs de service Cloud et la migration des applications depuis les grilles de calcul vers le Cloud, il est ncessaire de pouvoir tirer parti de ces nouvelles ressources. De plus, une large classe des applications de calcul haute performance peuvent s’excuter sur ces ressources sans modifications (ou avec des modifications mineures). Mais utiliser ces ressources vient avec le cot d’tre capable d’intragir avec des nouveaux fournisseurs de ressources. Dans ce papier, nous introduisons la conception d’un nouveau intergiciel hpc qui permet d’utiliser les ressources qui proviennent d’un environement compos de plusieurs Clouds comme des ressources classiques. En utilisant l’intergiciel Diet, nous sommes capable de dployer une plateforme hpc distribue et large chelle qui s’tend sur un large ensemble de ressources aggrges entre plusieurs fournisseurs Cloud. De plus, nous cachons l’utilisateur final la difficult et la complexit de slectionner et d’utiliser ces nouvelles ressources quand un nouveau fournisseur de service Cloud est ajout dans l’ensemble. Finalement, nous validons notre concept d’architecture via une application de simulation cosmologique ramses. Et nous fournissons une comparaison entre 2 intergiciels de Cloud: OpenStack et OpenNebula. Mots-clés : Cloud, IaaS, OpenNebula, Multi-Clouds, DIET, OpenStack, RAMSES, cosmologie Comparaison de performance entre OpenStack et OpenNebula et les architectures multi-Cloud: Application la cosmologie.3",
"title": ""
},
{
"docid": "447b61671cf5e6762e56ab5561983842",
"text": "Biological phosphorous (P) and nitrogen (N) removal from municipal wastewater was studied using an innovative anoxic-aerobic-anaerobic side-stream treatment system. The impact of influent water quality including chemical oxygen demand (COD), ammonium and orthophosphate concentrations on the reactor performance was evaluated. The results showed the system was very effective at removing both COD (>88%) and NH4+-N (>96%) despite varying influent concentrations of COD, NH4+-N, and total PO43--P. In contrast, it was found that the removal of P was sensitive to influent NH4+-N and PO43--P concentrations. The maximum PO43--P removal of 79% was achieved with the lowest influent NH4+-N and PO43--P concentration. Quantitative PCR (qPCR) assays showed a high abundance and diversity of phosphate accumulating organisms (PAO), nitrifiers and denitrifiers. The MiSeq microbial community structure analysis showed that the Proteobacteria (especially β-Proteobacteria, and γ-Proteobacteria) were the dominant in all reactors. Further analysis of the bacteria indicated the presence of diverse PAO genera including Candidatus Accumulibacter phosphatis, Tetrasphaera, and Rhodocyclus, and the denitrifying PAO (DPAO) genus Dechloromonas. Interestingly, no glycogen accumulating organisms (GAOs) were detected in any of the reactors, suggesting the advantage of proposed process in term of PAO selection for enhanced P removal compared with conventional main-stream processes.",
"title": ""
},
{
"docid": "a5643b43ac72594500ac1232303946d1",
"text": "Biodegradable plastics are those that can be completely degraded in landfills, composters or sewage treatment plants by the action of naturally occurring micro-organisms. Truly biodegradable plastics leave no toxic, visible or distinguishable residues following degradation. Their biodegradability contrasts sharply with most petroleum-based plastics, which are essentially indestructible in a biological context. Because of the ubiquitous use of petroleum-based plastics, their persistence in the environment and their fossil-fuel derivation, alternatives to these traditional plastics are being explored. Issues surrounding waste management of traditional and biodegradable polymers are discussed in the context of reducing environmental pressures and carbon footprints. The main thrust of the present review addresses the development of plant-based biodegradable polymers. Plants naturally produce numerous polymers, including rubber, starch, cellulose and storage proteins, all of which have been exploited for biodegradable plastic production. Bacterial bioreactors fed with renewable resources from plants--so-called 'white biotechnology'--have also been successful in producing biodegradable polymers. In addition to these methods of exploiting plant materials for biodegradable polymer production, the present review also addresses the advances in synthesizing novel polymers within transgenic plants, especially those in the polyhydroxyalkanoate class. Although there is a stigma associated with transgenic plants, especially food crops, plant-based biodegradable polymers, produced as value-added co-products, or, from marginal land (non-food), crops such as switchgrass (Panicum virgatum L.), have the potential to become viable alternatives to petroleum-based plastics and an environmentally benign and carbon-neutral source of polymers.",
"title": ""
},
{
"docid": "5a4e27a64c9e73b9259768c7375483b1",
"text": "I. I. Fishchuk,1 A. K. Kadashchuk,2,3 J. Genoe,2 Mujeeb Ullah,4 H. Sitter,4 Th. B. Singh,5 N. S. Sariciftci,6 and H. Bässler7 1Institute for Nuclear Research, National Academy of Sciences of Ukraine, Prospect Nauky 47, 03680 Kyiv, Ukraine 2IMEC, Kapeldreef 75, Heverlee, B-3001 Leuven, Belgium 3Institute of Physics, National Academy of Sciences of Ukraine, Prospect Nauky 46, 03028 Kyiv, Ukraine 4Institute of Semiconductor & Solid State Physics, Johannes Kepler University of Linz, A-4040 Linz, Austria 5Molecular and Health Technologies, CSIRO, Bayview Avenue Clayton, Victoria 3168, Australia 6Linz Institute for Organic Solar Cells (LIOS), Johannes Kepler University of Linz, A-4040 Linz, Austria 7Chemistry Department, Philipps-Universität Marburg, Hans-Meerwein-Strasse, D-35032 Marburg, Germany Received 18 May 2009; revised manuscript received 31 August 2009; published 8 January 2010",
"title": ""
},
{
"docid": "e89db5214e5bea32b37539471fccb226",
"text": "In this paper, we survey the basic paradigms and notions of secure multiparty computation and discuss their relevance to the field of privacy-preserving data mining. In addition to reviewing definitions and constructions for secure multiparty computation, we discuss the issue of efficiency and demonstrate the difficulties involved in constructing highly efficient protocols. We also present common errors that are prevalent in the literature when secure multiparty computation techniques are applied to privacy-preserving data mining. Finally, we discuss the relationship between secure multiparty computation and privacy-preserving data mining, and show which problems it solves and which problems it does not.",
"title": ""
},
{
"docid": "a1c50803e3fb6f1dfa4106cb8263c42f",
"text": "DeepMind’s recent spectacular success in using deep convolutional neural nets and machine learning to build superhuman level agents — e.g. for Atari games via deep Q-learning and for the game of Go via other deep Reinforcement Learning methods — raises many questions, including to what extent these methods will succeed in other domains. In this paper we consider DQL for the game of Hex: after supervised initializing, we use self-play to train NeuroHex, an 11-layer CNN that plays Hex on the 13×13 board. Hex is the classic two-player alternate-turn stone placement game played on a rhombus of hexagonal cells in which the winner is whomever connects their two opposing sides. Despite the large action and state space, our system trains a Q-network capable of strong play with no search. After two weeks of Q-learning, NeuroHex achieves respective win-rates of 20.4% as first player and 2.1% as second player against a 1-second/move version of MoHex, the current ICGA Olympiad Hex champion. Our data suggests further improvement might be possible with more training time. 1 Motivation, Introduction, Background 1.1 Motivation DeepMind’s recent spectacular success in using deep convolutional neural nets and machine learning to build superhuman level agents — e.g. for Atari games via deep Q-learning and for the game of Go via other deep Reinforcement Learning methods — raises many questions, including to what extent these methods will succeed in other domains. Motivated by this success, we explore whether DQL can work to build a strong network for the game of Hex. 1.2 The Game of Hex Hex is the classic two-player connection game played on an n×n rhombus of hexagonal cells. Each player is assigned two opposite sides of the board and a set of colored stones; in alternating turns, each player puts one of their stones on an empty cell; the winner is whomever joins their two sides with a contiguous chain of their stones. Draws are not possible (at most one player can have a winning chain, and if the game ends with the board full, then exactly one player will have such a chain), and for each n×n board there exists a winning strategy for the 1st player [7]. Solving — finding the win/loss value — arbitrary Hex positions is P-Space complete [11]. Despite its simple rules, Hex has deep tactics and strategy. Hex has served as a test bed for algorithms in artificial intelligence since Shannon and E.F. Moore built a resistance network to play the game [12]. Computers have solved all 9×9 1-move openings and two 10×10 1-move openings, and 11×11 and 13×13 Hex are games of the International Computer Games Association’s annual Computer Olympiad [8]. In this paper we consider Hex on the 13×13 board. (a) A Hex game in progress. Black wants to join top and bottom, White wants to join left and right. (b) A finished Hex game. Black wins. Fig. 1: The game of Hex. 1.3 Related Work The two works that inspire this paper are [10] and [13], both from Google DeepMind. [10] introduces Deep Q-learning with Experience Replay. Q-learning is a reinforcement learning (RL) algorithm that learns a mapping from states to action values by backing up action value estimates from subsequent states to improve those in previous states. In Deep Q-learning the mapping from states to action values is learned by a Deep Neural network. Experience replay extends standard Q-learning by storing agent experiences in a memory buffer and sampling from these experiences every time-step to perform updates. 
This algorithm achieved superhuman performance on several classic Atari games using only raw visual input. [13] introduces AlphaGo, a Go playing program that combines Monte Carlo tree search with convolutional neural networks: one guides the search (policy network), another evaluates position quality (value network). Deep reinforcement learning (RL) is used to train both the value and policy networks, which each take a representation of the gamestate as input. The policy network outputs a probability distribution over available moves indicating the likelihood of choosing each move. The value network outputs a single scalar value estimating",
"title": ""
},
{
"docid": "584b0548efb961363da0a122b894e72d",
"text": "The objective of this study (clinicaltrials.gov NCT01858376) was to determine the effect of oral supplementation of a standardized extract of Phyllanthus emblica (CAPROS(®)) on cardiovascular disease (CVD) risk factors in overweight adult human subjects from the US population. Overweight/Class-1 obese (body-mass index: 25-35) adult subjects received 500 mg of CAPROS supplement b.i.d for 12 weeks. The study design included two baseline visits followed by 12 weeks of supplementation and then 2 weeks of washout. At all visits, peripheral venous blood was collected in sodium citrate tubes. Lipid profile measurements demonstrated a significant decrease in calculated low-density lipoprotein cholesterol and total cholesterol/high-density lipoprotein following 12 weeks of CAPROS supplementation when compared to averaged baseline visits. Circulatory high-sensitivity C reactive protein (hs-CRP) levels were significantly decreased after 12 weeks of supplementation. In addition, both ADP- and collagen-induced platelet aggregation was significantly downregulated following 12 weeks of supplementation. Overall, the study suggests that oral CAPROS supplementation may provide beneficial effects in overweight/Class-1 obese adults by lowering multiple global CVD risk factors.",
"title": ""
}
] |
scidocsrr
|
769e6a5e53951e7bf713ca104e6b440f
|
Experimental demonstration of a 10BASE-T Ethernet visible light communications system using white phosphor light-emitting diodes
|
[
{
"docid": "c021904cff1cbef8ab62cc3fe0502a7e",
"text": "Light-emitting diodes (LEDs), which will be increasingly used in lighting technology, will also allow for distribution of broadband optical wireless signals. Visible-light communication (VLC) using white LEDs offers several advantages over the RF-based wireless systems, i.e., license-free spectrum, low power consumption, and higher privacy. Mostly, optical wireless can provide much higher data rates. In this paper, we demonstrate a VLC system based on a white LED for indoor broadband wireless access. After investigating the nonlinear effects of the LED and the power amplifier, a data rate of 1 Gb/s has been achieved at the standard illuminance level, by using an optimized discrete multitone modulation technique and adaptive bit- and power-loading algorithms. The bit-error ratio of the received data was $1.5\\cdot 10^{-3}$, which is within the limit of common forward error correction (FEC) coding. These results twice the highest capacity that had been previously obtained.",
"title": ""
}
] |
[
{
"docid": "011ff2d5995a46a686d9edb80f33b8ca",
"text": "In the era of Social Computing, the role of customer reviews and ratings can be instrumental in predicting the success and sustainability of businesses as customers and even competitors use them to judge the quality of a business. Yelp is one of the most popular websites for users to write such reviews. This rating can be subjective and biased toward user's personality. Business preferences of a user can be decrypted based on his/ her past reviews. In this paper, we deal with (i) uncovering latent topics in Yelp data based on positive and negative reviews using topic modeling to learn which topics are the most frequent among customer reviews, (ii) sentiment analysis of users' reviews to learn how these topics associate to a positive or negative rating which will help businesses improve their offers and services, and (iii) predicting unbiased ratings from user-generated review text alone, using Linear Regression model. We also perform data analysis to get some deeper insights into customer reviews.",
"title": ""
},
{
"docid": "ba7cb71cf07765f915d548f2a01e7b98",
"text": "Existing data storage systems offer a wide range of functionalities to accommodate an equally diverse range of applications. However, new classes of applications have emerged, e.g., blockchain and collaborative analytics, featuring data versioning, fork semantics, tamper-evidence or any combination thereof. They present new opportunities for storage systems to efficiently support such applications by embedding the above requirements into the storage. In this paper, we present ForkBase, a storage engine designed for blockchain and forkable applications. By integrating core application properties into the storage, ForkBase not only delivers high performance but also reduces development effort. The storage manages multiversion data and supports two variants of fork semantics which enable different fork worklflows. ForkBase is fast and space efficient, due to a novel index class that supports efficient queries as well as effective detection of duplicate content across data objects, branches and versions. We demonstrate ForkBase’s performance using three applications: a blockchain platform, a wiki engine and a collaborative analytics application. We conduct extensive experimental evaluation against respective state-of-the-art solutions. The results show that ForkBase achieves superior performance while significantly lowering the development effort. PVLDB Reference Format: Sheng Wang, Tien Tuan Anh Dinh, Qian Lin, Zhongle Xie, Meihui Zhang, Qingchao Cai, Gang Chen, Beng Chin Ooi, Pingcheng Ruan. ForkBase: An Efficient Storage Engine for Blockchain and Forkable Applications. PVLDB, 11(10): 1137-1150, 2018. DOI: https://doi.org/10.14778/3231751.3231762",
"title": ""
},
{
"docid": "97da7e7b07775f58c86d26a2b714ba9f",
"text": "Nowadays, visual object recognition is one of the key applications for computer vision and deep learning techniques. With the recent development in mobile computing technology, many deep learning framework software support Personal Digital Assistant systems, i.e., smart phones or tablets, allowing developers to conceive innovative applications. In this work, we intend to employ such ICT strategies with the aim of supporting the tourism in an art city: for these reasons, we propose to provide tourists with a mobile application in order to better explore artistic heritage within an urban environment by using just their smartphone's camera. The software solution is based on Google TensorFlow, an innovative deep learning framework mainly designed for pattern recognition tasks. The paper presents our design choices and an early performance evaluation.",
"title": ""
},
{
"docid": "69561d0f42cf4aae73d4c97c1871739e",
"text": "Recent methods based on 3D skeleton data have achieved outstanding performance due to its conciseness, robustness, and view-independent representation. With the development of deep learning, Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM)-based learning methods have achieved promising performance for action recognition. However, for CNN-based methods, it is inevitable to loss temporal information when a sequence is encoded into images. In order to capture as much spatial-temporal information as possible, LSTM and CNN are adopted to conduct effective recognition with later score fusion. In addition, experimental results show that the score fusion between CNN and LSTM performs better than that between LSTM and LSTM for the same feature. Our method achieved state-of-the-art results on NTU RGB+D datasets for 3D human action analysis. The proposed method achieved 87.40% in terms of accuracy and ranked 1st place in Large Scale 3D Human Activity Analysis Challenge in Depth Videos.",
"title": ""
},
{
"docid": "11e9bdfbdcc7718878c4a87c894964eb",
"text": "Detecting topics from Twitter streams has become an important task as it is used in various fields including natural disaster warning, users opinion assessment, and traffic prediction. In this article, we outline different types of topic detection techniques and evaluate their performance. We categorize the topic detection techniques into five categories which are clustering, frequent pattern mining, Exemplar-based, matrix factorization, and probabilistic models. For clustering techniques, we discuss and evaluate nine different techniques which are sequential k-means, spherical k-means, Kernel k-means, scalable Kernel k-means, incremental batch k-means, DBSCAN, spectral clustering, document pivot clustering, and Bngram. Moreover, for matrix factorization techniques, we analyze five different techniques which are sequential Latent Semantic Indexing (LSI), stochastic LSI, Alternating Least Squares (ALS), Rank-one Downdate (R1D), and Column Subset Selection (CSS). Additionally, we evaluate several other techniques in the frequent pattern mining, Exemplar-based, and probabilistic model categories. Results on three Twitter datasets show that Soft Frequent Pattern Mining (SFM) and Bngram achieve the best term precision, while CSS achieves the best term recall and topic recall in most of the cases. Moreover, Exemplar-based topic detection obtains a good balance between the term recall and term precision, while achieving a good topic recall and running time.",
"title": ""
},
{
"docid": "808a6c959eb79deb6ac5278805f5b855",
"text": "Recently there has been a lot of work on pruning filters from deep convolutional neural networks (CNNs) with the intention of reducing computations. The key idea is to rank the filters based on a certain criterion (say, l1-norm, average percentage of zeros, etc) and retain only the top ranked filters. Once the low scoring filters are pruned away the remainder of the network is fine tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned. Specifically, we show counter-intuitive results wherein by randomly pruning 25-50% filters from deep CNNs we are able to obtain the same performance as obtained by using state of the art pruning methods. We empirically validate our claims by doing an exhaustive evaluation with VGG-16 and ResNet-50. Further, we also evaluate a real world scenario where a CNN trained on all 1000 ImageNet classes needs to be tested on only a small set of classes at test time (say, only animals). We create a new benchmark dataset from ImageNet to evaluate such class specific pruning and show that even here a random pruning strategy gives close to state of the art performance. Lastly, unlike existing approaches which mainly focus on the task of image classification, in this work we also report results on object detection. We show that using a simple random pruning strategy we can achieve significant speed up in object detection (74% improvement in fps) while retaining the same accuracy as that of the original Faster RCNN model.",
"title": ""
},
{
"docid": "bb1cea4fd4922b15b6aec98d43280b8c",
"text": "This report is on the design, control strategy, implementation, and performance evaluation of a novel leg–wheel transformable robot called TurboQuad, which can perform fast gait/mode coordination and transitions in wheeled mode, in legged trotting, and in legged walking while in motion. This functionality was achieved by including two novel setups in the robot that were not included in its predecessor, Quattroped. First, a new leg–wheel mechanism was used, in which the leg/wheel operation and its in situ transition can be driven by the same set of motors, so the actuation system and power can be utilized efficiently. Second, a bio-inspired control strategy was applied based on the central pattern generator and coupled oscillator networks, in which the gait/mode generation, coordination, and transitions can be integrally controlled. The robot was empirically built and its performances in the described three gaits/modes as well as the transitions among them were experimentally evaluated and will be discussed in this paper.",
"title": ""
},
{
"docid": "713f236908dbc26b7a01b176a90e679f",
"text": "Recent years have seen rapid development and deployment of Internet-of-Things (IoT) applications in a diversity of application domains. This has resulted in creation of new applications (e.g., vehicle networking, smart grid, and wearables) as well as advancement, consolidation, and transformation of various traditional domains (e.g., medical and automotive). One upshot of this scale and diversity of applications is the emergence of new and critical threats to security and privacy: it is getting increasingly easier for an adversary to break into an application, make it unusable, or steal sensitive information and data. This paper provides a summary of IoT security attacks and develops a taxonomy and classification based on the application domain and underlying system architecture. We also discuss some key characteristics of IoT that make it difficult to develop robust security architectures for IoT applications.",
"title": ""
},
{
"docid": "7c570bf4961adaa17e8cdd6d6b7e0f68",
"text": "This paper presents a 50-MHz 5-V-input 3-W-output three-level buck converter. A real-time flying capacitor (<inline-formula> <tex-math notation=\"LaTeX\">$C_{F}$ </tex-math></inline-formula>) calibration is proposed to ensure a constant voltage of <inline-formula> <tex-math notation=\"LaTeX\">$V_{g}$ </tex-math></inline-formula>/2 across <inline-formula> <tex-math notation=\"LaTeX\">$C_{F}$ </tex-math></inline-formula>, which is highly dependent on various practical conditions, such as parasitic capacitance, time mismatches, or any loading circuits from <inline-formula> <tex-math notation=\"LaTeX\">$C_{F}$ </tex-math></inline-formula>. The calibration is essential to ensure the reliability and minimize the inductor current and output voltage ripple, thus maintaining the advantages of the three-level operation and further extending the system bandwidth without encountering sub-harmonic oscillation. The converter is fabricated in a UMC 65-nm process using standard 2.5-V I/O devices, and is able to handle a 5-V input voltage and provide a 0.6–4.2-V-wide output range. In the measurement, the voltage across <inline-formula> <tex-math notation=\"LaTeX\">$C_{F}$ </tex-math></inline-formula> is always calibrated to <inline-formula> <tex-math notation=\"LaTeX\">$V_{g}$ </tex-math></inline-formula>/2 under various conditions to release the voltage stress on the high- and low-side power transistors and <inline-formula> <tex-math notation=\"LaTeX\">$C_{F}$ </tex-math></inline-formula>, and to ensure reliability with up to 69% output voltage ripple reduction. A 90% peak efficiency and a 23–29-ns/V reference-tracking response are also observed.",
"title": ""
},
{
"docid": "06104f7f43133230eb79b86c195e4206",
"text": "This paper describes the WiLI-2018 benchmark dataset for monolingual written natural language identification. WiLI-2018 is a publicly available,1 free of charge dataset of short text extracts from Wikipedia. It contains 1000 paragraphs of 235 languages, totaling in 235 000 paragraphs. WiLI is a classification dataset: Given an unknown paragraph written in one dominant language, it has to be decided which language it is.",
"title": ""
},
{
"docid": "bb49674d0a1f36e318d27525b693e51d",
"text": "prevent attackers from gaining control of the system using well established techniques such as; perimeter-based fire walls, redundancy and replications, and encryption. However, given sufficient time and resources, all these methods can be defeated. Moving Target Defense (MTD), is a defensive strategy that aims to reduce the need to continuously fight against attacks by disrupting attackers gain-loss balance. We present Mayflies, a bio-inspired generic MTD framework for distributed systems on virtualized cloud platforms. The framework enables systems designed to defend against attacks for their entire runtime to systems that avoid attacks in time intervals. We discuss the design, algorithms and the implementation of the framework prototype. We illustrate the prototype with a quorum-based Byzantime Fault Tolerant system and report the preliminary results.",
"title": ""
},
{
"docid": "61282d5ef37e5821a5a856f0bbe26cc2",
"text": "Second language teachers are great consumers of grammar. They are mainly interested in pedagogical grammar, but they are generally unaware of the work of theoretical linguists, such as Chomsky and Halliday. Whereas Chomsky himself has never suggested in any way that his work might be of benefit to L2 teaching, Halliday and his many disciples, have. It seems odd that language teachers should choose to ignore the great gurus of grammar. Even if their work is deemed too technical and theoretical for classroom application, it may still shed light on pedagogical grammar and provide a rationale for the way one goes about teaching grammar. In order to make informed decisions about what grammar to teach and how best to teach it, one should take stock of the various schools of grammar that seem to speak in very different voices. In the article, the writer outlines the kinds of grammar that come out of five of these schools, and assesses their usefulness to the L2 teacher.",
"title": ""
},
{
"docid": "ad00ba810df4c7295b89640c64b50e51",
"text": "Prospective memory (PM) research typically examines the ability to remember to execute delayed intentions but often ignores the ability to forget finished intentions. We had participants perform (or not perform; control group) a PM task and then instructed them that the PM task was finished. We later (re)presented the PM cue. Approximately 25% of participants made a commission error, the erroneous repetition of a PM response following intention completion. Comparisons between the PM groups and control group suggested that commission errors occurred in the absence of preparatory monitoring. Response time analyses additionally suggested that some participants experienced fatigue across the ongoing task block, and those who did were more susceptible to making a commission error. These results supported the hypothesis that commission errors can arise from the spontaneous retrieval of finished intentions and possibly the failure to exert executive control to oppose the PM response.",
"title": ""
},
{
"docid": "d6ebe4bacd4a9cea920cfb18aebd5f28",
"text": "Page Abstract ............................................................................................................2 Introduction ......................................................................................................2 Key MOSFET Electrical Parameters in Class D Audio Amplifiers ....................2 Drain Source Breakdown Voltage BVDSS................................................2 Static Drain-to-Source On Resistance RDS(on).........................................4 Gate Charge Qg......................................................................................5 Body Diode Reverse Recovery Charge, Qrr ...........................................8 Internal Gate Resistance RG(int)........................................................11 MOSFET Package ..................................................................................11 Maximum Junction Temperature .............................................................12 International Rectifier Digital Audio MOSFET ...................................................13 Conclusions.........................................................................................14 References........................................................................................................14",
"title": ""
},
{
"docid": "4d93be453dcb767faca082d966af5f3a",
"text": "This paper presents a unified variational formulation for joint object segmentation and stereo matching, which takes both accuracy and efficiency into account. In our approach, depth-map consists of compact objects, each object is represented through three different aspects: the perimeter in image space; the slanted object depth plane; and the planar bias, which is to add an additional level of detail on top of each object plane in order to model depth variations within an object. Compared with traditional high quality solving methods in low level, we use a convex formulation of the multilabel Potts Model with PatchMatch stereo techniques to generate depth-map at each image in object level and show that accurate multiple view reconstruction can be achieved with our formulation by means of induced homography without discretization or staircasing artifacts. Our model is formulated as an energy minimization that is optimized via a fast primal-dual algorithm, which can handle several hundred object depth segments efficiently. Performance evaluations in the Middlebury benchmark data sets show that our method outperforms the traditional integer-valued disparity strategy as well as the original PatchMatch algorithm and its variants in subpixel accurate disparity estimation. The proposed algorithm is also evaluated and shown to produce consistently good results for various real-world data sets (KITTI benchmark data sets and multiview benchmark data sets).",
"title": ""
},
{
"docid": "0ebcd0c087454a9812ee54a0cd71a1a9",
"text": "In this paper, we present the Smart City Architecture developed in the context of the ARTEMIS JU SP3 SOFIA project. It is an Event Driven Architecture that allows the management and cooperation of heterogeneous sensors for monitoring public spaces. The main components of the architecture are implemented in a testbed on a subway scenario with the objective to demonstrate that our proposed solution, can enhance the detection of anomalous events and simplify both the operators tasks and the communications to passengers in case of emergency.",
"title": ""
},
{
"docid": "8ad0cd1f03db395a9918bbdfdf9a3268",
"text": "Commercial anti-virus software are unable to provide protection against newly launched (a.k.a \"zero-day\") malware. In this paper, we propose a novel malware detection technique which is based on the analysis of byte-level file content. The novelty of our approach, compared with existing content based mining schemes, is that it does not memorize specific byte-sequences or strings appearing in the actual file content. Our technique is non-signature based and therefore has the potential to detect previously unknown and zero-day malware. We compute a wide range of statistical and information-theoretic features in a block-wise manner to quantify the byte-level file content. We leverage standard data mining algorithms to classify the file content of every block as normal or potentially malicious. Finally, we correlate the block-wise classification results of a given file to categorize it as benign or malware. Since the proposed scheme operates at the byte-level file content; therefore, it does not require any a priori information about the filetype. We have tested our proposed technique using a benign dataset comprising of six different filetypes --- DOC, EXE, JPG, MP3, PDF and ZIP and a malware dataset comprising of six different malware types --- backdoor, trojan, virus, worm, constructor and miscellaneous. We also perform a comparison with existing data mining based malware detection techniques. The results of our experiments show that the proposed nonsignature based technique surpasses the existing techniques and achieves more than 90% detection accuracy.",
"title": ""
},
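The abstract above describes a pipeline: quantify byte-level file content block by block with statistical and information-theoretic features, classify each block, then correlate the block decisions into a file-level label. Below is a minimal sketch of that idea; the block size, the feature set, and the naive entropy cut-off standing in for the trained classifier are illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch of block-wise byte-level feature extraction and correlation.
# Block size, features, and the toy entropy threshold are assumptions made
# for illustration; they are not the configuration used in the paper.
import math
from collections import Counter

BLOCK_SIZE = 4096  # assumed block size

def block_features(block: bytes):
    """Statistical / information-theoretic features of one byte block."""
    counts = Counter(block)
    n = len(block)
    probs = [c / n for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)   # Shannon entropy
    mean = sum(block) / n                             # mean byte value
    distinct = len(counts) / 256.0                    # byte diversity
    return entropy, mean, distinct

def classify_block(features) -> bool:
    """Placeholder for the trained data-mining classifier: a naive entropy
    cut-off stands in for it here (True = potentially malicious)."""
    entropy, _, _ = features
    return entropy > 7.2

def classify_file(path: str, vote_ratio: float = 0.5) -> str:
    """Correlate per-block decisions into a file-level label."""
    with open(path, "rb") as fh:
        data = fh.read()
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    votes = [classify_block(block_features(b)) for b in blocks if b]
    suspicious = sum(votes) / max(len(votes), 1)
    return "malware" if suspicious >= vote_ratio else "benign"
```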
{
"docid": "80f31bb04f4714d7a14499d5d97be8da",
"text": "We investigate the importance of text analysis for stock price prediction. In particular, we introduce a system that forecasts companies’ stock price changes (UP, DOWN, STAY) in response to financial events reported in 8-K documents. Our results indicate that using text boosts prediction accuracy over 10% (relative) over a strong baseline that incorporates many financially-rooted features. This impact is most important in the short term (i.e., the next day after the financial event) but persists for up to five days.",
"title": ""
},
{
"docid": "9d7a67f2cd12a6fd033ad102fb9c526e",
"text": "We begin by pretraining the source task model, fS , using the task loss on the labeled source data. Next, we perform pixel-level adaptation using our image space GAN losses together with semantic consistency and cycle consistency losses. This yeilds learned parameters for the image transformations, GS!T and GT!S , image discriminators, DS and DT , as well as an initial setting of the task model, fT , which is trained using pixel transformed source images and the corresponding source pixel labels. Finally, we perform feature space adpatation in order to update the target semantic model, fT , to have features which are aligned between the source images mapped into target style and the real target images. During this phase, we learn the feature discriminator, Dfeat and use this to guide the representation update to fT . In general, our method could also perform phases 2 and 3 simultaneously, but this would require more GPU memory then available at the time of these experiments.",
"title": ""
},
{
"docid": "90084e7b31e89f5eb169a0824dde993b",
"text": "In this work, we present a novel way of using neural network for graph-based dependency parsing, which fits the neural network into a simple probabilistic model and can be furthermore generalized to high-order parsing. Instead of the sparse features used in traditional methods, we utilize distributed dense feature representations for neural network, which give better feature representations. The proposed parsers are evaluated on English and Chinese Penn Treebanks. Compared to existing work, our parsers give competitive performance with much more efficient inference.",
"title": ""
}
] |
scidocsrr
|
b6b88b7123fa795c2b85667f7c43274c
|
Augur: Mining Human Behaviors from Fiction to Power Interactive Systems
|
[
{
"docid": "4bce532be92d68a39dd07b6f3e799721",
"text": "Most so-called “errors” in probabilistic reasoning are in fact not violations of probability theory. Examples of such “errors” include overconfi dence bias, conjunction fallacy, and base-rate neglect. Researchers have relied on a very narrow normative view, and have ignored conceptual distinctions—for example, single case versus relative frequency—fundamental to probability theory. By recognizing and using these distinctions, however, we can make apparently stable “errors” disappear, reappear, or even invert. I suggest what a reformed understanding of judgments under uncertainty might look like.",
"title": ""
},
{
"docid": "3f06fc0b50a1de5efd7682b4ae9f5a46",
"text": "We present ShadowDraw, a system for guiding the freeform drawing of objects. As the user draws, ShadowDraw dynamically updates a shadow image underlying the user's strokes. The shadows are suggestive of object contours that guide the user as they continue drawing. This paradigm is similar to tracing, with two major differences. First, we do not provide a single image from which the user can trace; rather ShadowDraw automatically blends relevant images from a large database to construct the shadows. Second, the system dynamically adapts to the user's drawings in real-time and produces suggestions accordingly. ShadowDraw works by efficiently matching local edge patches between the query, constructed from the current drawing, and a database of images. A hashing technique enforces both local and global similarity and provides sufficient speed for interactive feedback. Shadows are created by aggregating the edge maps from the best database matches, spatially weighted by their match scores. We test our approach with human subjects and show comparisons between the drawings that were produced with and without the system. The results show that our system produces more realistically proportioned line drawings.",
"title": ""
},
{
"docid": "7ac64c2f890717a9f2b7b440da6a68ce",
"text": "We introduce the Rel-grams language model, which is analogous to an n-grams model, but is computed over relations rather than over words. The model encodes the conditional probability of observing a relational tuple R, given that R′ was observed in a window of prior relational tuples. We build a database of Rel-grams co-occurence statistics from ReVerb extractions over 1.8M news wire documents and show that a graphical model based on these statistics is useful for automatically discovering event templates. We make this database freely available and hope it will prove a useful resource for a wide variety of NLP tasks.",
"title": ""
}
] |
[
{
"docid": "242030243133cd57d6cc62be154fd6ec",
"text": "| The inverse kinematics of serial manipulators is a central problem in the automatic control of robot manipula-tors. The main interest has been in inverse kinematics of a six revolute (6R) jointed manipulator with arbitrary geometry. It has been recently shown that the joints of a general 6R manipulator can orient themselves in 16 diierent con-gurations (at most), for a given pose of the end{eeector. However, there are no good practical solutions available, which give a level of performance expected of industrial ma-nipulators. In this paper, we present an algorithm and implementation for eecient inverse kinematics for a general 6R manipulator. When stated mathematically, the problem reduces to solving a system of multivariate equations. We make use of the algebraic properties of the system and the symbolic formulation used for reducing the problem to solving a univariate polynomial. However, the polynomial is expressed as a matrix determinant and its roots are computed by reducing to an eigenvalue problem. The other roots of the multivariate system are obtained by computing eigenvectors and substitution. The algorithm involves symbolic preprocessing, matrix computations and a variety of other numerical techniques. The average running time of the algorithm, for most cases, is 11 milliseconds on an IBM RS/6000 workstation. This approach is applicable to inverse kinematics of all serial manipulators.",
"title": ""
},
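The abstract above reduces inverse kinematics to the roots of a univariate polynomial whose roots are computed "by reducing to an eigenvalue problem". The snippet below only illustrates that last numerical step (roots of a monic polynomial from the eigenvalues of its companion matrix); it is not the paper's symbolic IK formulation, and the example polynomial is invented for demonstration.

```python
# Minimal sketch: real roots of a univariate polynomial via the eigenvalues
# of its companion matrix, the numerical idea the abstract refers to.
import numpy as np

def real_roots_via_companion(coeffs, tol=1e-9):
    """coeffs are polynomial coefficients, highest degree first."""
    coeffs = np.asarray(coeffs, dtype=float)
    coeffs = coeffs / coeffs[0]          # make the polynomial monic
    n = len(coeffs) - 1
    companion = np.zeros((n, n))
    companion[1:, :-1] = np.eye(n - 1)   # ones on the sub-diagonal
    companion[:, -1] = -coeffs[:0:-1]    # last column from the coefficients
    eigvals = np.linalg.eigvals(companion)
    return [ev.real for ev in eigvals if abs(ev.imag) < tol]

# Example: x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3.
print(sorted(real_roots_via_companion([1, -6, 11, -6])))
```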
{
"docid": "88077fe7ce2ad4a3c3052a988f9f96c1",
"text": "When collecting patient-level resource use data for statistical analysis, for some patients and in some categories of resource use, the required count will not be observed. Although this problem must arise in most reported economic evaluations containing patient-level data, it is rare for authors to detail how the problem was overcome. Statistical packages may default to handling missing data through a so-called 'complete case analysis', while some recent cost-analyses have appeared to favour an 'available case' approach. Both of these methods are problematic: complete case analysis is inefficient and is likely to be biased; available case analysis, by employing different numbers of observations for each resource use item, generates severe problems for standard statistical inference. Instead we explore imputation methods for generating 'replacement' values for missing data that will permit complete case analysis using the whole data set and we illustrate these methods using two data sets that had incomplete resource use information.",
"title": ""
},
{
"docid": "b22b0a553971d9d81a8196f40f97255c",
"text": "Latent fingerprints are routinely found at crime scenes due to the inadvertent contact of the criminals' finger tips with various objects. As such, they have been used as crucial evidence for identifying and convicting criminals by law enforcement agencies. However, compared to plain and rolled prints, latent fingerprints usually have poor quality of ridge impressions with small fingerprint area, and contain large overlap between the foreground area (friction ridge pattern) and structured or random noise in the background. Accordingly, latent fingerprint segmentation is a difficult problem. In this paper, we propose a latent fingerprint segmentation algorithm whose goal is to separate the fingerprint region (region of interest) from background. Our algorithm utilizes both ridge orientation and frequency features. The orientation tensor is used to obtain the symmetric patterns of fingerprint ridge orientation, and local Fourier analysis method is used to estimate the local ridge frequency of the latent fingerprint. Candidate fingerprint (foreground) regions are obtained for each feature type; an intersection of regions from orientation and frequency features localizes the true latent fingerprint regions. To verify the viability of the proposed segmentation algorithm, we evaluated the segmentation results in two aspects: a comparison with the ground truth foreground and matching performance based on segmented region.",
"title": ""
},
{
"docid": "4d2666a8aa228041895a631a83236780",
"text": "Dermoscopy is a method of increasing importance in the diagnoses of cutaneous diseases. On the scalp, dermoscopic aspects have been described in psoriasis, lichen planus, seborrheic dermatitis and discoid lupus. We describe the \"comma\" and \"corkscrew hair\" dermoscopic aspects found in a child of skin type 4, with tinea capitis.",
"title": ""
},
{
"docid": "337afded77b22d4e1460569c561cad1a",
"text": "The mammalian hippocampus is critical for spatial information processing and episodic memory. Its primary output cells, CA1 pyramidal cells (CA1 PCs), vary in genetics, morphology, connectivity, and electrophysiological properties. It is therefore possible that distinct CA1 PC subpopulations encode different features of the environment and differentially contribute to learning. To test this hypothesis, we optically monitored activity in deep and superficial CA1 PCs segregated along the radial axis of the mouse hippocampus and assessed the relationship between sublayer dynamics and learning. Superficial place maps were more stable than deep during head-fixed exploration. Deep maps, however, were preferentially stabilized during goal-oriented learning, and representation of the reward zone by deep cells predicted task performance. These findings demonstrate that superficial CA1 PCs provide a more stable map of an environment, while their counterparts in the deep sublayer provide a more flexible representation that is shaped by learning about salient features in the environment. VIDEO ABSTRACT.",
"title": ""
},
{
"docid": "4b3425ce40e46b7a595d389d61daca06",
"text": "Genetic or acquired destabilization of the dermal extracellular matrix evokes injury- and inflammation-driven progressive soft tissue fibrosis. Dystrophic epidermolysis bullosa (DEB), a heritable human skin fragility disorder, is a paradigmatic disease to investigate these processes. Studies of DEB have generated abundant new information on cellular and molecular mechanisms at play in skin fibrosis which are not only limited to intractable diseases, but also applicable to some of the most common acquired conditions. Here, we discuss recent advances in understanding the biological and mechanical mechanisms driving the dermal fibrosis in DEB. Much of this progress is owed to the implementation of cell and tissue omics studies, which we pay special attention to. Based on the novel findings and increased understanding of the disease mechanisms in DEB, translational aspects and future therapeutic perspectives are emerging.",
"title": ""
},
{
"docid": "7e68fe5b6a164359d2389f30686ec049",
"text": "Tracking the articulated 3D motion of the hand has important applications, for example, in human-computer interaction and teleoperation. We present a novel method that can capture a broad range of articulated hand motions at interactive rates. Our hybrid approach combines, in a voting scheme, a discriminative, part-based pose retrieval method with a generative pose estimation method based on local optimization. Color information from a multi-view RGB camera setup along with a person-specific hand model are used by the generative method to find the pose that best explains the observed images. In parallel, our discriminative pose estimation method uses fingertips detected on depth data to estimate a complete or partial pose of the hand by adopting a part-based pose retrieval strategy. This part-based strategy helps reduce the search space drastically in comparison to a global pose retrieval strategy. Quantitative results show that our method achieves state-of-the-art accuracy on challenging sequences and a near-real time performance of 10 fps on a desktop computer.",
"title": ""
},
{
"docid": "3bdd30d2c6e63f2e5540757f1db878b6",
"text": "The spreading of unsubstantiated rumors on online social networks (OSN) either unintentionally or intentionally (e.g., for political reasons or even trolling) can have serious consequences such as in the recent case of rumors about Ebola causing disruption to health-care workers. Here we show that indicators aimed at quantifying information consumption patterns might provide important insights about the virality of false claims. In particular, we address the driving forces behind the popularity of contents by analyzing a sample of 1.2M Facebook Italian users consuming different (and opposite) types of information (science and conspiracy news). We show that users’ engagement across different contents correlates with the number of friends having similar consumption patterns (homophily), indicating the area in the social network where certain types of contents are more likely to spread. Then, we test diffusion patterns on an external sample of 4,709 intentional satirical false claims showing that neither the presence of hubs (structural properties) nor the most active users (influencers) are prevalent in viral phenomena. Instead, we found out that in an environment where misinformation is pervasive, users’ aggregation around shared beliefs may make the usual exposure to conspiracy stories (polarization) a determinant for the virality of false information. ∗Corresponding author General Terms Misinformation, Virality, Attention Patterns",
"title": ""
},
{
"docid": "f9824ae0b73ebecf4b3a893392e77d67",
"text": "This paper proposes genetic algorithms (GAs) approach to feature discretization and the determination of connection weights for artificial neural networks (ANNs) to predict the stock price index. Previous research proposed many hybrid models of ANN and GA for the method of training the network, feature subset selection, and topology optimization. In most of these studies, however, GA is only used to improve the learning algorithm itself. In this study, GA is employed not only to improve the learning algorithm, but also to reduce the complexity in feature space. GA optimizes simultaneously the connection weights between layers and the thresholds for feature discretization. The genetically evolved weights mitigate the well-known limitations of the gradient descent algorithm. In addition, globally searched feature discretization reduces the dimensionality of the feature space and eliminates irrelevant factors. Experimental results show that GA approach to the feature discretization model outperforms the other two conventional models. q 2000 Published by Elsevier Science Ltd.",
"title": ""
},
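To make the idea in the abstract above concrete — a GA chromosome that encodes both the ANN connection weights and the feature-discretization thresholds, scored by prediction accuracy — here is a small sketch. The network size, GA operators, hyper-parameters and the synthetic data are illustrative assumptions, not the study's actual settings.

```python
# Hedged sketch: one chromosome holds the network weights plus one
# discretization threshold per raw feature; fitness = prediction accuracy.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 4, 3
n_weights = n_features * n_hidden + n_hidden      # input->hidden, hidden->output

def decode(chrom):
    w1 = chrom[:n_features * n_hidden].reshape(n_features, n_hidden)
    w2 = chrom[n_features * n_hidden:n_weights]
    thresholds = chrom[n_weights:]                # one cut-off per raw feature
    return w1, w2, thresholds

def fitness(chrom, X, y):
    w1, w2, thr = decode(chrom)
    Xd = (X > thr).astype(float)                  # GA-searched discretization
    hidden = np.tanh(Xd @ w1)
    pred = (hidden @ w2 > 0).astype(int)          # up/down style prediction
    return (pred == y).mean()

def evolve(X, y, pop_size=40, generations=60, sigma=0.3):
    dim = n_weights + n_features
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(c, X, y) for c in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]     # keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(dim) < 0.5, a, b)      # uniform crossover
            child = child + rng.normal(scale=sigma, size=dim)  # Gaussian mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=lambda c: fitness(c, X, y))
    return best, fitness(best, X, y)

# Toy usage with synthetic data standing in for the stock-index features.
X = rng.normal(size=(200, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
best, acc = evolve(X, y)
print("training accuracy of best chromosome:", round(acc, 3))
```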
{
"docid": "cfc4dc24378c5b7b83586db56fad2cac",
"text": "This study investigated the effects of proximal and distal constructs on adolescent's academic achievement through self-efficacy. Participants included 482 ninth- and tenth- grade Norwegian students who completed a questionnaire designed to assess school-goal orientations, organizational citizenship behavior, academic self-efficacy, and academic achievement. The results of a bootstrapping technique used to analyze relationships between the constructs indicated that school-goal orientations and organizational citizenship predicted academic self-efficacy. Furthermore, school-goal orientation, organizational citizenship, and academic self-efficacy explained 46% of the variance in academic achievement. Mediation analyses revealed that academic self-efficacy mediated the effects of perceived task goal structure, perceived ability structure, civic virtue, and sportsmanship on adolescents' academic achievements. The results are discussed in reference to current scholarship, including theories underlying our hypothesis. Practical implications and directions for future research are suggested.",
"title": ""
},
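The study above analyses mediation (self-efficacy between school-goal orientation / citizenship and achievement) with a bootstrapping technique. As a reminder of what that computation looks like, here is a generic bootstrap of a single indirect effect a·b; the variable names, the single-mediator setup, and the plain OLS estimation are illustrative assumptions, not the authors' exact model.

```python
# Generic sketch of bootstrapping an indirect (mediated) effect a*b.
# X = predictor, M = mediator (e.g., academic self-efficacy), Y = outcome.
import numpy as np

def ols_slope(X, y):
    """Slope(s) of y on X (with intercept), returned without the intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[1:]

def indirect_effect(x, m, y):
    a = ols_slope(x.reshape(-1, 1), m)[0]             # X -> M
    b = ols_slope(np.column_stack([x, m]), y)[1]      # M -> Y, controlling X
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(n, size=n)                 # resample with replacement
        estimates.append(indirect_effect(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return indirect_effect(x, m, y), (lo, hi)

# Toy data: the mediator carries part of X's effect on Y.
rng = np.random.default_rng(1)
x = rng.normal(size=300)
m = 0.6 * x + rng.normal(scale=0.8, size=300)
y = 0.5 * m + 0.2 * x + rng.normal(scale=0.8, size=300)
est, (lo, hi) = bootstrap_ci(x, m, y)
print(f"indirect effect ~ {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```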
{
"docid": "4d73c50244d16dab6d3773dbeebbae98",
"text": "We describe the latest version of Microsoft's conversational speech recognition system for the Switchboard and CallHome domains. The system adds a CNN-BLSTM acoustic model to the set of model architectures we combined previously, and includes character-based and dialog session aware LSTM language models in rescoring. For system combination we adopt a two-stage approach, whereby acoustic model posteriors are first combined at the senone/frame level, followed by a word-level voting via confusion networks. We also added another language model rescoring step following the confusion network combination. The resulting system yields a 5.1% word error rate on the NIST 2000 Switchboard test set, and 9.8% on the CallHome subset.",
"title": ""
},
{
"docid": "fd70fff204201c33ed3d901c48560980",
"text": "I n the early 1960s, the average American adult male weighed 168 pounds. Today, he weighs nearly 180 pounds. Over the same time period, the average female adult weight rose from 143 pounds to over 155 pounds (U.S. Department of Health and Human Services, 1977, 1996). In the early 1970s, 14 percent of the population was classified as medically obese. Today, obesity rates are two times higher (Centers for Disease Control, 2003). Weights have been rising in the United States throughout the twentieth century, but the rise in obesity since 1980 is fundamentally different from past changes. For most of the twentieth century, weights were below levels recommended for maximum longevity (Fogel, 1994), and the increase in weight represented an increase in health, not a decrease. Today, Americans are fatter than medical science recommends, and weights are still increasing. While many other countries have experienced significant increases in obesity, no other developed country is quite as heavy as the United States. What explains this growth in obesity? Why is obesity higher in the United States than in any other developed country? The available evidence suggests that calories expended have not changed significantly since 1980, while calories consumed have risen markedly. But these facts just push the puzzle back a step: why has there been an increase in calories consumed? We propose a theory based on the division of labor in food preparation. In the 1960s, the bulk of food preparation was done by families that cooked their own food and ate it at home. Since then, there has been a revolution in the mass preparation of food that is roughly comparable to the mass",
"title": ""
},
{
"docid": "935c404529b02cee2620e52f7a09b84d",
"text": "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: 1) learning the inverse kinematics in a highly-redundant robotic arm, 2) learning omnidirectional locomotion with motor primitives in a quadruped robot, 3) an arm learning to control a fishing rod with a flexible wire. We show that 1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; 2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; 3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.",
"title": ""
},
{
"docid": "aabae18789f9aab997ea7e1a92497de7",
"text": "We develop, in this paper, a representation of time and events that supports a range of reasoning tasks such as monitoring and detection of event patterns which may facilitate the explanation of root cause(s) of faults. We shall compare two approaches to event definition: the active database approach in which events are defined in terms of the conditions for their detection at an instant, and the knowledge representation approach in which events are defined in terms of the conditions for their occurrence over an interval. We shall show the shortcomings of the former definition and employ a three-valued temporal first order nonmonotonic logic, extended with events, in order to integrate both definitions.",
"title": ""
},
{
"docid": "c7f8e188b5768b5046d418f1953c4597",
"text": "Long Short-Term Memory networks (LSTMs) are a component of many state-of-the-art DNN-based speech recognition systems. Dropout is a popular method to improve generalization in DNN training. In this paper we describe extensive experiments in which we investigated the best way to combine dropout with LSTMs– specifically, projected LSTMs (LSTMP). We investigated various locations in the LSTM to place the dropout (and various combinations of locations), and a variety of dropout schedules. Our optimized recipe gives consistent improvements in WER across a range of datasets, including Switchboard, TED-LIUM and AMI.",
"title": ""
},
{
"docid": "a41821747271971221f6c8abc4797dd0",
"text": "This paper presents three power module cooling topologies that are being considered for use in electric traction drive vehicles such as a hybrid electric, plug-in hybrid electric, or electric vehicle. The impact on the fatigue life of solder joints for each cooling option is investigated along with the thermal performance. Considering solder joint reliability and thermal performance, topologies using indirect jet impingement look attractive.",
"title": ""
},
{
"docid": "e5eb79b313dad91de1144cd0098cde15",
"text": "Information Extraction aims to retrieve certain types of information from natural language text by processing them automatically. For example, an information extraction system might retrieve information about geopolitical indicators of countries from a set of web pages while ignoring other types of information. Ontology-based information extraction has recently emerged as a subfield of information extraction. Here, ontologies which provide formal and explicit specifications of conceptualizations play a crucial role in the information extraction process. Because of the use of ontologies, this field is related to knowledge representation and has the potential to assist the development of the Semantic Web. In this paper, we provide an introduction to ontology-based information extraction and review the details of different ontology-based information extraction systems developed so far. We attempt to identify a common architecture among these systems and classify them based on different factors, which leads to a better understanding on their operation. We also discuss the implementation details of these systems including the tools used by them and the metrics used to measure their performance. In addition, we attempt to identify the possible future directions for this field.",
"title": ""
},
{
"docid": "1fb40441cd6d439a0e024fd888de0b2d",
"text": "Purpose – The aim of this research is to define a data model of theses and dissertations that enables data exchange with CERIF-compatible CRIS systems and data exchange according to OAI-PMH protocol in different metadata formats (Dublin Core, EDT-MS, etc.). Design/methodology/approach – Various systems that contain metadata about theses and dissertations are analyzed. There are different standards and protocols that enable the interoperability of those systems: CERIF standard, AOI-PMH protocol, etc. A physical data model that enables interoperability with almost all of those systems is created using the PowerDesigner CASE tool. Findings – A set of metadata about theses and dissertations that contain all the metadata required by CERIF data model, Dublin Core format, EDT-MS format and all the metadata prescribed by the University of Novi Sad is defined. Defined metadata can be stored in the CERIF-compatible data model based on the MARC21 format. Practical implications – CRIS-UNS is a CRIS which has been developed at the University of Novi Sad since 2008. The system is based on the proposed data model, which enables the system’s interoperability with other CERIF-compatible CRIS systems. Also, the system based on the proposed model can become a member of NDLTD. Social implications – A system based on the proposed model increases the availability of theses and dissertations, and thus encourages the development of the knowledge-based society. Originality/value – A data model of theses and dissertations that enables interoperability with CERIF-compatible CRIS systems is proposed. A software system based on the proposed model could become a member of NDLTD and exchange metadata with institutional repositories. The proposed model increases the availability of theses and dissertations.",
"title": ""
},
{
"docid": "d99989724ed1b75a89a924a3aedb103f",
"text": "Two of the most popular and controversial cosmetic procedures for adolescents are liposuction and breast implants. In this review article, the procedures are discussed. In addition, the physiological and psychological reasons to delay these procedures, including concerns about body dysmorphic disorder and research findings regarding changes in teenagers' body image as they mature, are described. The lack of persuasive empirical research on the mental health benefits of plastic surgery for teenagers is highlighted. Finally, the long-term financial and health implications of implanted medical devices with a limited lifespan are presented. Adolescent medicine providers need to be involved in improving informed decision making for these procedures, aware of the absence of data on the health and mental health risks and benefits of these surgeries for teenagers, and understand the limitations on teenagers' abilities to evaluate risks.",
"title": ""
}
] |
scidocsrr
|
48261ccb2ec7c3702e637f1c0b460f47
|
Efficient approaches for escaping higher order saddle points in non-convex optimization
|
[
{
"docid": "181eafc11f3af016ca0926672bdb5a9d",
"text": "The conventional wisdom is that backprop nets with excess hi dden units generalize poorly. We show that nets with excess capacity ge neralize well when trained with backprop and early stopping. Experim nts suggest two reasons for this: 1) Overfitting can vary significant ly i different regions of the model. Excess capacity allows better fit to reg ions of high non-linearity, and backprop often avoids overfitting the re gions of low non-linearity. 2) Regardless of size, nets learn task subco mponents in similar sequence. Big nets pass through stages similar to th ose learned by smaller nets. Early stopping can stop training the large n et when it generalizes comparably to a smaller net. We also show that co njugate gradient can yield worse generalization because it overfits regions of low non-linearity when learning to fit regions of high non-linea rity.",
"title": ""
}
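Since the abstract above turns on early stopping as the mechanism that lets over-sized backprop nets generalize, a generic early-stopping loop is sketched below. The two callables (`train_one_epoch`, `validation_loss`) and the toy loss history are hypothetical placeholders for whatever model and optimizer the reader is using, not code from the paper.

```python
# Generic early-stopping loop: keep the best model seen on held-out data and
# stop once validation loss has not improved for `patience` epochs.
import copy

def train_with_early_stopping(model, train_one_epoch, validation_loss,
                              max_epochs=500, patience=20):
    best_loss, best_state, stale = float("inf"), None, 0
    for _ in range(max_epochs):
        train_one_epoch(model)                 # one backprop pass over the data
        val = validation_loss(model)           # error on held-out data
        if val < best_loss:
            best_loss, best_state, stale = val, copy.deepcopy(model), 0
        else:
            stale += 1
            if stale >= patience:              # stop before overfitting sets in
                break
    return best_state, best_loss

# Toy usage: a fake model whose validation loss improves and then degrades.
history = iter([5.0, 4.0, 3.5, 3.6, 3.7, 3.8])
state, loss = train_with_early_stopping(
    model={"weights": [0.0]},
    train_one_epoch=lambda m: None,
    validation_loss=lambda m: next(history),
    max_epochs=6,
    patience=2,
)
print("best validation loss:", loss)   # 3.5
```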
] |
[
{
"docid": "71244969f0e3a1f64c0f0286519c7998",
"text": "In present day scenario the security and authentication is very much needed to make a safety world. Beside all security one vital issue is recognition of number plate from the car for Authorization. In the busy world everything cannot be monitor by a human, so automatic license plate recognition is one of the best application for authorization without involvement of human power. In the proposed method we have make the problem into three fold, firstly extraction of number plate region, secondly segmentation of character and finally Authorization through recognition and classification. For number plate extraction and segmentation we have used morphological based approaches where as for classification we have used Neural Network as classifier. The proposed method is working well in varieties of scenario and the performance level is quiet good..",
"title": ""
},
{
"docid": "53dc606897bd6388c729cc8138027b31",
"text": "Abstract|This paper presents transient stability and power ow models of Thyristor Controlled Reactor (TCR) and Voltage Sourced Inverter (VSI) based Flexible AC Transmission System (FACTS) Controllers. Models of the Static VAr Compensator (SVC), the Thyristor Controlled Series Compensator (TCSC), the Static VAr Compensator (STATCOM), the Static Synchronous Source Series Compensator (SSSC), and the Uni ed Power Flow Controller (UPFC) appropriate for voltage and angle stability studies are discussed in detail. Validation procedures obtained for a test system with a detailed as well as a simpli ed UPFC model are also presented and brie y discussed.",
"title": ""
},
{
"docid": "2e1a6dfb1208bc09a227c7e16ffc7b4f",
"text": "Cannabis sativa L. (Cannabaceae) is an important medicinal plant well known for its pharmacologic and therapeutic potency. Because of allogamous nature of this species, it is difficult to maintain its potency and efficacy if grown from the seeds. Therefore, chemical profile-based screening, selection of high yielding elite clones and their propagation using biotechnological tools is the most suitable way to maintain their genetic lines. In this regard, we report a simple and efficient method for the in vitro propagation of a screened and selected high yielding drug type variety of Cannabis sativa, MX-1 using synthetic seed technology. Axillary buds of Cannabis sativa isolated from aseptic multiple shoot cultures were successfully encapsulated in calcium alginate beads. The best gel complexation was achieved using 5 % sodium alginate with 50 mM CaCl2.2H2O. Regrowth and conversion after encapsulation was evaluated both under in vitro and in vivo conditions on different planting substrates. The addition of antimicrobial substance — Plant Preservative Mixture (PPM) had a positive effect on overall plantlet development. Encapsulated explants exhibited the best regrowth and conversion frequency on Murashige and Skoog medium supplemented with thidiazuron (TDZ 0.5 μM) and PPM (0.075 %) under in vitro conditions. Under in vivo conditions, 100 % conversion of encapsulated explants was obtained on 1:1 potting mix- fertilome with coco natural growth medium, moistened with full strength MS medium without TDZ, supplemented with 3 % sucrose and 0.5 % PPM. Plantlets regenerated from the encapsulated explants were hardened off and successfully transferred to the soil. These plants are selected to be used in mass cultivation for the production of biomass as a starting material for the isolation of THC as a bulk active pharmaceutical.",
"title": ""
},
{
"docid": "51446063ea738c3d06e80a5a362f795d",
"text": "This paper presents SpinLight, an indoor positioning system that uses infrared LED lamps as signal transmitters, and light sensors as receivers. The main idea is to divide the space into spatial beams originating from the light source, and identify each beam with a unique timed sequence of light signals. This sequence is created by a coded shade that covers and rotates around the LED, blocking the light or allowing it to pass through according to pre-defined patterns. The receiver, equipped with a light sensor, is able to determine its spatial beam by detecting the light signals, followed by optimization schemes to refine its location within that beam. We present both 2D and 3D localization designs, demonstrated by a prototype implementation. Experiments show that SpinLight produces a median location error of 3.8 cm, with a 95th percentile of 6.8 cm. The receiver design is very low power and thus can operate for months to years from a button coin battery.",
"title": ""
},
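The core idea in the SpinLight abstract above — identify each spatial beam by a unique timed on/off light sequence — can be illustrated with a small decoder sketch. The 4-bit code table, the per-slot sampling model and the brightness threshold are invented for illustration; they are not SpinLight's actual coding scheme.

```python
# Toy decoder: the receiver samples its light sensor once per symbol slot and
# matches the observed on/off pattern against the known per-beam codes.
BEAM_CODES = {
    (1, 0, 1, 0): "beam-A",
    (1, 1, 0, 0): "beam-B",
    (0, 1, 1, 0): "beam-C",
    (0, 0, 1, 1): "beam-D",
}

def decode_beam(samples, threshold=0.5):
    """samples: one light-intensity reading per symbol slot (floats in [0, 1])."""
    bits = tuple(1 if s > threshold else 0 for s in samples)
    return BEAM_CODES.get(bits, "unknown")

# A receiver that saw bright/dark/bright/dark slots lies in beam-A.
print(decode_beam([0.9, 0.1, 0.8, 0.2]))
```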
{
"docid": "97de6efcdba528f801cbfa087498ab3f",
"text": "Abstract: Educational Data Mining refers to techniques, tools, and research designed for automatically extracting meaning from large repositories of data generated by or related to people' learning activities in educational settings.[1] It is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from educational settings, and using those methods to better understand students, and the settings which they learn in.[2]",
"title": ""
},
{
"docid": "443a4fe9e7484a18aa53a4b142d93956",
"text": "BACKGROUND AND PURPOSE\nFrequency and duration of static stretching have not been extensively examined. Additionally, the effect of multiple stretches per day has not been evaluated. The purpose of this study was to determine the optimal time and frequency of static stretching to increase flexibility of the hamstring muscles, as measured by knee extension range of motion (ROM).\n\n\nSUBJECTS\nNinety-three subjects (61 men, 32 women) ranging in age from 21 to 39 years and who had limited hamstring muscle flexibility were randomly assigned to one of five groups. The four stretching groups stretched 5 days per week for 6 weeks. The fifth group, which served as a control, did not stretch.\n\n\nMETHODS\nData were analyzed with a 5 x 2 (group x test) two-way analysis of variance for repeated measures on one variable (test).\n\n\nRESULTS\nThe change in flexibility appeared to be dependent on the duration and frequency of stretching. Further statistical analysis of the data indicated that the groups that stretched had more ROM than did the control group, but no differences were found among the stretching groups.\n\n\nCONCLUSION AND DISCUSSION\nThe results of this study suggest that a 30-second duration is an effective amount of time to sustain a hamstring muscle stretch in order to increase ROM. No increase in flexibility occurred when the duration of stretching was increased from 30 to 60 seconds or when the frequency of stretching was increased from one to three times per day.",
"title": ""
},
{
"docid": "f2c8af1f4bcf7115fc671ae9922adbb3",
"text": "Extracting insights from temporal event sequences is an important challenge. In particular, mining frequent patterns from event sequences is a desired capability for many domains. However, most techniques for mining frequent patterns are ineffective for real-world data that may be low-resolution, concurrent, or feature many types of events, or the algorithms may produce results too complex to interpret. To address these challenges, we propose Frequence, an intelligent user interface that integrates data mining and visualization in an interactive hierarchical information exploration system for finding frequent patterns from longitudinal event sequences. Frequence features a novel frequent sequence mining algorithm to handle multiple levels-of-detail, temporal context, concurrency, and outcome analysis. Frequence also features a visual interface designed to support insights, and support exploration of patterns of the level-of-detail relevant to users. Frequence's effectiveness is demonstrated with two use cases: medical research mining event sequences from clinical records to understand the progression of a disease, and social network research using frequent sequences from Foursquare to understand the mobility of people in an urban environment.",
"title": ""
},
{
"docid": "d4aca467d0014b2c2359f5609a1a199b",
"text": "MATLAB is specifically designed for simulating dynamic systems. This paper describes a method of modelling impulse voltage generator using Simulink, an extension of MATLAB. The equations for modelling have been developed and a corresponding Simulink model has been constructed. It shows that Simulink program becomes very useful in studying the effect of parameter changes in the design to obtain the desired impulse voltages and waveshapes from an impulse generator.",
"title": ""
},
{
"docid": "34b4a91dac887d6d0c7387baae9fd0a2",
"text": "Robert Burns wrote: “The best laid schemes of Mice and Men oft go awry”. This could be considered the motto of most educational innovation. The question that arises is not so much why some innovations fail (although this is very important question), but rather why other innovations succeed? This study investigated the success factors of large-scale educational innovation projects in Dutch higher education. The research team attempted to identify success factors that might be relevant to educational innovation projects. The research design was largely qualitative, with a guided interview as the primary means of data collection, followed by data analysis and a correlation of findings with the success factors identified in the literature review. In order to pursue the research goal, a literature review of success factors was first conducted to identify existing knowledge in this area, followed by a detailed study of the educational innovation projects that have been funded by SURF Education. To obtain a list of potential success factors, existing project documentation and evaluations were reviewed and the project chairs and other important players were interviewed. Reports and evaluations by the projects themselves were reviewed to extract commonalities and differences in the factors that the projects felt were influential in their success of educational innovation. In the next phase of the project experts in the field of project management, project chairs of successful projects and evaluators/raters of projects will be asked to pinpoint factors of importance that were facilitative or detrimental to the outcome of their projects and implementation of the innovations. After completing the interviews all potential success factors will be recorded and clustered using an affinity technique. The clusters will then be labeled and clustered, creating a hierarchy of potential success factors. The project chairs will finally be asked to select the five most important success factors out of the hierarchy, and to rank their importance. This technique – the Experts’ Concept Mapping Method – is based upon Trochim’s concept mapping approach (1989a, 1989b) and was developed and perfected by Stoyanov and Kirschner (2004). Finally, the results will lead to a number of instruments as well as a functional procedure for tendering, selecting and monitoring innovative educational projects. The identification of success factors for educational innovation projects and measuring performance of projects based upon these factors are important as they can aid the development and implementation of innovation projects by explicating and making visible (and thus manageable) those success and failure factors relating to educational innovation projects in higher education. Determinants for Failure and Success of Innovation Projects: The Road to Sustainable Educational Innovation The Dutch Government has invested heavily in stimulating better and more creative use of information and communication technologies (ICT) in all forms of education. The ultimate goal of this investment is to ensure that students and teachers are equipped with the skills and knowledge required for success in the new knowledge-based economy. All stakeholders (i.e., government, industry, educational institutions, society in general) have placed high priority on achieving this goal. However, these highly funded projects have often resulted in either short-lived or local successes or outright failures (see De Bie,",
"title": ""
},
{
"docid": "071c6e558a0991da4201ae0d966ec391",
"text": "This work presents a scalable solution to open-vocabulary visual speech recognition. To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of text and video clips of faces speaking (3,886 hours of video). In tandem, we designed and trained an integrated lipreading system, consisting of a video processing pipeline that maps raw video to stable videos of lips and sequences of phonemes, a scalable deep neural network that maps the lip videos to sequences of phoneme distributions, and a production-level speech decoder that outputs sequences of words. The proposed system achieves a word error rate (WER) of 40.9% as measured on a held-out set. In comparison, professional lipreaders achieve either 86.4% or 92.9% WER on the same dataset when having access to additional types of contextual information. Our approach significantly improves on other lipreading approaches, including variants of LipNet and of Watch, Attend, and Spell (WAS), which are only capable of 89.8% and 76.8% WER respectively.",
"title": ""
},
{
"docid": "48b25af0d0e0bed6315b0dcf4e6573b3",
"text": "Datasets published in the LOD cloud are recommended to follow some best practice in order to be 4-5 stars Linked Data compliant. They can often be consumed and accessed by different means such as API access, bulk download or as linked data fragments, but most of the time, a SPARQL endpoint is also provided. While the LOD cloud keeps growing, having a quick glimpse of those datasets is getting harder and there is a need to develop new methods enabling to detect automatically what an arbitrary dataset is about and to recommend visualizations for data samples. We consider that “a visualization is worth a million triples”, and in this paper, we propose a novel approach that mines the content of datasets and automatically generates visualizations. Our approach is directly based on the usage of SPARQL queries that will detect the important categories of a dataset and that will specifically consider the properties used by the objects which have been interlinked via owl:sameAs links. We then propose to associate type of visualization for those categories. We have implemented this approach into a so-called Linked Data Vizualization Wizard (LDVizWiz).",
"title": ""
},
{
"docid": "a2a4908ab05abc1fe62c149d0012c031",
"text": "Model compression is significant for wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and in business clusters requiring quick responses to large-scale service requests. In this work, we focus on reducing the sizes of basic structures (including input updates, gates, hidden states, cell states and outputs) within Long Short-Term Memory (LSTM) units, so as to learn structurally-sparse LSTMs. Independently reducing the sizes of those basic structures can result in unmatched dimensions among them, and consequently, end up with invalid LSTM units. To overcome this, we propose Intrinsic Sparse Structures (ISS) in LSTMs. By reducing one component of ISS, the sizes of those basic structures are simultaneously reduced by one such that the consistency of dimensions is maintained. By learning ISS within LSTM units, the eventual LSTMs are still regular LSTMs but have much smaller sizes of basic structures. Our method achieves 10.59× speedup in state-of-the-art LSTMs, without losing any perplexity of language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our source code is public available1.",
"title": ""
},
{
"docid": "29d43e9ec2afa314c4a00f26ce816e7e",
"text": "The aim of this paper is to discuss about various feature selection algorithms applied on different datasets to select the relevant features to classify data into binary and multi class in order to improve the accuracy of the classifier. Recent researches in medical diagnose uses the different kind of classification algorithms to diagnose the disease. For predicting the disease, the classification algorithm produces the result as binary class. When there is a multiclass dataset, the classification algorithm reduces the dataset into a binary class for simplification purpose by using any one of the data reduction methods and the algorithm is applied for prediction. When data reduction on original dataset is carried out, the quality of the data may degrade and the accuracy of an algorithm will get affected. To maintain the effectiveness of the data, the multiclass data must be treated with its original form without maximum reduction, and the algorithm can be applied on the dataset for producing maximum accuracy. Dataset with maximum number of attributes like thousands must incorporate the best feature selection algorithm for selecting the relevant features to reduce the space and time complexity. The performance of Classification algorithm is estimated by how accurately it predicts the individual class on particular dataset. The accuracy constrain mainly depends on the selection of appropriate features from the original dataset. The feature selection algorithms play an important role in classification for better performance. The feature selection is one of",
"title": ""
},
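Since the passage above argues that selecting relevant features (rather than collapsing a multiclass problem to binary) drives classifier accuracy, a brief illustration with a standard univariate filter follows. scikit-learn's SelectKBest with the ANOVA F-score is just one possible filter, not one of the specific algorithms surveyed in the paper.

```python
# Minimal illustration of filter-based feature selection on a multiclass
# dataset, keeping the original three classes instead of reducing to binary.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)          # 3 classes, 4 features

for k in (2, 4):                           # with and without discarding features
    model = make_pipeline(SelectKBest(f_classif, k=k), KNeighborsClassifier())
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"top-{k} features -> mean CV accuracy {acc:.3f}")
```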
{
"docid": "4e7443088eedf5e6199959a06ebc420c",
"text": "The development of computational-intelligence based strategies for electronic markets has been the focus of intense research. In order to be able to design efficient and effective automated trading strategies, one first needs to understand the workings of the market, the strategies that traders use and their interactions as well as the patterns emerging as a result of these interactions. In this paper, we develop an agent-based model of the FX market which is the market for the buying and selling of currencies. Our agent-based model of the FX market (ABFXM) comprises heterogeneous trading agents which employ a strategy that identifies and responds to periodic patterns in the price time series. We use the ABFXM to undertake a systematic exploration of its constituent elements and their impact on the stylized facts (statistical patterns) of transactions data. This enables us to identify a set of sufficient conditions which result in the emergence of the stylized facts similarly to the real market data, and formulate a model which closely approximates the stylized facts. We use a unique high frequency dataset of historical transactions data which enables us to run multiple simulation runs and validate our approach and draw comparisons and conclusions for each market setting.",
"title": ""
},
{
"docid": "879af50edd27c74bde5b656d0421059a",
"text": "In this thesis we present an approach to adapt the Single Shot multibox Detector (SSD) for face detection. Our experiments are performed on the WIDER dataset which contains a large amount of small faces (faces of 50 pixels or less). The results show that the SSD method performs poorly on the small/hard subset of this dataset. We analyze the influence of increasing the resolution during inference and training time. Building on this analysis we present two additions to the SSD method. The first addition is changing the SSD architecture to an image pyramid architecture. The second addition is creating a selection criteria on each of the different branches of the image pyramid architecture. The results show that increasing the resolution, even during inference, increases the performance for the small/hard subset. By combining resolutions in an image pyramid structure we observe that the performance keeps consistent across different sizes of faces. Finally, the results show that adding a selection criteria on each branch of the image pyramid further increases performance, because the selection criteria negates the competing behaviour of the image pyramid. We conclude that our approach not only increases performance on the small/hard subset of the WIDER dataset but keeps on performing well on the large subset.",
"title": ""
},
{
"docid": "91cd1546f366726a32038b5f78ae1d16",
"text": "ns c is LBNL’s Network Simulator [20]. The simulator is written in C++; it uses OTcl a s command and configuration interface.nsv2 has three substantial changes from nsv1: (1) the more complex objects in nsv1 have been decomposed into simpler components for greater flexibility and composabili ty; (2) the configuration interface is now OTcl, an object ori ented version of Tcl; and (3) the interface code to the OTcl interpr te is separate from the main simulator. Ns documentation is available in html, Postscript, and PDF f ormats. Seehttp://www.isi.edu/nsnam/ns/ns-documentation. html for pointers to these.",
"title": ""
},
{
"docid": "6fbf1dff8df2c97f44e236a9c7ffac2a",
"text": "The generation of multimode orbital angular momentum (OAM) carrying beams has attracted more and more attention. A broadband dual-polarized dual-OAM-mode uniform circular array is proposed in this letter. The proposed antenna array, which consists of a broadband dual-polarized bow-tie dipole array and a broadband phase-shifting feeding network, can be used to generate OAM mode −1 and OAM mode 1 beams from 2.1 to 2.7 GHz (a bandwidth of 25%) for each of two polarizations. Four orthogonal channels can be provided by the proposed antenna array. A 2.5-m broadband OAM link is built. The measured crosstalk between the mode matched channels and the mode mismatched channels is less than −12 dB at 2.1, 2.4, and 2.7 GHz. Four different data streams are transmitted simultaneously by the proposed array with a bit error rate less than 4.2×10-3 at 2.1, 2.4, and 2.7 GHz.",
"title": ""
},
{
"docid": "a9b0d197e41fc328502c71c0ddf7b91e",
"text": "We propose a new full-rate space-time block code (STBC) for two transmit antennas which can be designed to achieve maximum diversity or maximum capacity while enjoying optimized coding gain and reduced-complexity maximum-likelihood (ML) decoding. The maximum transmit diversity (MTD) construction provides a diversity order of 2Nr for any number of receive antennas Nr at the cost of channel capacity loss. The maximum channel capacity (MCC) construction preserves the mutual information between the transmit and the received vectors while sacrificing diversity. The system designer can switch between the two constructions through a simple parameter change based on the operating signal-to-noise ratio (SNR), signal constellation size and number of receive antennas. Thanks to their special algebraic structure, both constructions enjoy low-complexity ML decoding proportional to the square of the signal constellation size making them attractive alternatives to existing full-diversity full-rate STBCs in [6], [3] which have high ML decoding complexity proportional to the fourth order of the signal constellation size. Furthermore, we design a differential transmission scheme for our proposed STBC, derive the exact ML differential decoding rule, and compare its performance with competitive schemes. Finally, we investigate transceiver design and performance of our proposed STBC in spatial multiple-access scenarios and over frequency-selective channels.",
"title": ""
},
{
"docid": "1256f0799ed585092e60b50fb41055be",
"text": "So far, plant identification has challenges for sev eral researchers. Various methods and features have been proposed. However, there are still many approaches could be investigated to develop robust plant identification systems. This paper reports several xperiments in using Zernike moments to build folia ge plant identification systems. In this case, Zernike moments were combined with other features: geometr ic features, color moments and gray-level co-occurrenc e matrix (GLCM). To implement the identifications systems, two approaches has been investigated. Firs t approach used a distance measure and the second u sed Probabilistic Neural Networks (PNN). The results sh ow that Zernike Moments have a prospect as features in leaf identification systems when they are combin ed with other features.",
"title": ""
},
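The abstract above compares a plain distance measure with a Probabilistic Neural Network for classifying the combined feature vectors (Zernike moments plus geometric, color-moment and GLCM features). A PNN is essentially a Parzen-window classifier, which can be sketched compactly as below; the feature extraction is assumed to have happened upstream, and the smoothing parameter and toy data are illustrative choices.

```python
# Compact Parzen-window PNN classifier over pre-computed leaf feature vectors.
import numpy as np

class SimplePNN:
    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y)
        self.classes = np.unique(self.y)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        preds = []
        for x in X:
            d2 = np.sum((self.X - x) ** 2, axis=1)       # squared distances
            k = np.exp(-d2 / (2.0 * self.sigma ** 2))    # Gaussian kernel
            scores = [k[self.y == c].mean() for c in self.classes]
            preds.append(self.classes[int(np.argmax(scores))])
        return np.array(preds)

# Toy usage with made-up 5-D feature vectors for two leaf species.
rng = np.random.default_rng(2)
X_train = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y_train = np.array([0] * 20 + [1] * 20)
model = SimplePNN(sigma=1.0).fit(X_train, y_train)
print(model.predict(rng.normal(3, 1, (3, 5))))   # expected mostly class 1
```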
{
"docid": "b4529985e1fa4e156900c9825fc1c6f9",
"text": "This paper presents the SWaT testbed, a modern industrial control system (ICS) for security research and training. SWaT is currently in use to (a) understand the impact of cyber and physical attacks on a water treatment system, (b) assess the effectiveness of attack detection algorithms, (c) assess the effectiveness of defense mechanisms when the system is under attack, and (d) understand the cascading effects of failures in one ICS on another dependent ICS. SWaT consists of a 6-stage water treatment process, each stage is autonomously controlled by a local PLC. The local fieldbus communications between sensors, actuators, and PLCs is realized through alternative wired and wireless channels. While the experience with the testbed indicates its value in conducting research in an active and realistic environment, it also points to design limitations that make it difficult for system identification and attack detection in some experiments.",
"title": ""
}
] |
scidocsrr
|
524c817ec1f456df3dcb2d52a17995c9
|
Predicting online e-marketplace sales performances: A big data approach
|
[
{
"docid": "66ad4513ed36329c299792ce35b2b299",
"text": "Reducing social uncertainty—understanding, predicting, and controlling the behavior of other people—is a central motivating force of human behavior. When rules and customs are not su4cient, people rely on trust and familiarity as primary mechanisms to reduce social uncertainty. The relative paucity of regulations and customs on the Internet makes consumer familiarity and trust especially important in the case of e-Commerce. Yet the lack of an interpersonal exchange and the one-time nature of the typical business transaction on the Internet make this kind of consumer trust unique, because trust relates to other people and is nourished through interactions with them. This study validates a four-dimensional scale of trust in the context of e-Products and revalidates it in the context of e-Services. The study then shows the in:uence of social presence on these dimensions of this trust, especially benevolence, and its ultimate contribution to online purchase intentions. ? 2004 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "721a64c9a5523ba836318edcdb8de021",
"text": "Highly-produced audio stories often include musical scores that reflect the emotions of the speech. Yet, creating effective musical scores requires deep expertise in sound production and is time-consuming even for experts. We present a system and algorithm for re-sequencing music tracks to generate emotionally relevant music scores for audio stories. The user provides a speech track and music tracks and our system gathers emotion labels on the speech through hand-labeling, crowdsourcing, and automatic methods. We develop a constraint-based dynamic programming algorithm that uses these emotion labels to generate emotionally relevant musical scores. We demonstrate the effectiveness of our algorithm by generating 20 musical scores for audio stories and showing that crowd workers rank their overall quality significantly higher than stories without music.",
"title": ""
},
{
"docid": "140a9255e8ee104552724827035ee10a",
"text": "Our goal is to design architectures that retain the groundbreaking performance of CNNs for landmark localization and at the same time are lightweight, compact and suitable for applications with limited computational resources. To this end, we make the following contributions: (a) we are the first to study the effect of neural network binarization on localization tasks, namely human pose estimation and face alignment. We exhaustively evaluate various design choices, identify performance bottlenecks, and more importantly propose multiple orthogonal ways to boost performance. (b) Based on our analysis, we propose a novel hierarchical, parallel and multi-scale residual architecture that yields large performance improvement over the standard bottleneck block while having the same number of parameters, thus bridging the gap between the original network and its binarized counterpart. (c) We perform a large number of ablation studies that shed light on the properties and the performance of the proposed block. (d) We present results for experiments on the most challenging datasets for human pose estimation and face alignment, reporting in many cases state-of-the-art performance. Code can be downloaded from https://www.adrianbulat.com/binary-cnn-landmarks",
"title": ""
},
{
"docid": "9c4c13c38e2b96aa3141b1300ca356c6",
"text": "Awareness plays a major role in human cognition and adaptive behaviour, though mechanisms involved remain unknown. Awareness is not an objectively established fact, therefore, despite extensive research, scientists have not been able to fully interpret its contribution in multisensory integration and precise neural firing, hence, questions remain: (1) How the biological neuron integrates the incoming multisensory signals with respect to different situations? (2) How are the roles of incoming multisensory signals defined (selective amplification or attenuation) that help neuron(s) to originate a precise neural firing complying with the anticipated behavioural-constraint of the environment? (3) How are the external environment and anticipated behaviour integrated? Recently, scientists have exploited deep learning architectures to integrate multimodal cues and capture context-dependent meanings. Yet, these methods suffer from imprecise behavioural representation and a limited understanding of neural circuitry or underlying information processing mechanisms with respect to the outside world. In this research, we introduce a new theory on the role of awareness and universal context that can help answering the aforementioned crucial neuroscience questions. Specifically, we propose a class of spiking conscious neuron in which the output depends on three functionally distinctive integrated input variables: receptive field (RF), local contextual field (LCF), and universal contextual field (UCF) a newly proposed dimension. The RF defines the incoming ambiguous sensory signal, LCF defines the modulatory sensory signal coming from other parts of the brain, and UCF defines the awareness. It is believed that the conscious neuron inherently contains enough knowledge about the situation in which the problem is to be solved based on past learning and reasoning and it defines the precise role of incoming multisensory signals (amplification or attenuation) to originate a precise neural firing (exhibiting switch-like behaviour). It is shown, when implemented within an SCNN, the conscious neuron helps modelling a more precise human behaviour e.g., when exploited to model human audiovisual speech processing, the SCNN performed comparably to deep long-short-term memory (LSTM) network. We believe that the proposed theory could be applied to address a range of real-world problems including elusive neural disruptions, explainable artificial intelligence, human-like computing, low-power neuromorphic chips etc.",
"title": ""
},
{
"docid": "d3eeb9e96881dc3bd60433bdf3e89749",
"text": "The first € price and the £ and $ price are net prices, subject to local VAT. Prices indicated with * include VAT for books; the €(D) includes 7% for Germany, the €(A) includes 10% for Austria. Prices indicated with ** include VAT for electronic products; 19% for Germany, 20% for Austria. All prices exclusive of carriage charges. Prices and other details are subject to change without notice. All errors and omissions excepted. M. Bushnell, V.D. Agrawal Essentials of Electronic Testing for Digital, Memory and MixedSignal VLSI Circuits",
"title": ""
},
{
"docid": "486e15d89ea8d0f6da3b5133c9811ee1",
"text": "Frequency-modulated continuous wave radar systems suffer from permanent leakage of the transmit signal into the receive path. Besides leakage within the radar device itself, an unwanted object placed in front of the antennas causes so-called short-range (SR) leakage. In an automotive application, for instance, it originates from signal reflections of the car’s own bumper. Particularly the residual phase noise of the downconverted SR leakage signal causes a severe degradation of the achievable sensitivity. In an earlier work, we proposed an SR leakage cancellation concept that is feasible for integration in a monolithic microwave integrated circuit. In this brief, we present a hardware prototype that holistically proves our concept with discrete components. The fundamental theory and properties of the concept are proven with measurements. Further, we propose a digital design for real-time operation of the cancellation algorithm on a field programmable gate array. Ultimately, by employing measurements with a bumper mounted in front of the antennas, we show that the leakage canceller significantly improves the sensitivity of the radar.",
"title": ""
},
{
"docid": "226bdf9c36a13900cf11f37bef816f04",
"text": "We describe a new class of subsampling techniques for CNNs, termed multisampling, that significantly increases the amount of information kept by feature maps through subsampling layers. One version of our method, which we call checkered subsampling, significantly improves the accuracy of state-of-the-art architectures such as DenseNet and ResNet without any additional parameters and, remarkably, improves the accuracy of certain pretrained ImageNet models without any training or fine-tuning. We glean new insight into the nature of data augmentations and demonstrate, for the first time, that coarse feature maps are significantly bottlenecking the performance of neural networks in image classification.",
"title": ""
},
{
"docid": "ecd4dd9d8807df6c8194f7b4c7897572",
"text": "Nitric oxide (NO) mediates activation of satellite precursor cells to enter the cell cycle. This provides new precursor cells for skeletal muscle growth and muscle repair from injury or disease. Targeting a new drug that specifically delivers NO to muscle has the potential to promote normal function and treat neuromuscular disease, and would also help to avoid side effects of NO from other treatment modalities. In this research, we examined the effectiveness of the NO donor, iosorbide dinitrate (ISDN), and a muscle relaxant, methocarbamol, in promoting satellite cell activation assayed by muscle cell DNA synthesis in normal adult mice. The work led to the development of guaifenesin dinitrate (GDN) as a new NO donor for delivering nitric oxide to muscle. The results revealed that there was a strong increase in muscle satellite cell activation and proliferation, demonstrated by a significant 38% rise in DNA synthesis after a single transdermal treatment with the new compound for 24 h. Western blot and immunohistochemistry analyses showed that the markers of satellite cell myogenesis, expression of myf5, myogenin, and follistatin, were increased after 24 h oral administration of the compound in adult mice. This research extends our understanding of the outcomes of NO-based treatments aimed at promoting muscle regeneration in normal tissue. The potential use of such treatment for conditions such as muscle atrophy in disuse and aging, and for the promotion of muscle tissue repair as required after injury or in neuromuscular diseases such as muscular dystrophy, is highlighted.",
"title": ""
},
{
"docid": "fef24d203d0a2e5d52aa887a0a442cf3",
"text": "The property that has given humans a dominant advantage over other species is not strength or speed, but intelligence. If progress in artificial intelligence continues unabated, AI systems will eventually exceed humans in general reasoning ability. A system that is “superintelligent” in the sense of being “smarter than the best human brains in practically every field” could have an enormous impact upon humanity (Bostrom 2014). Just as human intelligence has allowed us to develop tools and strategies for controlling our environment, a superintelligent system would likely be capable of developing its own tools and strategies for exerting control (Muehlhauser and Salamon 2012). In light of this potential, it is essential to use caution when developing AI systems that can exceed human levels of general intelligence, or that can facilitate the creation of such systems.",
"title": ""
},
{
"docid": "75ef3706a44edf1a96bcb0ce79b07761",
"text": "Bag-of-words (BOW), which represents an image by the histogram of local patches on the basis of a visual vocabulary, has attracted intensive attention in visual categorization due to its good performance and flexibility. Conventional BOW neglects the contextual relations between local patches due to its Naïve Bayesian assumption. However, it is well known that contextual relations play an important role for human beings to recognize visual categories from their local appearance. This paper proposes a novel contextual bag-of-words (CBOW) representation to model two kinds of typical contextual relations between local patches, i.e., a semantic conceptual relation and a spatial neighboring relation. To model the semantic conceptual relation, visual words are grouped on multiple semantic levels according to the similarity of class distribution induced by them, accordingly local patches are encoded and images are represented. To explore the spatial neighboring relation, an automatic term extraction technique is adopted to measure the confidence that neighboring visual words are relevant. Word groups with high relevance are used and their statistics are incorporated into the BOW representation. Classification is taken using the support vector machine with an efficient kernel to incorporate the relational information. The proposed approach is extensively evaluated on two kinds of visual categorization tasks, i.e., video event and scene categorization. Experimental results demonstrate the importance of contextual relations of local patches and the CBOW shows superior performance to conventional BOW.",
"title": ""
},
{
"docid": "22b52198123909ff7b9a7d296eb88f7e",
"text": "This paper addresses the problem of outdoor terrain modeling for the purposes of mobile robot navigation. We propose an approach in which a robot acquires a set of terrain models at differing resolutions. Our approach addresses one of the major shortcomings of Bayesian reasoning when applied to terrain modeling, namely artifacts that arise from the limited spatial resolution of robot perception. Limited spatial resolution causes small obstacles to be detectable only at close range. Hence, a Bayes filter estimating the state of terrain segments must consider the ranges at which that terrain is observed. We develop a multi-resolution approach that maintains multiple navigation maps, and derive rational arguments for the number of layers and their resolutions. We show that our approach yields significantly better results in a practical robot system, capable of acquiring detailed 3-D maps in large-scale outdoor environments.",
"title": ""
},
{
"docid": "bbb06abacfd8f4eb01fac6b11a4447bf",
"text": "In this paper, we present a novel tightly-coupled monocular visual-inertial Simultaneous Localization and Mapping algorithm following an inertial assisted Kalman Filter and reusing the estimated 3D map. By leveraging an inertial assisted Kalman Filter, we achieve an efficient motion tracking bearing fast dynamic movement in the front-end. To enable place recognition and reduce the trajectory estimation drift, we construct a factor graph based non-linear optimization in the back-end. We carefully design a feedback mechanism to balance the front/back ends ensuring the estimation accuracy. We also propose a novel initialization method that accurately estimate the scale factor, the gravity, the velocity, and gyroscope and accelerometer biases in a very robust way. We evaluated the algorithm on a public dataset, when compared to other state-of-the-art monocular Visual-Inertial SLAM approaches, our algorithm achieves better accuracy and robustness in an efficient way. By the way, we also evaluate our algorithm in a MonocularInertial setup with a low cost IMU to achieve a robust and lowdrift realtime SLAM system.",
"title": ""
},
{
"docid": "27bff398452f746a643bd3f4fcff2949",
"text": "Spectrum management is a crucial task in wireless networks. The research in cognitive radio networks by applying Markov is highly significant suitable model for spectrum management. This research work is the simulation study of variants of basic Markov models with a specific application for channel allocation problem in cognitive radio networks by applying continuous Markov process. The Markov channel allocation model is designed and implemented in MATLAB environment, and simulation results are analyzed.",
"title": ""
},
{
"docid": "3afa9f84c76bdca939c0a3dc645b4cbf",
"text": "Recurrent neural networks are theoretically capable of learning complex temporal sequences, but training them through gradient-descent is too slow and unstable for practical use in reinforcement learning environments. Neuroevolution, the evolution of artificial neural networks using genetic algorithms, can potentially solve real-world reinforcement learning tasks that require deep use of memory, i.e. memory spanning hundreds or thousands of inputs, by searching the space of recurrent neural networks directly. In this paper, we introduce a new neuroevolution algorithm called Hierarchical Enforced SubPopulations that simultaneously evolves networks at two levels of granularity: full networks and network components or neurons. We demonstrate the method in two POMDP tasks that involve temporal dependencies of up to thousands of time-steps, and show that it is faster and simpler than the current best conventional reinforcement learning system on these tasks.",
"title": ""
},
{
"docid": "95b825ee3290572189ba8d6957b6a307",
"text": "This paper proposes a working definition of the term gamification as the use of game design elements in non-game contexts. This definition is related to similar concepts such as serious games, serious gaming, playful interaction, and game-based technologies. Origins Gamification as a term originated in the digital media industry. The first documented uses dates back to 2008, but gamification only entered widespread adoption in the second half of 2010, when several industry players and conferences popularized it. It is also—still—a heavily contested term; even its entry into Wikipedia has been contested. Within the video game and digital media industry, discontent with some interpretations have already led designers to coin different terms for their own practice (e.g., gameful design) to distance themselves from recent negative connotations [13]. Until now, there has been hardly any academic attempt at a definition of gamification. Current uses of the word seem to fluctuate between two major ideas. The first is the increasing societal adoption and institutionalization of video games and the influence games and game elements have in shaping our everyday life and interactions. Game designer Jesse Schell summarized this as the trend towards a Gamepocalypse, \" when Copyright is held by the author/owner(s).",
"title": ""
},
{
"docid": "4f186e992cd7d5eadb2c34c0f26f4416",
"text": "a r t i c l e i n f o Mobile devices, namely phones and tablets, have long gone \" smart \". Their growing use is both a cause and an effect of their technological advancement. Among the others, their increasing ability to store and exchange sensitive information, has caused interest in exploiting their vulnerabilities, and the opposite need to protect users and their data through secure protocols for access and identification on mobile platforms. Face and iris recognition are especially attractive, since they are sufficiently reliable, and just require the webcam normally equipping the involved devices. On the contrary, the alternative use of fingerprints requires a dedicated sensor. Moreover, some kinds of biometrics lend themselves to uses that go beyond security. Ambient intelligence services bound to the recognition of a user, as well as social applications, such as automatic photo tagging on social networks, can especially exploit face recognition. This paper describes FIRME (Face and Iris Recognition for Mobile Engagement) as a biometric application based on a multimodal recognition of face and iris, which is designed to be embedded in mobile devices. Both design and implementation of FIRME rely on a modular architecture , whose workflow includes separate and replaceable packages. The starting one handles image acquisition. From this point, different branches perform detection, segmentation, feature extraction, and matching for face and iris separately. As for face, an antispoofing step is also performed after segmentation. Finally, results from the two branches are fused. In order to address also security-critical applications, FIRME can perform continuous reidentification and best sample selection. To further address the possible limited resources of mobile devices, all algorithms are optimized to be low-demanding and computation-light. The term \" mobile \" referred to capture equipment for different kinds of signals, e.g. images, has been long used in many cases where field activities required special portability and flexibility. As an example we can mention mobile biometric identification devices used by the U.S. army for different kinds of security tasks. Due to the critical task involving them, such devices have to offer remarkable quality, in terms of resolution and quality of the acquired data. Notwithstanding this formerly consolidated reference for the term mobile, nowadays, it is most often referred to modern phones, tablets and similar smart devices, for which new and engaging applications are designed. For this reason, from now on, the term mobile will refer only to …",
"title": ""
},
{
"docid": "046f6c5cc6065c1cb219095fb0dfc06f",
"text": "In this paper, we describe COLABA, a large effort to create resources and processing tools for Dialectal Arabic Blogs. We describe the objectives of the project, the process flow and the interaction between the different components. We briefly describe the manual annotation effort and the resources created. Finally, we sketch how these resources and tools are put together to create DIRA, a termexpansion tool for information retrieval over dialectal Arabic collections using Modern Standard Arabic queries.",
"title": ""
},
{
"docid": "8b2d6ce5158c94f2e21ff4ebd54af2b5",
"text": "Chambers and Jurafsky (2009) demonstrated that event schemas can be automatically induced from text corpora. However, our analysis of their schemas identifies several weaknesses, e.g., some schemas lack a common topic and distinct roles are incorrectly mixed into a single actor. It is due in part to their pair-wise representation that treats subjectverb independently from verb-object. This often leads to subject-verb-object triples that are not meaningful in the real-world. We present a novel approach to inducing open-domain event schemas that overcomes these limitations. Our approach uses cooccurrence statistics of semantically typed relational triples, which we call Rel-grams (relational n-grams). In a human evaluation, our schemas outperform Chambers’s schemas by wide margins on several evaluation criteria. Both Rel-grams and event schemas are freely available to the research community.",
"title": ""
},
{
"docid": "7fc35d2bb27fb35b5585aad8601a0cbd",
"text": "We introduce Anita: a flexible and intelligent Text Adaptation tool for web content that provides Text Simplification and Text Enhancement modules. Anita’s simplification module features a state-of-the-art system that adapts texts according to the needs of individual users, and its enhancement module allows the user to search for a word’s definitions, synonyms, translations, and visual cues through related images. These utilities are brought together in an easy-to-use interface of a freely available web browser extension.",
"title": ""
},
{
"docid": "f5ac213265b9ac8674af92fb2541cebd",
"text": "BACKGROUND\nCorneal oedema is a common post-operative problem that delays or prevents visual recovery from ocular surgery. Honey is a supersaturated solution of sugars with an acidic pH, high osmolarity and low water content. These characteristics inhibit the growth of micro-organisms, reduce oedema and promote epithelialisation. This clinical case series describes the use of a regulatory approved Leptospermum species honey ophthalmic product, in the management of post-operative corneal oedema and bullous keratopathy.\n\n\nMETHODS\nA retrospective review of 18 consecutive cases (30 eyes) with corneal oedema persisting beyond one month after single or multiple ocular surgical procedures (phacoemulsification cataract surgery and additional procedures) treated with Optimel Antibacterial Manuka Eye Drops twice to three times daily as an adjunctive therapy to conventional topical management with corticosteroid, aqueous suppressants, hypertonic sodium chloride five per cent, eyelid hygiene and artificial tears. Visual acuity and central corneal thickness were measured before and at the conclusion of Optimel treatment.\n\n\nRESULTS\nA temporary reduction in corneal epithelial oedema lasting up to several hours was observed after the initial Optimel instillation and was associated with a reduction in central corneal thickness, resolution of epithelial microcysts, collapse of epithelial bullae, improved corneal clarity, improved visualisation of the intraocular structures and improved visual acuity. Additionally, with chronic use, reduction in punctate epitheliopathy, reduction in central corneal thickness and improvement in visual acuity were achieved. Temporary stinging after Optimel instillation was experienced. No adverse infectious or inflammatory events occurred during treatment with Optimel.\n\n\nCONCLUSIONS\nOptimel was a safe and effective adjunctive therapeutic strategy in the management of persistent post-operative corneal oedema and warrants further investigation in clinical trials.",
"title": ""
},
{
"docid": "857d8003dff05b8e1ba5eeb8f6b3c14e",
"text": "Traditional static spectrum allocation policies have been to grant each wireless service exclusive usage of certain frequency bands, leaving several spectrum bands unlicensed for industrial, scientific and medical purposes. The rapid proliferation of low-cost wireless applications in unlicensed spectrum bands has resulted in spectrum scarcity among those bands. Since most applications in Wireless Sensor Networks (WSNs) utilize the unlicensed spectrum, network-wide performance of WSNs will inevitably degrade as their popularity increases. Sharing of under-utilized licensed spectrum among unlicensed devices is a promising solution to the spectrum scarcity issue. Cognitive Radio (CR) is a new paradigm in wireless communication that allows sensor nodes as the unlicensed users or Secondary Users (SUs) to detect and use the under-utilized licensed spectrum temporarily. Given that the licensed or Primary Users (PUs) are oblivious to the presence of SUs, the SUs access the licensed spectrum opportunistically without interfering the PUs, while improving their own performance. In this paper, we propose an approach to build Cognitive Radio-based Wireless Sensor Networks (CR-WSNs). We believe that CR-WSN is the next-generation WSN. Realizing that both WSNs and CR present unique challenges to the design of CR-WSNs, we provide an overview and conceptual design of WSNs from the perspective of CR. The open issues are discussed to motivate new research interests in this field. We also present our method to achieving context-awareness and intelligence, which are the key components in CR networks, to address an open issue in CR-WSN.",
"title": ""
}
] |
scidocsrr
|
9aec7682c9507086ab1022b9cec8ac9c
|
Pricing Digital Marketing: Information, Risk Sharing and Performance
|
[
{
"docid": "f7562e0540e65fdfdd5738d559b4aad1",
"text": "An important aspect of marketing practice is the targeting of consumer segments for differential promotional activity. The premise of this activity is that there exist distinct segments of homogeneous consumers who can be identified by readily available demographic information. The increased availability of individual consumer panel data open the possibility of direct targeting of individual households. The goal of this paper is to assess the information content of various information sets available for direct marketing purposes. Information on the consumer is obtained from the current and past purchase history as well as demographic characteristics. We consider the situation in which the marketer may have access to a reasonably long purchase history which includes both the products purchased and information on the causal environment. Short of this complete purchase history, we also consider more limited information sets which consist of only the current purchase occasion or only information on past product choice without causal variables. Proper evaluation of this information requires a flexible model of heterogeneity which can accommodate observable and unobservable heterogeneity as well as produce household level inferences for targeting purposes. We develop new econometric methods to imple0732-2399/96/1504/0321$01.25 Copyright C 1996, Institute for Operations Research and the Management Sciences ment a random coefficient choice model in which the heterogeneity distribution is related to observable demographics. We couple this approach to modeling heterogeneity with a target couponing problem in which coupons are customized to specific households on the basis of various information sets. The couponing problem allows us to place a monetary value on the information sets. Our results indicate there exists a tremendous potential for improving the profitability of direct marketing efforts by more fully utilizing household purchase histories. Even rather short purchase histories can produce a net gain in revenue from target couponing which is 2.5 times the gain from blanket couponing. The most popular current electronic couponing trigger strategy uses only one observation to customize the delivery of coupons. Surprisingly, even the information contained in observing one purchase occasion boasts net couponing revenue by 50% more than that which would be gained by the blanket strategy. This result, coupled with increased competitive pressures, will force targeted marketing strategies to become much more prevalent in the future than they are today. (Target Marketing; Coupons; Heterogeneity; Bayesian Hierarchical Models) MARKETING SCIENCE/Vol. 15, No. 4, 1996 pp. 321-340 THE VALUE OF PURCHASE HISTORY DATA IN TARGET MARKETING",
"title": ""
}
] |
[
{
"docid": "dc67945b32b2810a474acded3c144f68",
"text": "This paper presents an overview of the eld of Intelligent Products. As Intelligent Products have many facets, this paper is mainly focused on the concept behind Intelligent Products, the technical foundations, and the achievable practical goals of Intelligent Products. A novel classi cation of Intelligent Products is introduced, which distinguishes between three orthogonal dimensions. Furthermore, the technical foundations in the areas of automatic identi cation and embedded processing, distributed information storage and processing, and agent-based systems are discussed, as well as the achievable practical goals in the contexts of manufacturing, supply chains, asset management, and product life cycle management.",
"title": ""
},
{
"docid": "4d7c0222317fbd866113e1a244a342f3",
"text": "A simple method of \"tuning up\" a multiple-resonant-circuit filter quickly and exactly is demonstrated. The method may be summarized as follows: Very loosely couple a detector to the first resonator of the filter; then, proceeding in consecutive order, tune all odd-numbered resonators for maximum detector output, and all even-numbered resonators for minimum detector output (always making sure that the resonator immediately following the one to be resonated is completely detuned). Also considered is the correct adjustment of the two other types of constants in a filter. Filter constants can always be reduced to only three fundamental types: f0, dr(1/Qr), and Kr(r+1). This is true whether a lumped-element 100-kc filter or a distributed-element 5,000-mc unit is being considered. dr is adjusted by considering the rth resonator as a single-tuned circuit (all other resonators completely detuned) and setting the bandwidth between the 3-db-down-points to the required value. Kr(r+1) is adjusted by considering the rth and (r+1)th adjacent resonators as a double-tuned circuit (all other resonators completely detuned) and setting the bandwidth between the resulting response peaks to the required value. Finally, all the required values for K and Q are given for an n-resonant-circuit filter that will produce the response (Vp/V)2=1 +(Δf/Δf3db)2n.",
"title": ""
},
{
"docid": "7def66c81180a73282cd7e463dc4938c",
"text": "Drug abuse in Nigeria has been indicated to be on the rise in recent years. The use of hard drugs and misuse of prescription drugs for nonmedical purposes cuts across all strata, especially the youths. Tramadol (2[(Dimethylamin) methyl]-1-(3-methoxyphenyl)cyclohexanol) is known for its analgesic potentials. This potent opioid pain killer is misused by Nigerian youths, owing to its suspicion as sexual performance drug. This study therefore is aimed at determining the effect of tramadol on hormone levels its improved libido properties and possibly fertility. Twenty seven (27) European rabbits weighing 1.0 to 2.0 kg were used. Animals were divided into four major groups consisting of male and female control, and male and female tramadol treated groups. Treated groups were further divided into oral and intramuscular (IM) administered groups. Oral groups were administered 25 mg/kg b.w. of tramadol per day while the IM groups received 15 mg/kg b.w. per Original Research Article Osadolor and Omo-Erhabor; BJMMR, 14(8): 1-11, 2016; Article no.BJMMR.24620 2 day over a period of thirty days. Blood samples were collected at the end of the experiment for progesterone, testosterone, estrogen (E2), luteinizing hormone, follicle stimulating hormone (FSH), β-human chorionic gonadotropin and prolactin estimation. Tramadol treated groups were compared with control groups at the end of the study, as well as within group comparison was done. From the results, FSH was found to be significantly reduced (p<0.05) while LH increased significantly (p<0.05). A decrease was observed for testosterone (p<0.001), and estrogen, FSH, progesterone also decreased (p<0.05). Significant changes weren’t observed when IM groups were compared with oral groups. This study does not support an improvement of libido by tramadol, though its possible usefulness in the treatment of premature ejaculation may have been established, but its capabilities to induce male and female infertility is still in doubt.",
"title": ""
},
{
"docid": "95cd9d6572700e2b118c7cb0ffba549a",
"text": "Non-volatile main memory (NVRAM) has the potential to fundamentally change the persistency of software. Applications can make their state persistent by directly placing data structures on NVRAM instead of volatile DRAM. However, the persistent nature of NVRAM requires significant changes for memory allocators that are now faced with the additional tasks of data recovery and failure-atomicity. In this paper, we present nvm malloc, a general-purpose memory allocator concept for the NVRAM era as a basic building block for persistent applications. We introduce concepts for managing named allocations for simplified recovery and using volatile and non-volatile memory in combination to provide both high performance and failure-atomic allocations.",
"title": ""
},
{
"docid": "ed2c198cf34fe63d99a53dd5315bde53",
"text": "The article briefly elaborated the ship hull optimization research development of domestic and foreign based on CFD, proposed that realizing the key of ship hull optimization based on CFD is the hull form parametrization geometry modeling technology. On the foundation of the domestic and foreign hull form parametrization, we proposed the ship blending method, and clarified the principle, had developed the hull form parametrization blending module. Finally, we realized the integration of hull form parametrization blending module and CFD using the integrated optimization frame, has realized hull form automatic optimization design based on CFD, build the foundation for the research of ship multi-disciplinary optimization.",
"title": ""
},
{
"docid": "b25cfcd6ceefffe3039bb5a6a53e216c",
"text": "With the increasing applications in the domains of ubiquitous and context-aware computing, Internet of Things (IoT) are gaining importance. In IoTs, literally anything can be part of it, whether it is sensor nodes or dumb objects, so very diverse types of services can be produced. In this regard, resource management, service creation, service management, service discovery, data storage, and power management would require much better infrastructure and sophisticated mechanism. The amount of data IoTs are going to generate would not be possible for standalone power-constrained IoTs to handle. Cloud computing comes into play here. Integration of IoTs with cloud computing, termed as Cloud of Things (CoT) can help achieve the goals of envisioned IoT and future Internet. This IoT-Cloud computing integration is not straight-forward. It involves many challenges. One of those challenges is data trimming. Because unnecessary communication not only burdens the core network, but also the data center in the cloud. For this purpose, data can be preprocessed and trimmed before sending to the cloud. This can be done through a Smart Gateway, accompanied with a Smart Network or Fog Computing. In this paper, we have discussed this concept in detail and present the architecture of Smart Gateway with Fog Computing. We have tested this concept on the basis of Upload Delay, Synchronization Delay, Jitter, Bulk-data Upload Delay, and Bulk-data Synchronization Delay.",
"title": ""
},
{
"docid": "31865d8e75ee9ea0c9d8c575bbb3eb90",
"text": "Magicians use misdirection to prevent you from realizing the methods used to create a magical effect, thereby allowing you to experience an apparently impossible event. Magicians have acquired much knowledge about misdirection, and have suggested several taxonomies of misdirection. These describe many of the fundamental principles in misdirection, focusing on how misdirection is achieved by magicians. In this article we review the strengths and weaknesses of past taxonomies, and argue that a more natural way of making sense of misdirection is to focus on the perceptual and cognitive mechanisms involved. Our psychologically-based taxonomy has three basic categories, corresponding to the types of psychological mechanisms affected: perception, memory, and reasoning. Each of these categories is then divided into subcategories based on the mechanisms that control these effects. This new taxonomy can help organize magicians' knowledge of misdirection in a meaningful way, and facilitate the dialog between magicians and scientists.",
"title": ""
},
{
"docid": "d59c6a2dd4b6bf7229d71f3ae036328a",
"text": "Community search over large graphs is a fundamental problem in graph analysis. Recent studies propose to compute top-k influential communities, where each reported community not only is a cohesive subgraph but also has a high influence value. The existing approaches to the problem of top-k influential community search can be categorized as index-based algorithms and online search algorithms without indexes. The index-based algorithms, although being very efficient in conducting community searches, need to pre-compute a specialpurpose index and only work for one built-in vertex weight vector. In this paper, we investigate online search approaches and propose an instance-optimal algorithm LocalSearch whose time complexity is linearly proportional to the size of the smallest subgraph that a correct algorithm needs to access without indexes. In addition, we also propose techniques to make LocalSearch progressively compute and report the communities in decreasing influence value order such that k does not need to be specified. Moreover, we extend our framework to the general case of top-k influential community search regarding other cohesiveness measures. Extensive empirical studies on real graphs demonstrate that our algorithms outperform the existing online search algorithms by several orders of magnitude.",
"title": ""
},
{
"docid": "9e208e6beed62575a92f32031b7af8ad",
"text": "Recently, interests on cleaning robots workable in pipes (termed as in-pipe cleaning robot) are increasing because Garbage Automatic Collection Facilities (i.e, GACF) are widely being installed in Seoul metropolitan area of Korea. So far research on in-pipe robot has been focused on inspection rather than cleaning. In GACF, when garbage is moving, we have to remove the impurities which are stuck to the inner face of the pipe (diameter: 300mm or 400mm). Thus, in this paper, by using TRIZ (Inventive Theory of Problem Solving in Russian abbreviation), we will propose an in-pipe cleaning robot of GACF with the 6-link sliding mechanism which can be adjusted to fit into the inner face of pipe using pneumatic pressure(not spring). The proposed in-pipe cleaning robot for GACF can have forward/backward movement itself as well as rotation of brush in cleaning. The robot body should have the limited size suitable for the smaller pipe with diameter of 300mm. In addition, for the pipe with diameter of 400mm, the links of robot should stretch to fit into the diameter of the pipe by using the sliding mechanism. Based on the conceptual design using TRIZ, we will set up the initial design of the robot in collaboration with a field engineer of Robot Valley, Inc. in Korea. For the optimal design of in-pipe cleaning robot, the maximum impulsive force of collision between the robot and the inner face of pipe is simulated by using RecurDyn® when the link of sliding mechanism is stretched to fit into the 400mm diameter of the pipe. The stresses exerted on the 6 links of sliding mechanism by the maximum impulsive force will be simulated by using ANSYS® Workbench based on the Design Of Experiment(in short DOE). Finally the optimal dimensions including thicknesses of 4 links will be decided in order to have the best safety factor as 2 in this paper as well as having the minimum mass of 4 links. It will be verified that the optimal design of 4 links has the best safety factor close to 2 as well as having the minimum mass of 4 links, compared with the initial design performed by the expert of Robot Valley, Inc. In addition, the prototype of in-pipe cleaning robot will be stated with further research.",
"title": ""
},
{
"docid": "b86711e8a418bde07e16bcb9a394d92c",
"text": "This paper reviews and evaluates the evidence for the existence of distinct varieties of developmental dyslexia, analogous to those found in the acquired dyslexic population. Models of the normal adult reading process and of the development of reading in children are used to provide a framework for considering the issues. Data from a large-sample study of the reading patterns of developmental dyslexics are then reported. The lexical and sublexical reading skills of 56 developmental dyslexics were assessed through close comparison with the skills of 56 normally developing readers. The results indicate that there are at least two varieties of developmental dyslexia, the first of which is characterised by a specific difficulty using the lexical procedure, and the second by a difficulty using the sublexical procedure. These subtypes are apparently not rare, but are relatively prevalent in the developmental dyslexic population. The results of a second experiment, which suggest that neither of these reading patterns can be accounted for in terms of a general language disorder, are then reported.",
"title": ""
},
{
"docid": "93afb696fa395a7f7c2a4f3fc2ac690d",
"text": "We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained by using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53 sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that contextdependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.",
"title": ""
},
{
"docid": "ac168ff92c464cb90a9a4ca0eb5bfa5c",
"text": "Path computing is a new paradigm that generalizes the edge computing vision into a multi-tier cloud architecture deployed over the geographic span of the network. Path computing supports scalable and localized processing by providing storage and computation along a succession of datacenters of increasing sizes, positioned between the client device and the traditional wide-area cloud data-center. CloudPath is a platform that implements the path computing paradigm. CloudPath consists of an execution environment that enables the dynamic installation of light-weight stateless event handlers, and a distributed eventual consistent storage system that replicates application data on-demand. CloudPath handlers are small, allowing them to be rapidly instantiated on demand on any server that runs the CloudPath execution framework. In turn, CloudPath automatically migrates application data across the multiple datacenter tiers to optimize access latency and reduce bandwidth consumption.",
"title": ""
},
{
"docid": "ccf1f3cb6a9efda6c7d6814ec01d8329",
"text": "Twitter as a micro-blogging platform rose to instant fame mainly due to its minimalist features that allow seamless communication between users. As the conversations grew thick and faster, a placeholder feature called as Hashtags became important as it captured the themes behind the tweets. Prior studies have investigated the conversation dynamics, interplay with other media platforms and communication patterns between users for specific event-based hashtags such as the #Occupy movement. Commonplace hashtags which are used on a daily basis have been largely ignored due to their seemingly innocuous presence in tweets and also due to the lack of connection with real-world events. However, it can be postulated that utility of these hashtags is the main reason behind their continued usage. This study is aimed at understanding the rationale behind the usage of a particular type of commonplace hashtags:-location hashtags such as country and city name hashtags. Tweets with the hashtag #singapore were extracted for a week’s duration. Manual and automatic tweet classification was performed along with social network analysis, to identify the underlying themes. Seven themes were identified. Findings indicate that the hashtag is prominent in tweets about local events, local news, users’ current location and landmark related information sharing. Users who share content from social media sites such as Instagram make use of the hashtag in a more prominent way when compared to users who post textual content. News agencies, commercial bodies and celebrities make use of the hashtag more than common individuals. Overall, the results show the non-conversational nature of the hashtag. The findings are to be validated with other country names and crossvalidated with hashtag data from other social media platforms.",
"title": ""
},
{
"docid": "7b5331b0e6ad693fc97f5f3b543bf00c",
"text": "Relational learning deals with data that are characterized by relational structures. An important task is collective classification, which is to jointly classify networked objects. While it holds a great promise to produce a better accuracy than noncollective classifiers, collective classification is computational challenging and has not leveraged on the recent breakthroughs of deep learning. We present Column Network (CLN), a novel deep learning model for collective classification in multirelational domains. CLN has many desirable theoretical properties: (i) it encodes multi-relations between any two instances; (ii) it is deep and compact, allowing complex functions to be approximated at the network level with a small set of free parameters; (iii) local and relational features are learned simultaneously; (iv) longrange, higher-order dependencies between instances are supported naturally; and (v) crucially, learning and inference are efficient, linear in the size of the network and the number of relations. We evaluate CLN on multiple real-world applications: (a) delay prediction in software projects, (b) PubMed Diabetes publication classification and (c) film genre classification. In all applications, CLN demonstrates a higher accuracy than state-of-the-art rivals.",
"title": ""
},
{
"docid": "418e29af01be9655c06df63918f41092",
"text": "A major goal of unsupervised learning is to discover data representations that are useful for subsequent tasks, without access to supervised labels during training. Typically, this goal is approached by minimizing a surrogate objective, such as the negative log likelihood of a generative model, with the hope that representations useful for subsequent tasks will arise as a side effect. In this work, we propose instead to directly target a later desired task by meta-learning an unsupervised learning rule, which leads to representations useful for that task. Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm – an unsupervised weight update rule – that produces representations that perform well under this meta-objective. Additionally, we constrain our unsupervised update rule to a be a biologically-motivated, neuron-local function, which enables it to generalize to novel neural network architectures. We show that the meta-learned update rule produces useful features and sometimes outperforms existing unsupervised learning techniques. We show that the metalearned unsupervised update rule generalizes to train networks with different widths, depths, and nonlinearities. It also generalizes to train on data with randomly permuted input dimensions and even generalizes from image datasets to a text task.",
"title": ""
},
{
"docid": "011332e3d331d461e786fd2827b0434d",
"text": "In this manuscript we present various robust statistical methods popular in the social sciences, and show how to apply them in R using the WRS2 package available on CRAN. We elaborate on robust location measures, and present robust t-test and ANOVA versions for independent and dependent samples, including quantile ANOVA. Furthermore, we present on running interval smoothers as used in robust ANCOVA, strategies for comparing discrete distributions, robust correlation measures and tests, and robust mediator models.",
"title": ""
},
{
"docid": "c5fc804aa7f98a575a0e15b7c28650e8",
"text": "In the past few years, a great attention has been received by web documents as a new source of individual opinions and experience. This situation is producing increasing interest in methods for automatically extracting and analyzing individual opinion from web documents such as customer reviews, weblogs and comments on news. This increase was due to the easy accessibility of documents on the web, as well as the fact that all these were already machine-readable on gaining. At the same time, Machine Learning methods in Natural Language Processing (NLP) and Information Retrieval were considerably increased development of practical methods, making these widely available corpora. Recently, many researchers have focused on this area. They are trying to fetch opinion information and analyze it automatically with computers. This new research domain is usually called Opinion Mining and Sentiment Analysis. . Until now, researchers have developed several techniques to the solution of the problem. This paper try to cover some techniques and approaches that be used in this area.",
"title": ""
},
{
"docid": "789de6123795ad8950c21b0ee8df7315",
"text": "This paper introduces new optimality-preserving operators on Q-functions. We first describe an operator for tabular representations, the consistent Bellman operator, which incorporates a notion of local policy consistency. We show that this local consistency leads to an increase in the action gap at each state; increasing this gap, we argue, mitigates the undesirable effects of approximation and estimation errors on the induced greedy policies. This operator can also be applied to discretized continuous space and time problems, and we provide empirical results evidencing superior performance in this context. Extending the idea of a locally consistent operator, we then derive sufficient conditions for an operator to preserve optimality, leading to a family of operators which includes our consistent Bellman operator. As corollaries we provide a proof of optimality for Baird’s advantage learning algorithm and derive other gap-increasing operators with interesting properties. We conclude with an empirical study on 60 Atari 2600 games illustrating the strong potential of these new operators. Value-based reinforcement learning is an attractive solution to planning problems in environments with unknown, unstructured dynamics. In its canonical form, value-based reinforcement learning produces successive refinements of an initial value function through repeated application of a convergent operator. In particular, value iteration (Bellman 1957) directly computes the value function through the iterated evaluation of Bellman’s equation, either exactly or from samples (e.g. Q-Learning, Watkins 1989). In its simplest form, value iteration begins with an initial value function V0 and successively computes Vk+1 := T Vk, where T is the Bellman operator. When the environment dynamics are unknown, Vk is typically replaced by Qk, the state-action value function, and T is approximated by an empirical Bellman operator. The fixed point of the Bellman operator, Q∗, is the optimal state-action value function or optimal Q-function, from which an optimal policy π∗ can be recovered. In this paper we argue that the optimal Q-function is inconsistent, in the sense that for any action a which is subop∗Now at Carnegie Mellon University. Copyright c © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. timal in state x, Bellman’s equation for Q∗(x, a) describes the value of a nonstationary policy: upon returning to x, this policy selects π∗(x) rather than a. While preserving global consistency appears impractical, we propose a simple modification to the Bellman operator which provides us a with a first-order solution to the inconsistency problem. Accordingly, we call our new operator the consistent Bellman operator. We show that the consistent Bellman operator generally devalues suboptimal actions but preserves the set of optimal policies. As a result, the action gap – the value difference between optimal and second best actions – increases. This increasing of the action gap is advantageous in the presence of approximation or estimation error, and may be crucial for systems operating at a fine time scale such as video games (Togelius et al. 2009; Bellemare et al. 2013), real-time markets (Jiang and Powell 2015), and robotic platforms (Riedmiller et al. 2009; Hoburg and Tedrake 2009; Deisenroth and Rasmussen 2011; Sutton et al. 2011). 
In fact, the idea of devaluating suboptimal actions underpins Baird’s advantage learning (Baird 1999), designed for continuous time control, and occurs naturally when considering the discretized solution of continuous time and space MDPs (e.g. Munos and Moore 1998; 2002), whose limit is the Hamilton-Jacobi-Bellman equation (Kushner and Dupuis 2001). Our empirical results on the bicycle domain (Randlov and Alstrom 1998) show a marked increase in performance from using the consistent Bellman operator. In the second half of this paper we derive novel sufficient conditions for an operator to preserve optimality. The relative weakness of these new conditions reveals that it is possible to deviate significantly from the Bellman operator without sacrificing optimality: an optimality-preserving operator need not be contractive, nor even guarantee convergence of the Q-values for suboptimal actions. While numerous alternatives to the Bellman operator have been put forward (e.g. recently Azar et al. 2011; Bertsekas and Yu 2012), we believe our work to be the first to propose such a major departure from the canonical fixed-point condition required from an optimality-preserving operator. As proof of the richness of this new operator family we describe a few practical instantiations with unique properties. We use our operators to obtain state-of-the-art empirical results on the Arcade Learning Environment (Bellemare et al. 2013). We consider the Deep Q-Network (DQN) architecture of Mnih et al. (2015), replacing only its learning rule with one of our operators. Remarkably, this one-line change produces agents that significantly outperform the original DQN. Our work, we believe, demonstrates the potential impact of rethinking the core components of value-based reinforcement learning.",
"title": ""
}
] |
scidocsrr
|
dd7b9972551d6a8b7413d0ff7d4b45d2
|
Cross-Platform Emoji Interpretation: Analysis, a Solution, and Applications
|
[
{
"docid": "17a0dfece42274180e470f23e532880d",
"text": "Emoji provide a way to express nonverbal conversational cues in computer-mediated communication. However, people need to share the same understanding of what each emoji symbolises, otherwise communication can breakdown. We surveyed 436 people about their use of emoji and ran an interactive study using a two-dimensional emotion space to investigate (1) the variation in people's interpretation of emoji and (2) their interpretation of corresponding Android and iOS emoji. Our results show variations between people's ratings within and across platforms. We outline our solution to reduce misunderstandings that arise from different interpretations of emoji.",
"title": ""
},
{
"docid": "546af5877fcd3bbf8d1354701f1ead12",
"text": "Recent studies have found that people interpret emoji characters inconsistently, creating significant potential for miscommunication. However, this research examined emoji in isolation, without consideration of any surrounding text. Prior work has hypothesized that examining emoji in their natural textual contexts would substantially reduce the observed potential for miscommunication. To investigate this hypothesis, we carried out a controlled study with 2,482 participants who interpreted emoji both in isolation and in multiple textual contexts. After comparing the variability of emoji interpretation in each condition, we found that our results do not support the hypothesis in prior work: when emoji are interpreted in textual contexts, the potential for miscommunication appears to be roughly the same. We also identify directions for future research to better understand the interplay between emoji and textual context.",
"title": ""
},
{
"docid": "dadd12e17ce1772f48eaae29453bc610",
"text": "Publications Learning Word Vectors for Sentiment Analysis. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. The 49 th Annual Meeting of the Association for Computational Linguistics (ACL 2011). Spectral Chinese Restaurant Processes: Nonparametric Clustering Based on Similarities. Richard Socher, Andrew Maas, and Christopher D. Manning. The 15 th International Conference on Artificial Intelligence and Statistics (AISTATS 2010). A Probabilistic Model for Semantic Word Vectors. Andrew L. Maas and Andrew Y. Ng. NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning. One-Shot Learning with Bayesian Networks. Andrew L. Maas and Charles Kemp. Proceedings of the 31 st",
"title": ""
}
] |
[
{
"docid": "41e188c681516862a69fe8e90c58a618",
"text": "This paper explores the use of Information-Centric Networking (ICN) to support management operations in IoT deployments, presenting the design of a flexible architecture that allows the appropriate operation of IoT devices within a delimited ICN network domain. Our architecture has been designed with special consideration to naming, interoperation, security and energy-efficiency requirements. We theoretically assess the communication overhead introduced by the security procedures of our solution, both at IoT devices and clients. Additionally, we show the potential of our architecture to accommodate enhanced management applications, focusing on a specific use case, i.e. an information freshness service level agreement application. Finally, we present a proof-of-concept implementation of our architecture over an Arduino board, and we use it to carry out a set of experiments that validate the feasibility of our solution. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a6f6525af5a1d9306d6b62ebd821f4ba",
"text": "In this report, we introduce the outline of our system in Task 3: Disease Classification of ISIC 2018: Skin Lesion Analysis Towards Melanoma Detection. We fine-tuned multiple pre-trained neural network models based on Squeeze-and-Excitation Networks (SENet) which achieved state-of-the-art results in the field of image recognition. In addition, we used the mean teachers as a semi-supervised learning framework and introduced some specially designed data augmentation strategies for skin lesion analysis. We confirmed our data augmentation strategy improved classification performance and demonstrated 87.2% in balanced accuracy on the official ISIC2018 validation dataset.",
"title": ""
},
{
"docid": "348115a5dddbc2bcdcf5552b711e82c0",
"text": "Enterococci are Gram-positive, catalase-negative, non-spore-forming, facultative anaerobic bacteria, which usually inhabit the alimentary tract of humans in addition to being isolated from environmental and animal sources. They are able to survive a range of stresses and hostile environments, including those of extreme temperature (5-65 degrees C), pH (4.5-10.0) and high NaCl concentration, enabling them to colonize a wide range of niches. Virulence factors of enterococci include the extracellular protein Esp and aggregation substances (Agg), both of which aid in colonization of the host. The nosocomial pathogenicity of enterococci has emerged in recent years, as well as increasing resistance to glycopeptide antibiotics. Understanding the ecology, epidemiology and virulence of Enterococcus species is important for limiting urinary tract infections, hepatobiliary sepsis, endocarditis, surgical wound infection, bacteraemia and neonatal sepsis, and also stemming the further development of antibiotic resistance.",
"title": ""
},
{
"docid": "44c5dd0001a05106839b534431b48bc8",
"text": "The Internet and finance are accelerating into integration in the 21 Century. Internet Finance was firstly proposed by Ma Weihua, the former president of China Merchants Bank in July 2012. On the basis of 74 latest research literatures selected from CSSCI Journals, Chinese Core Journals, authoritative magazines and related newspapers, this paper summarizes the current domestic research progress and trend about Internet Finance according to three dimensions, such as the sources of journals, research subjects and research contents. This research shows that the current domestic researches are not only shallow and superficial, but also lack the theoretical analyses and model applications; and the wealth-based and bank-based Internet Finance will be the research focus in the future.",
"title": ""
},
{
"docid": "d1f771fd1b0f8e5d91bbf65bc19aeb54",
"text": "Web-based systems are often a composition of infrastructure components, such as web servers and databases, and of applicationspecific code, such as HTML-embedded scripts and server-side applications. While the infrastructure components are usually developed by experienced programmers with solid security skills, the application-specific code is often developed under strict time constraints by programmers with little security training. As a result, vulnerable web-applications are deployed and made available to the Internet at large, creating easilyexploitable entry points for the compromise of entire networks. Web-based applications often rely on back-end database servers to manage application-specific persistent state. The data is usually extracted by performing queries that are assembled using input provided by the users of the applications. If user input is not sanitized correctly, it is possible to mount a variety of attacks that leverage web-based applications to compromise the security of back-end databases. Unfortunately, it is not always possible to identify these attacks using signature-based intrusion detection systems, because of the ad hoc nature of many web-based applications. Signatures are rarely written for this class of applications due to the substantial investment of time and expertise this would require. We have developed an anomaly-based system that learns the profiles of the normal database access performed by web-based applications using a number of different models. These models allow for the detection of unknown attacks with reduced false positives and limited overhead. In addition, our solution represents an improvement with respect to previous approaches because it reduces the possibility of executing SQL-based mimicry attacks.",
"title": ""
},
{
"docid": "da4bac81f8544eb729c7e0aafe814927",
"text": "This work focuses on representing very high-dimensional global image descriptors using very compact 64-1024 bit binary hashes for instance retrieval. We propose DeepHash: a hashing scheme based on deep networks. Key to making DeepHash work at extremely low bitrates are three important considerations – regularization, depth and fine-tuning – each requiring solutions specific to the hashing problem. In-depth evaluation shows that our scheme consistently outperforms state-of-the-art methods across all data sets for both Fisher Vectors and Deep Convolutional Neural Network features, by up to 20% over other schemes. The retrieval performance with 256-bit hashes is close to that of the uncompressed floating point features – a remarkable 512× compression.",
"title": ""
},
{
"docid": "1298ddbeea84f6299e865708fd9549a6",
"text": "Since its invention in the early 1960s (Rotman and Turner, 1963), the Rotman Lens has proven itself to be a useful beamformer for designers of electronically scanned arrays. Inherent in its design is a true time delay phase shift capability that is independent of frequency and removes the need for costly phase shifters to steer a beam over wide angles. The Rotman Lens has a long history in military radar, but it has also been used in communication systems. This article uses the developed software to design and analyze a microstrip Rotman Lens for the Ku band. The initial lens design will come from a tool based on geometrical optics (GO). A second stage of analysis will be performed using a full wave finite difference time domain (FDTD) solver. The results between the first-cut design tool and the comprehensive FDTD solver will be compared, and some of the design trades will be explored to gauge their impact on the performance of the lens.",
"title": ""
},
{
"docid": "9c25a2e343e9e259a9881fd13983c150",
"text": "Advances in cognitive, affective, and social neuroscience raise a host of new questions concerning the ways in which neuroscience can and should be used. These advances also challenge our intuitions about the nature of humans as moral and spiritual beings. Neuroethics is the new field that grapples with these issues. The present article surveys a number of applications of neuroscience to such diverse arenas as marketing, criminal justice, the military, and worker productivity. The ethical, legal, and societal effects of these applications are discussed. Less practical, but perhaps ultimately more consequential, is the impact of neuroscience on our worldview and our understanding of the human person.",
"title": ""
},
{
"docid": "300bff5036b5b4e83a4bc605020b49e3",
"text": "Many theories of human cognition postulate that people are equipped with a repertoire of strategies to solve the tasks they face. This theoretical framework of a cognitive toolbox provides a plausible account of intra- and interindividual differences in human behavior. Unfortunately, it is often unclear how to rigorously test the toolbox framework. How can a toolbox model be quantitatively specified? How can the number of toolbox strategies be limited to prevent uncontrolled strategy sprawl? How can a toolbox model be formally tested against alternative theories? The authors show how these challenges can be met by using Bayesian inference techniques. By means of parameter recovery simulations and the analysis of empirical data across a variety of domains (i.e., judgment and decision making, children's cognitive development, function learning, and perceptual categorization), the authors illustrate how Bayesian inference techniques allow toolbox models to be quantitatively specified, strategy sprawl to be contained, and toolbox models to be rigorously tested against competing theories. The authors demonstrate that their approach applies at the individual level but can also be generalized to the group level with hierarchical Bayesian procedures. The suggested Bayesian inference techniques represent a theoretical and methodological advancement for toolbox theories of cognition and behavior.",
"title": ""
},
{
"docid": "33880207bb52ce7e20c6f5ad80d67a47",
"text": "This research involves the digital transformation of an orthopedic surgical practice office housing three community orthopedic surgeons and a physical therapy treatment clinic in Toronto, Ontario. All three surgeons engage in both a private community orthopaedic surgery practice and hold surgical privileges at a local community hospital which serves a catchment area of more than 850,000 people in the northwest Greater Toronto Area. The clinic employs two full time physical therapists and one office manager for therapy services as well as four administrative assistants who manage the surgeon’s practices.",
"title": ""
},
{
"docid": "4efa56d9c2c387608fe9ddfdafca0f9a",
"text": "Accurate cardinality estimates are essential for a successful query optimization. This is not only true for relational DBMSs but also for RDF stores. An RDF database consists of a set of triples and, hence, can be seen as a relational database with a single table with three attributes. This makes RDF rather special in that queries typically contain many self joins. We show that relational DBMSs are not well-prepared to perform cardinality estimation in this context. Further, there are hardly any special cardinality estimation methods for RDF databases. To overcome this lack of appropriate cardinality estimation methods, we introduce characteristic sets together with new cardinality estimation methods based upon them. We then show experimentally that the new methods are-in the RDF context-highly superior to the estimation methods employed by commercial DBMSs and by the open-source RDF store RDF-3X.",
"title": ""
},
{
"docid": "c61559bdb209cf7098bb11c372a483c6",
"text": "This paper presents a lexicon model for the description of verbs, nouns and adjectives to be used in applicatons like sentiment analysis and opinion mining. The model aims to describe the detailed subjectivity relations that exist between the actors in a sentence expressing separate attitudes for each actor. Subjectivity relations that exist between the different actors are labeled with information concerning both the identity of the attitude holder and the orientation (positive vs. negative) of the attitude. The model includes a categorization into semantic categories relevant to opinion mining and sentiment analysis and provides means for the identification of the attitude holder and the polarity of the attitude and for the description of the emotions and sentiments of the different actors involved in the text. Special attention is paid to the role of the speaker/writer of the text whose perspective is expressed and whose views on what is happening are conveyed in the text. Finally, validation is provided by an annotation study that shows that these subtle subjectivity relations are reliably identifiable by human annotators.",
"title": ""
},
{
"docid": "bcf7d85007ebcb6c009bbcbb704e8df4",
"text": "This paper describes the speech activity detection (SAD) system developed by the Patrol team for the first phase of the DARPA RATS (Robust Automatic Transcription of Speech) program, which seeks to advance state of the art detection capabilities on audio from highly degraded communication channels. We present two approaches to SAD, one based on Gaussian mixture models, and one based on multi-layer perceptrons. We show that significant gains in SAD accuracy can be obtained by careful design of acoustic front end, feature normalization, incorporation of long span features via data-driven dimensionality reducing transforms, and channel dependent modeling. We also present a novel technique for normalizing detection scores from different systems for the purpose of system combination.",
"title": ""
},
{
"docid": "d3875bf0d0bf1af7b7b8044b06152c46",
"text": "This two-part article series covers the design, development, and testing of a reprogrammable UAV autopilot system. Here you get a detailed system-level description of the autopilot design, with specific emphasis on its hardware and software. nmanned aerial vehicle (UAV) usage has increased tremendously in recent years. Although this growth has been fueled mainly by demand from government defense agencies, UAVs are now being used for non-military endeavors as well. Today, UAVs are employed for purposes ranging from wildlife tracking to forest fire monitoring. Advances in microelectronics technology have enabled engineers to automate such aircraft and convert them into useful remote-sensing platforms. For instance, due to sensor development in the automotive industry and elsewhere, the cost of the components required to build such systems has fallen greatly. In this two-part article series, we'll present the design, development, and flight test results for a reprogrammable UAV autopi-lot system. The design is primarily focused on supporting guidance, navigation, and control (GNC) research. It facilitates a fric-tionless transition from software simulation to hardware-in-the-loop (HIL) simulation to flight tests, eliminating the need to write low-level source code. We can easily make, simulate, and test changes in the algorithms on the hardware before attempting flight. The hardware is primarily \" programmed \" using MathWorks Simulink, a block-diagram based tool for modeling, simulating, and analyzing dynamical systems.",
"title": ""
},
{
"docid": "a8981ddf9617beb921f12d5fbddadc56",
"text": "This paper develops an indoor intelligent service mobile robot that has multiple functions, can recognize and grip the target object, avoid obstacles, and accurately localize via relative position. The locating method of the robot uses the output values of the sensor module, which includes data from a gyroscope and a magnetometer, to correct the current rotation direction angle of the robot. An angle correction method can be divided into three parts. The first part calculates the angle values obtained from the gyroscope and the magnetometer that are installed on the robot. The second part explores the error characteristics between the sensor module and the actual rotation direction angle of the robot. The third part uses the error characteristic data to design the fuzzy rule base and the Kalman filter to eliminate errors and to get a more accurate orientation angle. These errors can be described as either regular or irregular. The former can be eliminated by fuzzy algorithm compensation, and the latter can be eliminated by the Kalman filter. The contribution of this paper is to propose an error correction method between the calculus rotation angle determined by the sensor and the actual rotation angle of the robot such that the three moving paths, i.e., specified, actual, and calculus paths, have more accurate approximation. The experimental results demonstrate that the combination of fuzzy compensation and the Kalman filter is an accurate correction method.",
"title": ""
},
{
"docid": "db83ca64b54bbd54b4097df425c48017",
"text": "This paper introduces the application of high-resolution angle estimation algorithms for a 77GHz automotive long range radar sensor. Highresolution direction of arrival (DOA) estimation is important for future safety systems. Using FMCW principle, major challenges discussed in this paper are small number of snapshots, correlation of the signals, and antenna mismatches. Simulation results allow analysis of these effects and help designing the sensor. Road traffic measurements show superior DOA resolution and the feasibility of high-resolution angle estimation.",
"title": ""
},
{
"docid": "8d4bdc3e5e84a63a76e6a226a9f0e558",
"text": "HTTP cookies are the de facto mechanism for session authentication in Web applications. However, their inherent security weaknesses allow attacks against the integrity of Web sessions. HTTPS is often recommended to protect cookies, but deploying full HTTPS support can be challenging due to performance and financial concerns, especially for highly distributed applications. Moreover, cookies can be exposed in a variety of ways even when HTTPS is enabled. In this article, we propose one-time cookies (OTC), a more robust alternative for session authentication. OTC prevents attacks such as session hijacking by signing each user request with a session secret securely stored in the browser. Unlike other proposed solutions, OTC does not require expensive state synchronization in the Web application, making it easily deployable in highly distributed systems. We implemented OTC as a plug-in for the popular WordPress platform and as an extension for Firefox and Firefox for mobile browsers. Our extensive experimental analysis shows that OTC introduces a latency of less than 6 ms when compared to cookies—a negligible overhead for most Web applications. Moreover, we show that OTC can be combined with HTTPS to effectively add another layer of security to Web applications. In so doing, we demonstrate that one-time cookies can significantly improve the security of Web applications with minimal impact on performance and scalability.",
"title": ""
},
{
"docid": "037ff53b19c51dca7ce6418e8dbbc4f8",
"text": "Critical driver genomic events in colorectal cancer have been shown to affect the response to targeted agents that were initially developed under the 'one gene, one drug' paradigm of precision medicine. Our current knowledge of the complexity of the cancer genome, clonal evolution patterns under treatment pressure and pharmacodynamic effects of target inhibition support the transition from a one gene, one drug approach to a 'multi-gene, multi-drug' model when making therapeutic decisions. Better characterization of the transcriptomic subtypes of colorectal cancer, encompassing tumour, stromal and immune components, has revealed convergent pathway dependencies that mandate a 'multi-molecular' perspective for the development of therapies to treat this disease.",
"title": ""
},
{
"docid": "7fece61e99d0b461b04bcf0dfa81639d",
"text": "The rapid advancement of robotics technology in recent years has pushed the development of a distinctive field of robotic applications, namely robotic exoskeletons. Because of the aging population, more people are suffering from neurological disorders such as stroke, central nervous system disorder, and spinal cord injury. As manual therapy seems to be physically demanding for both the patient and therapist, robotic exoskeletons have been developed to increase the efficiency of rehabilitation therapy. Robotic exoskeletons are capable of providing more intensive patient training, better quantitative feedback, and improved functional outcomes for patients compared to manual therapy. This review emphasizes treadmill-based and over-ground exoskeletons for rehabilitation. Analyses of their mechanical designs, actuation systems, and integrated control strategies are given priority because the interactions between these components are crucial for the optimal performance of the rehabilitation robot. The review also discusses the limitations of current exoskeletons and technical challenges faced in exoskeleton development. A general perspective of the future development of more effective robot exoskeletons, specifically real-time biological synergy-based exoskeletons, could help promote brain plasticity among neurologically impaired patients and allow them to regain normal walking ability.",
"title": ""
},
{
"docid": "b18e65ad7982944ef9ad213d98d45dad",
"text": "This paper provides an overview of the physical layer specification of Advanced Television Systems Committee (ATSC) 3.0, the next-generation digital terrestrial broadcasting standard. ATSC 3.0 does not have any backwards-compatibility constraint with existing ATSC standards, and it uses orthogonal frequency division multiplexing-based waveforms along with powerful low-density parity check (LDPC) forward error correction codes similar to existing state-of-the-art. However, it introduces many new technological features such as 2-D non-uniform constellations, improved and ultra-robust LDPC codes, power-based layered division multiplexing to efficiently provide mobile and fixed services in the same radio frequency (RF) channel, as well as a novel frequency pre-distortion multiple-input single-output antenna scheme. ATSC 3.0 also allows bonding of two RF channels to increase the service peak data rate and to exploit inter-RF channel frequency diversity, and to employ dual-polarized multiple-input multiple-output antenna system. Furthermore, ATSC 3.0 provides great flexibility in terms of configuration parameters (e.g., 12 coding rates, 6 modulation orders, 16 pilot patterns, 12 guard intervals, and 2 time interleavers), and also a very flexible data multiplexing scheme using time, frequency, and power dimensions. As a consequence, ATSC 3.0 not only improves the spectral efficiency and robustness well beyond the first generation ATSC broadcast television standard, but also it is positioned to become the reference terrestrial broadcasting technology worldwide due to its unprecedented performance and flexibility. Another key aspect of ATSC 3.0 is its extensible signaling, which will allow including new technologies in the future without disrupting ATSC 3.0 services. This paper provides an overview of the physical layer technologies of ATSC 3.0, covering the ATSC A/321 standard that describes the so-called bootstrap, which is the universal entry point to an ATSC 3.0 signal, and the ATSC A/322 standard that describes the physical layer downlink signals after the bootstrap. A summary comparison between ATSC 3.0 and DVB-T2 is also provided.",
"title": ""
}
] |
scidocsrr
|
d3edf66ce92a20f83b77560e4b234ecc
|
HUNTS: A Trajectory Recommendation System for Effective and Efficient Hunting of Taxi Passengers
|
[
{
"docid": "3bc48489d80e824efb7e3512eafc6f30",
"text": "GPS-equipped taxis can be regarded as mobile sensors probing traffic flows on road surfaces, and taxi drivers are usually experienced in finding the fastest (quickest) route to a destination based on their knowledge. In this paper, we mine smart driving directions from the historical GPS trajectories of a large number of taxis, and provide a user with the practically fastest route to a given destination at a given departure time. In our approach, we propose a time-dependent landmark graph, where a node (landmark) is a road segment frequently traversed by taxis, to model the intelligence of taxi drivers and the properties of dynamic road networks. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. Based on this graph, we design a two-stage routing algorithm to compute the practically fastest route. We build our system based on a real-world trajectory dataset generated by over 33,000 taxis in a period of 3 months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70% of the routes suggested by our method are faster than the competing methods, and 20% of the routes share the same results. On average, 50% of our routes are at least 20% faster than the competing approaches.",
"title": ""
}
] |
[
{
"docid": "6febea92413fa4f20f53fad7cd8e2108",
"text": "To establish fetal nasal bone length cut-off points for first trimester aneuploidy screening based on a normal curve of a Brazilian population. The following tests were proposed: presence or absence of the nasal bone (NB); 2.5 and 5.0 NB percentiles relative to the normal curve; and 0.70, 0.75 and 0.80 multiples of the median (MoM) values defined in the receiver operating characteristic (ROC) curve. Nasal Bone tests were based on positive and negative likelihood ratio value detection rates (LR); the confidence interval was 95% in all tests. Cases in which ultrasonographic images of the NB were absent were not taken into account when evaluating the 2.5 and 5.0 percentiles and the 0.70, 0.75 and 0.80 MoM. The sample consisted of 571 fetuses (10–14 weeks). After exclusions (11) and loss of follow-up (53), the study sample was reduced to 507 patients. There were 23 Down syndrome patients among 41 aneuploid fetuses. The sensitivity of the qualitative NB test (absent vs. present) was 34.1%, and the specificity was 99.1% (+LR 37.89, −LR 0.66). An image of the nasal bone was absent in 52.2% of fetuses with the Down syndrome (+LR 58.00, −LR 0.48). The best tool for aneuploidy screening was the qualitative NB test (absent vs. present). Ultrasonography of the NB is a component of aneuploidy screening, and should not be used alone.",
"title": ""
},
{
"docid": "be99f6ba66d573547a09d3429536049e",
"text": "With the development of sensor, wireless mobile communication, embedded system and cloud computing, the technologies of Internet of Things have been widely used in logistics, Smart Meter, public security, intelligent building and so on. Because of its huge market prospects, Internet of Things has been paid close attention by several governments all over the world, which is regarded as the third wave of information technology after Internet and mobile communication network. Bridging between wireless sensor networks with traditional communication networks or Internet, IOT Gateway plays an important role in IOT applications, which facilitates the seamless integration of wireless sensor networks and mobile communication networks or Internet, and the management and control with wireless sensor networks. In this paper, we proposed an IOT Gateway system based on Zigbee and GPRS protocols according to the typical IOT application scenarios and requirements from telecom operators, presented the data transmission between wireless sensor networks and mobile communication networks, protocol conversion of different sensor network protocols, and control functionalities for sensor networks, and finally gave an implementation of prototyping system and system validation.",
"title": ""
},
{
"docid": "aea93496d1ff9638af76150c2cfaaa1a",
"text": "This study pursues the optimization of the brain responses to small reversing patterns in a Steady-State Visual Evoked Potentials (SSVEP) paradigm, which could be used to maximize the efficiency of applications such as Brain-Computer Interfaces (BCI). We investigated the SSVEP frequency response for 32 frequencies (5-84 Hz), and the time dynamics of the brain response at 8, 14 and 28 Hz, to aid the definition of the optimal neurophysiological parameters and to outline the onset-delay and other limitations of SSVEP stimuli in applications such as our previously described four-command BCI system. Our results showed that the 5.6-15.3 Hz pattern reversal stimulation evoked the strongest responses, peaking at 12 Hz, and exhibiting weaker local maxima at 28 and 42 Hz. After stimulation onset, the long-term SSVEP response was highly non-stationary and the dynamics, including the first peak, was frequency-dependent. The evaluation of the performance of a frequency-optimized eight-command BCI system with dynamic neurofeedback showed a mean success rate of 98%, and a time delay of 3.4s. Robust BCI performance was achieved by all subjects even when using numerous small patterns clustered very close to each other and moving rapidly in 2D space. These results emphasize the need for SSVEP applications to optimize not only the analysis algorithms but also the stimuli in order to maximize the brain responses they rely on.",
"title": ""
},
{
"docid": "95c666f41a0b5b0027ad3714f25e5ac2",
"text": "mlpy is a Python Open Source Machine Learning library built on top of NumPy/SciPy and the GNU Scientific Libraries. mlpy provides a wide range of state-of-the-art machine learning methods for supervised and unsupervised problems and it is aimed at finding a reasonable compromise among modularity, maintainability, reproducibility, usability and efficiency. mlpy is multiplatform, it works with Python 2 and 3 and it is distributed under GPL3 at the website http://mlpy.fbk.eu.",
"title": ""
},
{
"docid": "d312d2976737edfba3b82594541a7233",
"text": "We present a novel technique to remove spurious ambiguity fr om t ansition systems for dependency parsing. Our technique chooses a canonical sequence of transition opera tions (computation) for a given dependency tree. Our technique can be applied to a large class of bottom-up transi io systems, including for instance Nivre [2004] and Attardi [2006].",
"title": ""
},
{
"docid": "8decac4ff789460595664a38e7527ed6",
"text": "Unit selection synthesis has shown itself to be capable of producing high quality natural sounding synthetic speech when constructed from large databases of well-recorded, well-labeled speech. However, the cost in time and expertise of building such voices is still too expensive and specialized to be able to build individual voices for everyone. The quality in unit selection synthesis is directly related to the quality and size of the database used. As we require our speech synthesizers to have more variation, style and emotion, for unit selection synthesis, much larger databases will be required. As an alternative, more recently we have started looking for parametric models for speech synthesis, that are still trained from databases of natural speech but are more robust to errors and allow for better modeling of variation. This paper presents the CLUSTERGEN synthesizer which is implemented within the Festival/FestVox voice building environment. As well as the basic technique, three methods of modeling dynamics in the signal are presented and compared: a simple point model, a basic trajectory model and a trajectory model with overlap and add.",
"title": ""
},
{
"docid": "767de215cc843a255aa31ee3b45cc373",
"text": "Breast cancer is the most frequently diagnosed cancer and leading cause of cancer-related death among females worldwide. In this article, we investigate the applicability of densely connected convolutional neural networks to the problems of histology image classification and whole slide image segmentation in the area of computer-aided diagnoses for breast cancer. To this end, we study various approaches for transfer learning and apply them to the data set from the 2018 grand challenge on breast cancer histology images (BACH).",
"title": ""
},
{
"docid": "c8e029658bf4c298cb6e77128d19eac0",
"text": "Cloud Computing Business Framework (CCBF) is proposed to help organisations achieve good Cloud design, deployment, migration and services. While organisations adopt Cloud Computing for Web Services, technical and business challenges emerge and one of these includes the measurement of Cloud business performance. Organisational Sustainability Modelling (OSM) is a new way to measure Cloud business performance quantitatively and accurately. It combines statistical computation and 3D Visualisation to present the Return on Investment arising from the adoption of Cloud Computing by organisations. 3D visualisation simplifies the review process and is an innovative way for Return of Investment (ROI) valuation. Two detailed case studies with SAP and Vodafone have been presented, where OSM has analysed the business performance and explained how CCBF offers insights, which are relatively helpful for WS and Grid businesses. Comparisons and discussions between CCBF and other approaches related to WS are presented, where lessons learned are useful for Web Services, Cloud and Grid communities.",
"title": ""
},
{
"docid": "3ab1e2768c1f612f1f85ddb192b37e1f",
"text": "The vertical Cup-to-Disc Ratio (CDR) is an important indicator in the diagnosis of glaucoma. Automatic segmentation of the optic disc (OD) and optic cup is crucial towards a good computer-aided diagnosis (CAD) system. This paper presents a statistical model-based method for the segmentation of the optic disc and optic cup from digital color fundus images. The method combines knowledge-based Circular Hough Transform and a novel optimal channel selection for segmentation of the OD. Moreover, we extended the method to optic cup segmentation, which is a more challenging task. The system was tested on a dataset of 325 images. The average Dice coefficient for the disc and cup segmentation is 0.92 and 0.81 respectively, which improves significantly over existing methods. The proposed method has a mean absolute CDR error of 0.10, which outperforms existing methods. The results are promising and thus demonstrate a good potential for this method to be used in a mass screening CAD system.",
"title": ""
},
{
"docid": "4331746158d056ffdb5a47b56257aa2c",
"text": "This paper presents an analysis of the effect of duty ratio on power loss and efficiency of the Class-E amplifier. Conduction loss for each Class-E circuit component is derived and total amplifier losses and efficiency are expressed as functions of duty ratio. Two identical 300-W Class-E amplifiers operating at 7.29 MHz are designed, constructed, and tested in the laboratory. Dependence of total efficiency upon duty ratio when using real components is derived and verified experimentally. Derived loss and efficiency equations demonstrate rapid drop in efficiency for low duty ratio (below approximately 30%). Experimental results very closely matched calculated power loss and efficiency.",
"title": ""
},
{
"docid": "d627a80d6653e9c8d1374d293ffe6c5c",
"text": "Fine-grained vehicle model recognition is a challenging problem in intelligent transportation systems due to the subtle intra-category appearance variation. In this paper, we demonstrate that this problem can be addressed by locating discriminative parts, where the most significant appearance variation appears, based on the large-scale training set. We also propose a corresponding coarse-to-fine method to achieve this, in which these discriminative regions are detected automatically based on feature maps extracted by convolutional neural network. A mapping from feature maps to the input image is established to locate the regions, and these regions are repeatedly refined until there are no more qualified ones. The global and local features are then extracted from the whole vehicle images and the detected regions, respectively. Based upon the holistic cues and the subordinate-level variation within these global and local features, an one-versus-all support vector machine classifier is applied for classification. The experimental results show that our framework outperforms most of the state-of-the-art approaches, achieving 98.29% accuracy over 281 vehicle makes and models.",
"title": ""
},
{
"docid": "a278abfa0501077eb2f71cbb272689d6",
"text": "Among the many emerging non-volatile memory technologies, chalcogenide (i.e. GeSbTe/GST) based phase change random access memory (PRAM) has shown particular promise. While accurate simulations are required for reducing programming current and enabling higher integration density, many challenges remain for improved simulation of PRAM cell operation including nanoscale thermal conduction and phase change. This work simulates the fully coupled electrical and thermal transport and phase change in 2D PRAM geometries, with specific attention to the impact of thermal boundary resistance between the GST and surrounding materials. For GST layer thicknesses between 25 and 75nm, the interface resistance reduces the predicted programming current and power by 31% and 53%, respectively, for a typical reset transition. The calculations also show the large sensitivity of programming voltage to the GST thermal conductivity. These results show the importance of temperature-dependent thermal properties of materials and interfaces in PRAM cells",
"title": ""
},
{
"docid": "b27b164a7ff43b8f360167e5f886f18a",
"text": "Segmentation and grouping of image elements is required to proceed with image recognition. Due to the fact that the images are two dimensional (2D) representations of the real three dimensional (3D) scenes, the information of the third dimension, like geometrical relations between the objects that are important for reasonable segmentation and grouping, are lost in 2D image representations. Computer stereo vision implies on understanding information stored in 3D-scene. Techniques for stereo computation are observed in this paper. The methods for solving the correspondence problem in stereo image matching are presented. The process of 3D-scene reconstruction from stereo image pairs and extraction of parameters important for image understanding are described. Occluded and surrounding areas in stereo image pairs are stressed out as important for image understanding.",
"title": ""
},
{
"docid": "cb66a49205c9914be88a7631ecc6c52a",
"text": "BACKGROUND\nMidline facial clefts are rare and challenging deformities caused by failure of fusion of the medial nasal prominences. These anomalies vary in severity, and may include microform lines or midline lip notching, incomplete or complete labial clefting, nasal bifidity, or severe craniofacial bony and soft tissue anomalies with orbital hypertelorism and frontoethmoidal encephaloceles. In this study, the authors present 4 cases, classify the spectrum of midline cleft anomalies, and review our technical approaches to the surgical correction of midline cleft lip and bifid nasal deformities. Embryology and associated anomalies are discussed.\n\n\nMETHODS\nThe authors retrospectively reviewed our experience with 4 cases of midline cleft lip with and without nasal deformities of varied complexity. In addition, a comprehensive literature search was performed, identifying studies published relating to midline cleft lip and/or bifid nose deformities. Our assessment of the anomalies in our series, in conjunction with published reports, was used to establish a 5-tiered classification system. Technical approaches and clinical reports are described.\n\n\nRESULTS\nFunctional and aesthetic anatomic correction was successfully achieved in each case without complication. A classification and treatment strategy for the treatment of midline cleft lip and bifid nose deformity is presented.\n\n\nCONCLUSIONS\nThe successful treatment of midline cleft lip and bifid nose deformities first requires the identification and classification of the wide variety of anomalies. With exposure of abnormal nasolabial anatomy, the excision of redundant skin and soft tissue, anatomic approximation of cartilaginous elements, orbicularis oris muscle repair, and craniofacial osteotomy and reduction as indicated, a single-stage correction of midline cleft lip and bifid nasal deformity can be safely and effectively achieved.",
"title": ""
},
{
"docid": "555f06011d03cbe8dedb2fcd198540e9",
"text": "We focus on the challenging task of real-time semantic segmentation in this paper. It finds many practical applications and yet is with fundamental difficulty of reducing a large portion of computation for pixel-wise label inference. We propose an image cascade network (ICNet) that incorporates multi-resolution branches under proper label guidance to address this challenge. We provide in-depth analysis of our framework and introduce the cascade feature fusion unit to quickly achieve highquality segmentation. Our system yields real-time inference on a single GPU card with decent quality results evaluated on challenging datasets like Cityscapes, CamVid and COCO-Stuff.",
"title": ""
},
{
"docid": "e4dc1f30a914dc6f710f23b5bc047978",
"text": "Intelligence, expertise, ability and talent, as these terms have traditionally been used in education and psychology, are socially agreed upon labels that minimize the dynamic, evolving, and contextual nature of individual–environment relations. These hypothesized constructs can instead be described as functional relations distributed across whole persons and particular contexts through which individuals appear knowledgeably skillful. The purpose of this article is to support a concept of ability and talent development that is theoretically grounded in 5 distinct, yet interrelated, notions: ecological psychology, situated cognition, distributed cognition, activity theory, and legitimate peripheral participation. Although talent may be reserved by some to describe individuals possessing exceptional ability and ability may be described as an internal trait, in our description neither ability nor talent are possessed. Instead, they are treated as equivalent terms that can be used to describe functional transactions that are situated across person-in-situation. Further, and more important, by arguing that ability is part of the individual–environment transaction, we take the potential to appear talented out of the hands (or heads) of the few and instead treat it as an opportunity that is available to all although it may be actualized more frequently by some.",
"title": ""
},
{
"docid": "2b745b41b0495ab7adad321080ce2228",
"text": "In any teaching and learning setting, there are some variables that play a highly significant role in both teachers’ and learners’ performance. Two of these influential psychological domains in educational context include self-efficacy and burnout. This study is conducted to investigate the relationship between the self-efficacy of Iranian teachers of English and their reports of burnout. The data was collected through application of two questionnaires. The Maslach Burnout Inventory (MBI; Maslach& Jackson 1981, 1986) and Teacher Efficacy Scales (Woolfolk& Hoy, 1990) were administered to ten university teachers. After obtaining the raw data, the SPSS software (version 16) was used to change the data into numerical interpretable forms. In order to determine the relationship between self-efficacy and teachers’ burnout, correlational analysis was employed. The results showed that participants’ self-efficacy has a reverse relationship with their burnout.",
"title": ""
},
{
"docid": "b38529e74442de80822204b63d061e3e",
"text": "Factors other than age and genetics may increase the risk of developing Alzheimer disease (AD). Accumulation of the amyloid-β (Aβ) peptide in the brain seems to initiate a cascade of key events in the pathogenesis of AD. Moreover, evidence is emerging that the sleep–wake cycle directly influences levels of Aβ in the brain. In experimental models, sleep deprivation increases the concentration of soluble Aβ and results in chronic accumulation of Aβ, whereas sleep extension has the opposite effect. Furthermore, once Aβ accumulates, increased wakefulness and altered sleep patterns develop. Individuals with early Aβ deposition who still have normal cognitive function report sleep abnormalities, as do individuals with very mild dementia due to AD. Thus, sleep and neurodegenerative disease may influence each other in many ways that have important implications for the diagnosis and treatment of AD.",
"title": ""
},
{
"docid": "9855d5b08e46b454a519b0c245e52ccc",
"text": "Sparse matrix vector multiplication (SpMV) kernel is a key computation in linear algebra. Most iterative methods are composed of SpMV operations with BLAS1 updates. Therefore, researchers make extensive efforts to optimize the SpMV kernel in sparse linear algebra. With the appearance of OpenCL, a programming language that standardizes parallel programming across a wide variety of heterogeneous platforms, we are able to optimize the SpMV kernel on many different platforms. In this paper, we propose a new sparse matrix format, the Cocktail Format, to take advantage of the strengths of many different sparse matrix formats. Based on the Cocktail Format, we develop the clSpMV framework that is able to analyze all kinds of sparse matrices at runtime, and recommend the best representations of the given sparse matrices on different platforms. Although solutions that are portable across diverse platforms generally provide lower performance when compared to solutions that are specialized to particular platforms, our experimental results show that clSpMV can find the best representations of the input sparse matrices on both Nvidia and AMD platforms, and deliver 83% higher performance compared to the vendor optimized CUDA implementation of the proposed hybrid sparse format in [3], and 63.6% higher performance compared to the CUDA implementations of all sparse formats in [3].",
"title": ""
},
{
"docid": "8ca8d0bb6ef41b10392e5d64ca96d2ab",
"text": "This longitudinal study provides an analysis of the relationship between personality traits and work experiences with a special focus on the relationship between changes in personality and work experiences in young adulthood. Longitudinal analyses uncovered 3 findings. First, measures of personality taken at age 18 predicted both objective and subjective work experiences at age 26. Second, work experiences were related to changes in personality traits from age 18 to 26. Third, the predictive and change relations between personality traits and work experiences were corresponsive: Traits that \"selected\" people into specific work experiences were the same traits that changed in response to those same work experiences. The relevance of the findings to theories of personality development is discussed.",
"title": ""
}
] |
scidocsrr
|
0802e3c8c5b07284ddadb0a7e110972b
|
ARMin II - 7 DoF rehabilitation robot: mechanics and kinematics
|
[
{
"docid": "1b8e90d78ca21fcaa5cca628cba4111a",
"text": "The Rutgers Master II-ND glove is a haptic interface designed for dextrous interactions with virtual environments. The glove provides force feedback up to 16 N each to the thumb, index, middle, and ring fingertips. It uses custom pneumatic actuators arranged in a direct-drive configuration in the palm. Unlike commercial haptic gloves, the direct-drive actuators make unnecessary cables and pulleys, resulting in a more compact and lighter structure. The force-feedback structure also serves as position measuring exoskeleton, by integrating noncontact Hall-effect and infrared sensors. The glove is connected to a haptic-control interface that reads its sensors and servos its actuators. The interface has pneumatic servovalves, signal conditioning electronics, A/D/A boards, power supply and an imbedded Pentium PC. This distributed computing assures much faster control bandwidth than would otherwise be possible. Communication with the host PC is done over an RS232 line. Comparative data with the CyberGrasp commercial haptic glove is presented.",
"title": ""
}
] |
[
{
"docid": "3e9845c255b5e816741c04c4f7cf5295",
"text": "This paper presents the packaging technology and the integrated antenna design for a miniaturized 122-GHz radar sensor. The package layout and the assembly process are shortly explained. Measurements of the antenna including the flip chip interconnect are presented that have been achieved by replacing the IC with a dummy chip that only contains a through-line. Afterwards, radiation pattern measurements are shown that were recorded using the radar sensor as transmitter. Finally, details of the fully integrated radar sensor are given, together with results of the first Doppler measurements.",
"title": ""
},
{
"docid": "c5c62c1cee291e8ba9e3ed6e04da146d",
"text": "Traumatic brain injury (TBI) is a leading cause of death and disability among persons in the United States. Each year, an estimated 1.5 million Americans sustain a TBI. As a result of these injuries, 50,000 people die, 230,000 people are hospitalized and survive, and an estimated 80,000-90,000 people experience the onset of long-term disability. Rates of TBI-related hospitalization have declined nearly 50% since 1980, a phenomenon that may be attributed, in part, to successes in injury prevention and also to changes in hospital admission practices that shift the care of persons with less severe TBI from inpatient to outpatient settings. The magnitude of TBI in the United States requires public health measures to prevent these injuries and to improve their consequences. State surveillance systems can provide reliable data on injury causes and risk factors, identify trends in TBI incidence, enable the development of cause-specific prevention strategies focused on populations at greatest risk, and monitor the effectiveness of such programs. State follow-up registries, built on surveillance systems, can provide more information regarding the frequency and nature of disabilities associated with TBI. This information can help states and communities to design, implement, and evaluate cost-effective programs for people living with TBI and for their families, addressing acute care, rehabilitation, and vocational, school, and community support.",
"title": ""
},
{
"docid": "a33ccc1d1f906b2f09669166a1fe093c",
"text": "A writer’s style depends not just on personal traits but also on her intent and mental state. In this paper, we show how variants of the same writing task can lead to measurable differences in writing style. We present a case study based on the story cloze task (Mostafazadeh et al., 2016a), where annotators were assigned similar writing tasks with different constraints: (1) writing an entire story, (2) adding a story ending for a given story context, and (3) adding an incoherent ending to a story. We show that a simple linear classifier informed by stylistic features is able to successfully distinguish among the three cases, without even looking at the story context. In addition, combining our stylistic features with language model predictions reaches state of the art performance on the story cloze challenge. Our results demonstrate that different task framings can dramatically affect the way people write.",
"title": ""
},
{
"docid": "2caea7f13980ea4a48fb8e8bb71842f1",
"text": "Internet of Things, commonly known as IoT is a promising area in technology that is growing day by day. It is a concept whereby devices connect with each other or to living things. Internet of Things has shown its great benefits in today’s life. Agriculture is one amongst the sectors which contributes a lot to the economy of Mauritius and to get quality products, proper irrigation has to be performed. Hence proper water management is a must because Mauritius is a tropical island that has gone through water crisis since the past few years. With the concept of Internet of Things and the power of the cloud, it is possible to use low cost devices to monitor and be informed about the status of an agricultural area in real time. Thus, this paper provides the design and implementation of a Smart Irrigation and Monitoring System which makes use of Microsoft Azure machine learning to process data received from sensors in the farm and weather forecasting data to better inform the farmers on the appropriate moment to start irrigation. The Smart Irrigation and Monitoring System is made up of sensors which collect data such as air humidity, air temperature, and most importantly soil moisture data. These data are used to monitor the air quality and water content of the soil. The raw data are transmitted to the",
"title": ""
},
{
"docid": "554b82dc9820bae817bac59e81bf798a",
"text": "This paper proposed a 4-channel parallel 40 Gb/s front-end amplifier (FEA) in optical receiver for parallel optical transmission system. A novel enhancement type regulated cascade (ETRGC) configuration with an active inductor is originated in this paper for the transimpedance amplifier to significantly increase the bandwidth. The technique of three-order interleaving active feedback expands the bandwidth of the gain stage of transimpedance amplifier and limiting amplifier. Experimental results show that the output swing is 210 mV (Vpp) when the input voltage varies from 5 mV to 500 mV. The power consumption of the 4-channel parallel 40 Gb/s front-end amplifier (FEA) is 370 mW with 1.8 V power supply and the chip area is 650 μm×1300 μm.",
"title": ""
},
{
"docid": "1e92b67253b520187c923ba92e7f30d1",
"text": "Availability of high speed internet and wide use of mobile phones leads to gain the popularity to IoT. One such important concept of the same is the use of mobile phones by working parents to watch the activities of baby while babysitting. This paper presents the design of Smart Cradle which supports such video monitoring. This cradle swings automatically on detection of baby cry sound. Also it activates buzzer and gives alerts on phone if-first, baby cry continues till specific time which means now cradle cannot handle baby and baby needs personal attention and second, if the mattress in the cradle is wet. This cradle has an automatic rotating toy for baby's entertainment which will reduce the baby cry possibility.",
"title": ""
},
{
"docid": "b91f80bc17de9c4e15ec80504e24b045",
"text": "Motivated by the design of the well-known Enigma machine, we present a novel ultra-lightweight encryption scheme, referred to as Hummingbird, and its applications to a privacy-preserving identification and mutual authentication protocol for RFID applications. Hummingbird can provide the designed security with a small block size and is therefore expected to meet the stringent response time and power consumption requirements described in the ISO protocol without any modification of the current standard. We show that Hummingbird is resistant to the most common attacks such as linear and differential cryptanalysis. Furthermore, we investigate some properties for integrating the Hummingbird into a privacypreserving identification and mutual authentication protocol.",
"title": ""
},
{
"docid": "c4df97f3db23c91f0ce02411d2e1e999",
"text": "One important challenge for probabilistic logics is reasoning with very large knowledge bases (KBs) of imperfect information, such as those produced by modern web-scale information extraction systems. One scalability problem shared by many probabilistic logics is that answering queries involves “grounding” the query—i.e., mapping it to a propositional representation—and the size of a “grounding” grows with database size. To address this bottleneck, we present a first-order probabilistic language called ProPPR in which approximate “local groundings” can be constructed in time independent of database size. Technically, ProPPR is an extension to stochastic logic programs that is biased towards short derivations; it is also closely related to an earlier relational learning algorithm called the path ranking algorithm. We show that the problem of constructing proofs for this logic is related to computation of personalized PageRank on a linearized version of the proof space, and based on this connection, we develop a provably-correct approximate grounding scheme, based on the PageRank–Nibble algorithm. Building on this, we develop a fast and easily-parallelized weight-learning algorithm for ProPPR. In our experiments, we show that learning for ProPPR is orders of magnitude faster than learning for Markov logic networks; that allowing mutual recursion (joint learning) in KB inference leads to improvements in performance; and that ProPPR can learn weights for a mutually recursive program with hundreds of clauses defining scores of interrelated predicates over a KB containing one million entities.",
"title": ""
},
{
"docid": "0a7a2cfe41f1a04982034ef9cb42c3d4",
"text": "The biocontrol agent Torymus sinensis has been released into Japan, the USA, and Europe to suppress the Asian chestnut gall wasp, Dryocosmus kuriphilus. In this study, we provide a quantitative assessment of T. sinensis effectiveness for suppressing gall wasp infestations in Northwest Italy by annually evaluating the percentage of chestnuts infested by D. kuriphilus (infestation rate) and the number of T. sinensis adults that emerged per 100 galls (emergence index) over a 9-year period. We recorded the number of T. sinensis adults emerging from a total of 64,000 galls collected from 23 sampling sites. We found that T. sinensis strongly reduced the D. kuriphilus population, as demonstrated by reduced galls and an increased T. sinensis emergence index. Specifically, in Northwest Italy, the infestation rate was nearly zero 9 years after release of the parasitoid with no evidence of resurgence in infestation levels. In 2012, the number of T. sinensis females emerging per 100 galls was approximately 20 times higher than in 2009. Overall, T. sinensis proved to be an outstanding biocontrol agent, and its success highlights how the classical biological control approach may represent a cost-effective tool for managing an exotic invasive pest.",
"title": ""
},
{
"docid": "5527521d567290192ea26faeb6e7908c",
"text": "With the rapid development of spectral imaging techniques, classification of hyperspectral images (HSIs) has attracted great attention in various applications such as land survey and resource monitoring in the field of remote sensing. A key challenge in HSI classification is how to explore effective approaches to fully use the spatial–spectral information provided by the data cube. Multiple kernel learning (MKL) has been successfully applied to HSI classification due to its capacity to handle heterogeneous fusion of both spectral and spatial features. This approach can generate an adaptive kernel as an optimally weighted sum of a few fixed kernels to model a nonlinear data structure. In this way, the difficulty of kernel selection and the limitation of a fixed kernel can be alleviated. Various MKL algorithms have been developed in recent years, such as the general MKL, the subspace MKL, the nonlinear MKL, the sparse MKL, and the ensemble MKL. The goal of this paper is to provide a systematic review of MKL methods, which have been applied to HSI classification. We also analyze and evaluate different MKL algorithms and their respective characteristics in different cases of HSI classification cases. Finally, we discuss the future direction and trends of research in this area.",
"title": ""
},
{
"docid": "0b41c2e8be4b9880a834b44375eb6c75",
"text": "We propose AliMe Chat, an open-domain chatbot engine that integrates the joint results of Information Retrieval (IR) and Sequence to Sequence (Seq2Seq) based generation models. AliMe Chat uses an attentive Seq2Seq based rerank model to optimize the joint results. Extensive experiments show our engine outperforms both IR and generation based models. We launch AliMe Chat for a real-world industrial application and observe better results than another public chatbot.",
"title": ""
},
{
"docid": "181463723aaaf766e387ea292cba8d5d",
"text": "Computational thinking has been promoted in recent years as a skill that is as fundamental as being able to read, write, and do arithmetic. However, what computational thinking really means remains speculative. While wonders, discussions and debates will likely continue, this article provides some analysis aimed to further the understanding of the notion. It argues that computational thinking is likely a hybrid thinking paradigm that must accommodate different thinking modes in terms of the way each would influence what we do in computation. Furthermore, the article makes an attempt to define computational thinking and connect the (potential) thinking elements to the known thinking paradigms. Finally, the author discusses some implications of the analysis.",
"title": ""
},
{
"docid": "77cea98467305b9b3b11de8d3cec6ec2",
"text": "NoSQL and especially graph databases are constantly gaining popularity among developers of Web 2.0 applications as they promise to deliver superior performance when handling highly interconnected data compared to traditional relational databases. Apache Shindig is the reference implementation for OpenSocial with its highly interconnected data model. However, the default back-end is based on a relational database. In this paper we describe our experiences with a different back-end based on the graph database Neo4j and compare the alternatives for querying data with each other and the JPA-based sample back-end running on MySQL. Moreover, we analyze why the different approaches often may yield such diverging results concerning throughput. The results show that the graph-based back-end can match and even outperform the traditional JPA implementation and that Cypher is a promising candidate for a standard graph query language, but still leaves room for improvements.",
"title": ""
},
{
"docid": "7cc3da275067df8f6c017da37025856c",
"text": "A simple, green method is described for the synthesis of Gold (Au) and Silver (Ag) nanoparticles (NPs) from the stem extract of Breynia rhamnoides. Unlike other biological methods for NP synthesis, the uniqueness of our method lies in its fast synthesis rates (~7 min for AuNPs) and the ability to tune the nanoparticle size (and subsequently their catalytic activity) via the extract concentration used in the experiment. The phenolic glycosides and reducing sugars present in the extract are largely responsible for the rapid reduction rates of Au(3+) ions to AuNPs. Efficient reduction of 4-nitrophenol (4-NP) to 4-aminophenol (4-AP) in the presence of AuNPs (or AgNPs) and NaBH(4) was observed and was found to depend upon the nanoparticle size or the stem extract concentration used for synthesis.",
"title": ""
},
{
"docid": "ed6543545ec40cf1b197dcd31bcad9d5",
"text": "Stroke is the leading cause of death and adult disability worldwide. Mitochondrial dysfunction has been regarded as one of the hallmarks of ischemia/reperfusion (I/R) induced neuronal death. Maintaining the function of mitochondria is crucial in promoting neuron survival and neurological improvement. In this article, we review current progress regarding the roles of mitochondria in the pathological process of cerebral I/R injury. In particular, we emphasize on the most critical mechanisms responsible for mitochondrial quality control, as well as the recent findings on mitochondrial transfer in acute stroke. We highlight the potential of mitochondria as therapeutic targets for stroke treatment and provide valuable insights for clinical strategies.",
"title": ""
},
{
"docid": "02fd763f6e15b07187e3cbe0fd3d0e18",
"text": "The Batcher`s bitonic sorting algorithm is a parallel sorting algorithm, which is used for sorting the numbers in modern parallel machines. There are various parallel sorting algorithms such as radix sort, bitonic sort, etc. It is one of the efficient parallel sorting algorithm because of load balancing property. It is widely used in various scientific and engineering applications. However, Various researches have worked on a bitonic sorting algorithm in order to improve up the performance of original batcher`s bitonic sorting algorithm. In this paper, tried to review the contribution made by these researchers.",
"title": ""
},
{
"docid": "47d997ef6c4f70105198415002c2c5dc",
"text": "The potential of using of millimeter wave (mmWave) frequency for future wireless cellular communication systems has motivated the study of large-scale antenna arrays for achieving highly directional beamforming. However, the conventional fully digital beamforming methods which require one radio frequency (RF) chain per antenna element is not viable for large-scale antenna arrays due to the high cost and high power consumption of RF chain components in high frequencies. To address the challenge of this hardware limitation, this paper considers a hybrid beamforming architecture in which the overall beamformer consists of a low-dimensional digital beamformer followed by an RF beamformer implemented using analog phase shifters. Our aim is to show that such an architecture can approach the performance of a fully digital scheme with much fewer number of RF chains. Specifically, this paper establishes that if the number of RF chains is twice the total number of data streams, the hybrid beamforming structure can realize any fully digital beamformer exactly, regardless of the number of antenna elements. For cases with fewer number of RF chains, this paper further considers the hybrid beamforming design problem for both the transmission scenario of a point-to-point multiple-input multiple-output (MIMO) system and a downlink multi-user multiple-input single-output (MU-MISO) system. For each scenario, we propose a heuristic hybrid beamforming design that achieves a performance close to the performance of the fully digital beamforming baseline. Finally, the proposed algorithms are modified for the more practical setting in which only finite resolution phase shifters are available. Numerical simulations show that the proposed schemes are effective even when phase shifters with very low resolution are used.",
"title": ""
},
{
"docid": "2f1acb3378e5281efac7db5b3371b131",
"text": "Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL. However, the theoretical understanding of such methods has been rather limited. This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees. We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward. The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model. The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires no explicit uncertainty quantification. Instantiating our framework with simplification gives a variant of model-based RL algorithms Stochastic Lower Bounds Optimization (SLBO). Experiments demonstrate that SLBO achieves stateof-the-art performance when only one million or fewer samples are permitted on a range of continuous control benchmark tasks.1",
"title": ""
},
{
"docid": "e2f5feaa4670bc1ae21d7c88f3d738e3",
"text": "Orofacial clefts are common birth defects and can occur as isolated, nonsyndromic events or as part of Mendelian syndromes. There is substantial phenotypic diversity in individuals with these birth defects and their family members: from subclinical phenotypes to associated syndromic features that is mirrored by the many genes that contribute to the etiology of these disorders. Identification of these genes and loci has been the result of decades of research using multiple genetic approaches. Significant progress has been made recently due to advances in sequencing and genotyping technologies, primarily through the use of whole exome sequencing and genome-wide association studies. Future progress will hinge on identifying functional variants, investigation of pathway and other interactions, and inclusion of phenotypic and ethnic diversity in studies.",
"title": ""
},
{
"docid": "4ec7480aeb1b3193d760d554643a1660",
"text": "The ability to learn is arguably the most crucial aspect of human intelligence. In reinforcement learning, we attempt to formalize a certain type of learning that is based on rewards and penalties. These supervisory signals should guide an agent to learn optimal behavior. In particular, this research focuses on deep reinforcement learning, where the agent should learn to play video games solely from pixel input. This thesis contributes to deep reinforcement learning research by assessing several variations to an existing state-of-the-art algorithm. First, we provide an extensive analysis on how the design decisions of the agent’s deep neural network affect its performance. Second, we introduce a novel neural layer that allows for local specializations in the visual input of the agents, as opposed to the global weight sharing that occurs in convolutional layers. Third, we introduce a ‘what’ and ‘where’ neural network architecture, inspired by the information flow of the visual cortical areas in the human brain. Finally, we explore prototype based deep reinforcement learning by introducing a novel output layer that is largely inspired by learning vector quantization. In a subset of our experiments, we show substantial improvements compared to existing alternatives.",
"title": ""
}
] |
scidocsrr
|
0132c16711e8c1a2aae0773f50a811fd
|
Recognition of Emotion from Speech: A Review
|
[
{
"docid": "dadcecd178721cf1ea2b6bf51bc9d246",
"text": "8 Research on speech and emotion is moving from a period of exploratory research into one where there is a prospect 9 of substantial applications, notably in human–computer interaction. Progress in the area relies heavily on the devel10 opment of appropriate databases. This paper addresses four main issues that need to be considered in developing 11 databases of emotional speech: scope, naturalness, context and descriptors. The state of the art is reviewed. A good deal 12 has been done to address the key issues, but there is still a long way to go. The paper shows how the challenge of 13 developing appropriate databases is being addressed in three major recent projects––the Reading–Leeds project, the 14 Belfast project and the CREST–ESP project. From these and other studies the paper draws together the tools and 15 methods that have been developed, addresses the problems that arise and indicates the future directions for the de16 velopment of emotional speech databases. 2002 Published by Elsevier Science B.V.",
"title": ""
}
] |
[
{
"docid": "7b3da375a856a53b8a303438d015dffe",
"text": "Social Networking Sites (SNS), such as Facebook and LinkedIn, have become the established place for keeping contact with old friends and meeting new acquaintances. As a result, a user leaves a big trail of personal information about him and his friends on the SNS, sometimes even without being aware of it. This information can lead to privacy drifts such as damaging his reputation and credibility, security risks (for instance identity theft) and profiling risks. In this paper, we first highlight some privacy issues raised by the growing development of SNS and identify clearly three privacy risks. While it may seem a priori that privacy and SNS are two antagonist concepts, we also identified some privacy criteria that SNS could fulfill in order to be more respectful of the privacy of their users. Finally, we introduce the concept of a Privacy-enhanced Social Networking Site (PSNS) and we describe Privacy Watch, our first implementation of a PSNS.",
"title": ""
},
{
"docid": "c3cc032538a10ab2f58ff45acb6d16d0",
"text": "How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact by scientists in academia is currently measured by citation based metrics such as h-index, i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia, on the economy, society, health and legislation (comprehensive impact). Indeed scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to be able to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.",
"title": ""
},
{
"docid": "6ff03a254bab9d3484ff198fd7cf9033",
"text": "The fake news epidemic makes it imperative to develop a diagnostic framework that is both parsimonious and valid to guide present and future efforts in fake news detection. This paper represents one of the very first attempts to fill a void in the research on this topic. The LeSiE (Lexical Structure, Simplicity, Emotion) framework we created and validated allows lay people to identify potential fake news without the use of calculators or complex statistics by looking out for three simple cues. Introduction A panel of experts convened by the BBC in 2016 named the breakdown of trusted sources of information as one of the most pressing societal challenges in the 21 century. In the same year, the Oxford Dictionaries named “post-truth” the “word of the year”. The outcomes of two of the most momentous events in 2016 — the US presidential election and Brexit — were thought to have been significantly influenced by the prevalence of fake news surrounding both events. The speed with which misinformation make its way online and find an audience is unprecedented in the history of communication. This phenomenon is fuelled by a combination of the ubiquity of social networks, credulous online media, and the peculiarities of human information processing (Silverman, 2015). As fake news articles are unconstrained by reality or facts, they can be crafted with considerable latitude to appeal to hopes, fears, wishes and curiosity, which in turn drives online virality and engagement (Silverman, 2015). This is complicated by the popularity of rumors as conversational currency in interpersonal interactions, especially under conditions of uncertainty (e.g. Southwell, 2013). Fake news viewing has been shown to foster feelings of inefficacy, alienation, and cynicism (Balmas, 2014). Moreover, fakes news (if plausible) can create a complex and demanding rhetorical situation for organizations (Veil et al., 2012), such that organizational legitimacy could be threatened and undermined (Sellnow, Littlefield, Vidolof, & Webb, 2009). On the political/international stage, the consequences of fake news can take on epic proportions: for example, over half of those who recalled seeing fake news about Donald Trump and Hilary Clinton during the US presidential election 1 Director of Operations and Technology, SSON Analytics. 2 Associate Professor, Corporate Communication (Practice), Singapore Management University.",
"title": ""
},
{
"docid": "a0ee3b5e97fcfae9486396af410cb363",
"text": "We present a freeform modeling framework for unstructured triangle meshes which is based on constraint shape optimization. The goal is to simplify the user interaction even for quite complex freeform or multiresolution modifications. The user first sets various boundary constraints to define a custom tailored (abstract) basis function which is adjusted to a given design task. The actual modification is then controlled by moving one single 9-dof manipulator object. The technique can handle arbitrary support regions and piecewise boundary conditions with smoothness ranging continuously from C0 to C2. To more naturally adapt the modification to the shape of the support region, the deformed surface can be tuned to bend with anisotropic stiffness. We are able to achieve real-time response in an interactive design session even for complex meshes by precomputing a set of scalar-valued basis functions that correspond to the degrees of freedom of the manipulator by which the user controls the modification.",
"title": ""
},
{
"docid": "dd36b71a91aa0b8b818ab6b4e6eb39c2",
"text": "Facial beauty prediction (FBP) is a significant visual recognition problem to make assessment of facial attractiveness that is consistent to human perception. To tackle this problem, various data-driven models, especially state-of-the-art deep learning techniques, were introduced, and benchmark dataset become one of the essential elements to achieve FBP. Previous works have formulated the recognition of facial beauty as a specific supervised learning problem of classification, regression or ranking, which indicates that FBP is intrinsically a computation problem with multiple paradigms. However, most of FBP benchmark datasets were built under specific computation constrains, which limits the performance and flexibility of the computational model trained on the dataset. In this paper, we argue that FBP is a multi-paradigm computation problem, and propose a new diverse benchmark dataset, called SCUT-FBP5500, to achieve multi-paradigm facial beauty prediction. The SCUT-FBP5500 dataset has totally 5500 frontal faces with diverse properties (male/female, Asian/Caucasian, ages) and diverse labels (face landmarks, beauty scores within [1], [5], beauty score distribution), which allows different computational models with different FBP paradigms, such as appearance-based/shape-based facial beauty classification/regression model for male/female of Asian/Caucasian. We evaluated the SCUT-FBP5500 dataset for FBP using different combinations of feature and predictor, and various deep learning methods. The results indicates the improvement of FBP and the potential applications based on the SCUT-FBP5500.",
"title": ""
},
{
"docid": "3b80d6b7cd4b9b0225cff5a4466bb390",
"text": "A large number of objectives have been proposed to train latent variable generative models. We show that many of them are Lagrangian dual functions of the same primal optimization problem. The primal problem optimizes the mutual information between latent and visible variables, subject to the constraints of accurately modeling the data distribution and performing correct amortized inference. By choosing to maximize or minimize mutual information, and choosing different Lagrange multipliers, we obtain different objectives including InfoGAN, ALI/BiGAN, ALICE, CycleGAN, beta-VAE, adversarial autoencoders, AVB, AS-VAE and InfoVAE. Based on this observation, we provide an exhaustive characterization of the statistical and computational trade-offs made by all the training objectives in this class of Lagrangian duals. Next, we propose a dual optimization method where we optimize model parameters as well as the Lagrange multipliers. This method achieves Pareto optimal solutions in terms of optimizing information and satisfying the constraints.",
"title": ""
},
{
"docid": "741a897b87cc76d68f5400974eee6b32",
"text": "Numerous techniques exist to augment the security functionality of Commercial O -The-Shelf (COTS) applications and operating systems, making them more suitable for use in mission-critical systems. Although individually useful, as a group these techniques present di culties to system developers because they are not based on a common framework which might simplify integration and promote portability and reuse. This paper presents techniques for developing Generic Software Wrappers { protected, non-bypassable kernel-resident software extensions for augmenting security without modi cation of COTS source. We describe the key elements of our work: our high-level Wrapper De nition Language (WDL), and our framework for con guring, activating, and managing wrappers. We also discuss code reuse, automatic management of extensions, a framework for system-building through composition, platform-independence, and our experiences with our Solaris and FreeBSD prototypes.",
"title": ""
},
{
"docid": "43fec39ff1d77fbe246e31d98f33f861",
"text": "OBJECTIVE\nTo investigate a modulation of the N170 face-sensitive component related to the perception of other-race (OR) and same-race (SR) faces, as well as differences in face and non-face object processing, by combining different methods of event-related potential (ERP) signal analysis.\n\n\nMETHODS\nSixty-two channel ERPs were recorded in 12 Caucasian subjects presented with Caucasian and Asian faces along with non-face objects. Surface data were submitted to classical waveforms and ERP map topography analysis. Underlying brain sources were estimated with two inverse solutions (BESA and LORETA).\n\n\nRESULTS\nThe N170 face component was identical for both race faces. This component and its topography revealed a face specific pattern regardless of race. However, in this time period OR faces evoked significantly stronger medial occipital activity than SR faces. Moreover, in terms of maps, at around 170 ms face-specific activity significantly preceded non-face object activity by 25 ms. These ERP maps were followed by similar activation patterns across conditions around 190-300 ms, most likely reflecting the activation of visually derived semantic information.\n\n\nCONCLUSIONS\nThe N170 was not sensitive to the race of the faces. However, a possible pre-attentive process associated to the relatively stronger unfamiliarity for OR faces was found in medial occipital area. Moreover, our data provide further information on the time-course of face and non-face object processing.",
"title": ""
},
{
"docid": "7c64c486b92623bd45f8c2ffd6a6c632",
"text": "Multi-task learning (MTL), which optimizes multiple related learning tasks at the same time, has been widely used in various applications, including natural language processing, speech recognition, computer vision, multimedia data processing, biomedical imaging, socio-biological data analysis, multi-modality data analysis, etc. MTL sometimes is also referred to as joint learning, and is closely related to other machine learning subfields like multi-class learning, transfer learning, and learning with auxiliary tasks, to name a few. In this paper, we provide a brief review on this topic, discuss the motivation behind this machine learning method, compare various MTL algorithms, review MTL methods for incomplete data, and discuss its application in deep learning. We aim to provide the readers with a simple way to understand MTL without too many complicated equations, and to help the readers to apply MTL in their applications.",
"title": ""
},
{
"docid": "d4da93e3d68bc9d97e478e62f0fe54b3",
"text": "A method to reduce RF leakage in split-block fabricated metallic ridged waveguide is presented. By placing a pin wall into the split-block seam, RF leakage and detrimental performance spikes are significantly reduced. Three different split-block double-ridge waveguide components of varying design complexity operating over 18–45-GHz bandwidth are fabricated and measured. It is shown that the addition of the pin wall improves the performance of each component such that results correlate well to simulated models with no split.",
"title": ""
},
{
"docid": "9a60a2103531b018bfdd5f1bf3fc52d0",
"text": "Translation of natural language text using statistical machine translation (SMT) is a supervised machine learning problem. SMT algorithms are trained to learn how to translate by providing many translations produced by human language experts. The field SMT has gained momentum in recent three decades. New techniques are constantly introduced by the researchers. This is survey paper presenting an introduction of the recent developments in the field. The paper also describes the recent research for word alignment and language modelling problems in the translation process. An overview of these two sub problems is enlisted. Along the way, some challenges in machine translation are presented.",
"title": ""
},
{
"docid": "a3ace9ac6ae3f3d2dd7e02bd158a5981",
"text": "The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient implementation of the algorithm for certain natural cases. We discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different WWW search strategies, each of which is a query expansion for a given domain. For this task, we compare the performance of RankBoost to the individual search strategies. The second experiment is a collaborative-filtering task for making movie recommendations. Here, we present results comparing RankBoost to nearest-neighbor and regression algorithms. Thesis Supervisor: David R. Karger Title: Associate Professor",
"title": ""
},
{
"docid": "9a2755c84d82c410447842012d3a878d",
"text": "The potential impacts of genetically modified (GM) crops on income, poverty and nutrition in developing countries continue to be the subject of public controversy. Here, a review of the evidence is given. As an example of a first-generation GM technology, the effects of insect-resistant Bt cotton are analysed. Bt cotton has already been adopted by millions of small-scale farmers, in India, China, and South Africa among others. On average, farmers benefit from insecticide savings, higher effective yields and sizeable income gains. Insights from India suggest that Bt cotton is employment generating and poverty reducing. As an example of a second-generation technology, the likely impacts of beta-carotene-rich Golden Rice are analysed from an ex ante perspective. Vitamin A deficiency is a serious nutritional problem, causing multiple adverse health outcomes. Simulations for India show that Golden Rice could reduce related health problems significantly, preventing up to 40,000 child deaths every year. These examples clearly demonstrate that GM crops can contribute to poverty reduction and food security in developing countries. To realise such social benefits on a larger scale requires more public support for research targeted to the poor, as well as more efficient regulatory and technology delivery systems.",
"title": ""
},
{
"docid": "b1e2326ebdf729e5b55822a614b289a9",
"text": "The work presented in this paper is targeted at the first phase of the test and measurements product life cycle, namely standardisation. During this initial phase of any product, the emphasis is on the development of standards that support new technologies while leaving the scope of implementations as open as possible. To allow the engineer to freely create and invent tools that can quickly help him simulate or emulate his ideas are paramount. Within this scope, a traffic generation system has been developed for IEC 61850 Sampled Values which will help in the evaluation of the data models, data acquisition, data fusion, data integration and data distribution between the various devices and components that use this complex set of evolving standards in Smart Grid systems.",
"title": ""
},
{
"docid": "3cd7523afa1b648516b86c5221a630e7",
"text": "MOTIVATION\nAdvances in Next-Generation Sequencing technologies and sample preparation recently enabled generation of high-quality jumping libraries that have a potential to significantly improve short read assemblies. However, assembly algorithms have to catch up with experimental innovations to benefit from them and to produce high-quality assemblies.\n\n\nRESULTS\nWe present a new algorithm that extends recently described exSPAnder universal repeat resolution approach to enable its applications to several challenging data types, including jumping libraries generated by the recently developed Illumina Nextera Mate Pair protocol. We demonstrate that, with these improvements, bacterial genomes often can be assembled in a few contigs using only a single Nextera Mate Pair library of short reads.\n\n\nAVAILABILITY AND IMPLEMENTATION\nDescribed algorithms are implemented in C++ as a part of SPAdes genome assembler, which is freely available at bioinf.spbau.ru/en/spades.\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "7547da5f5e33051dcbbb8a2d7abe46ce",
"text": "We introduce the joint time-frequency scattering transform, a time shift invariant descriptor of time-frequency structure for audio classification. It is obtained by applying a two-dimensional wavelet transform in time and log-frequency to a time-frequency wavelet scalogram. We show that this descriptor successfully characterizes complex time-frequency phenomena such as time-varying filters and frequency modulated excitations. State-of-the-art results are achieved for signal reconstruction and phone segment classification on the TIMIT dataset.",
"title": ""
},
{
"docid": "7753f01bccddd8c89290f9ede0f430c7",
"text": "Context Yoga combines exercise with achieving a state of mental focus through breathing. In the United States, 1 million people practice yoga for low back pain. Contribution The authors recruited patients who had a recent primary care visit for low back pain and randomly assigned 101 to yoga or conventional exercise or a self-care book. Patients in the yoga and exercise groups reported good adherence at 26 weeks. Compared with self-care, symptoms were milder and function was better with yoga. The exercise group had intermediate outcomes. Symptoms improved between 12 and 26 weeks only with yoga. Implications Yoga was a more effective treatment for low back pain than a self-care book. The Editors Most treatments for chronic low back pain have modest efficacy at best (1). Exercise is one of the few proven treatments for chronic low back pain; however, its effects are often small, and no form has been shown to be clearly better than another (2-5). Yoga, which often couples physical exercise with breathing, is a popular alternative form of mindbody therapy. An estimated 14 million Americans practiced yoga in 2002 (6), including more than 1 million who used it as a treatment for back pain (7, 8). Yoga may benefit patients with back pain simply because it involves exercise or because of its effects on mental focus. We found no published studies in western biomedical literature that evaluated yoga for chronic low back pain; therefore, we designed a clinical trial to evaluate its effectiveness and safety for this condition. Methods Study Design and Setting This randomized, controlled trial compared the effects of yoga classes with conventional exercise classes and with a self-care book in patients with low back pain that persisted for at least 12 weeks. The study was conducted at Group Health Cooperative, a nonprofit, integrated health care system with approximately 500000 enrollees in Washington State and Idaho. The Group Health Cooperative institutional review board approved the study protocol, and all study participants gave oral informed consent before the eligibility screening and written consent before the baseline interview and randomization. Patients Patients from Group Health Cooperative were recruited for 12-week sessions of classes that were conducted between June and December 2003. We mailed letters describing the study to 6913 patients between 20 and 64 years of age who had visited a primary care provider for treatment of back pain 3 to 15 months before the study (according to electronic visit records). We also advertised the study in the health plan's consumer magazine. Patients were informed that we were comparing 3 approaches for the relief of back pain and that each was designed to help reduce the negative effects of low back pain on people's lives. A research assistant telephoned patients who returned statements of interest to assess their eligibility. After we received their signed informed consent forms, eligible patients were telephoned again for collection of baseline data and randomization to treatment. We excluded individuals whose back pain was complicated (for example, sciatica, previous back surgery, or diagnosed spinal stenosis), potentially attributable to specific underlying diseases or conditions (for example, pregnancy, metastatic cancer, spondylolisthesis, fractured bones, or dislocated joints), or minimal (rating of less than 3 on a bothersomeness scale of 0 to 10). 
We also excluded individuals who were currently receiving other back pain treatments or had participated in yoga or exercise training for back pain in the past year, those with a possible disincentive to improve (such as patients receiving workers' compensation or those involved in litigation), and those with unstable medical or severe psychiatric conditions or dementia. Patients who had contraindications (for example, symptoms consistent with severe disk disease) or schedules that precluded class participation, those who were unwilling to practice at home, or those who could not speak or understand English were also excluded. Randomization Protocol Participants were randomly assigned to participate in yoga or exercise classes or to receive the self-care book. We randomly generated treatment assignments for each class series by using a computer program with block sizes of 6 or 9. A researcher who was not involved in patient recruitment or randomization placed the assignments in opaque, sequentially numbered envelopes, which were stored in a locked filing cabinet until needed for randomization. Interventions The yoga and exercise classes developed specifically for this study consisted of 12 weekly 75-minute classes designed to benefit people with chronic low back pain. In addition to attending classes held at Group Health facilities, participants were asked to practice daily at home. Participants received handouts that described home practices, and yoga participants received auditory compact discs to guide them through the sequence of postures with the appropriate mental focus (examples of postures are shown in the Appendix Figure). Study participants retained access to all medical care provided by their insurance plan. Appendix Figure. Yoga postures Yoga We chose to use viniyoga, a therapeutically oriented style of yoga that emphasizes safety and is relatively easy to learn. Our class instructor and a senior teacher of viniyoga, who has +written a book about its therapeutic uses (9), designed the yoga intervention for patients with back pain who did not have previous yoga experience. Although all the sessions emphasized use of postures and breathing for managing low back symptoms, each had a specific focus: relaxation; strength-building, flexibility, and large-muscle movement; asymmetric poses; strengthening the hip muscles; lateral bending; integration; and customizing a personal practice. The postures were selected from a core of 17 relatively simple postures, some with adaptations (Appendix Table), and the sequence of the postures in each class was performed according to the rudiments of viniyoga (9). Each class included a question-and-answer period, an initial and final breathing exercise, 5 to 12 postures, and a guided deep relaxation. Most postures were not held but were repeated 3 or 6 times. Exercise Because we could not identify a clearly superior form of therapeutic exercise for low back pain from the literature, a physical therapist designed a 12-session class series that was 1) different from what most participants would have probably experienced in previous physical therapy sessions (to maximize adherence) and 2) similar to the yoga classes in number and length. We included a short educational talk that provided information on proper body mechanics, the benefits of exercise and realistic goal setting, and overcoming common barriers to developing an exercise routine (for example, fear). 
Each session began with the educational talk; feedback from the previous week; simple warm-ups to increase heart rate; and repetitions of a series of 7 aerobic exercises and 10 strengthening exercises that emphasized leg, hip, abdominal, and back muscles. Over the course of the 12-week series, the number of repetitions of each aerobic and strength exercise increased from 8 to 30 in increments of 2. The strengthening exercises were followed by 12 stretches for the same muscle groups; each stretch was held for 30 seconds. Classes ended with a short, unguided period of deep, slow breathing. Self-Care Book Participants were mailed a copy of The Back Pain Helpbook (10), an evidence-based book that emphasized such self-care strategies as adoption of a comprehensive fitness and strength program, appropriate lifestyle modification, and guidelines for managing flare-ups. Although we did not provide any instructions for using the book, many of the chapters concluded with specific action items. Outcome Measures Interviewers who were masked to the treatment assignments conducted telephone interviews at baseline and at 6, 12, and 26 weeks after randomization. The baseline interview collected information regarding sociodemographic characteristics, back pain history, and the participant's level of knowledge about yoga and exercise. Participants were asked to describe their current pain and to rate their expectations for each intervention. The primary outcomes were back-related dysfunction and symptoms, and the primary time point of interest was 12 weeks. We used the modified Roland Disability Scale (11) to measure patient dysfunction by totaling the number of positive responses to 23 questions about limitations of daily activities that might arise from back pain. This scale has been found to be valid, reliable, and sensitive to change (12-14); researchers estimate that the minimum clinically significant difference on the Roland scale ranges from 2 to 3 points (13, 15). Participants rated how bothersome their back pain had been during the previous week on an 11-point scale, in which 0 represented not at all bothersome and 10 represented extremely bothersome; a similar measure demonstrated substantial construct validity in earlier research (13). Estimates of the minimum clinically significant difference on the bothersomeness scale were approximately 1.5 points (16, 17). Secondary outcome measures were general health status, which we assessed by conducting the Short Form-36 Health Survey (18); degree of restricted activity as determined by patient responses to 3 questions (19); and medication use. After all outcomes data were collected, we asked questions related to specific interventions (for example, Did you practice at home?). At the 12-week interview, we asked class participants about any pain or substantial discomfort they experienced as a result of the classes. We assessed adherence to the home practice recommendations by asking class participants to complete weekly home practice logs and by asking about home practice during the follow-up i",
"title": ""
},
{
"docid": "b712552d760c887131f012e808dca253",
"text": "To the same utterance, people’s responses in everyday dialogue may be diverse largely in terms of content semantics, speaking styles, communication intentions and so on. Previous generative conversational models ignore these 1-to-n relationships between a post to its diverse responses, and tend to return high-frequency but meaningless responses. In this study we propose a mechanism-aware neural machine for dialogue response generation. It assumes that there exists some latent responding mechanisms, each of which can generate different responses for a single input post. With this assumption we model different responding mechanisms as latent embeddings, and develop a encoder-diverter-decoder framework to train its modules in an end-to-end fashion. With the learned latent mechanisms, for the first time these decomposed modules can be used to encode the input into mechanism-aware context, and decode the responses with the controlled generation styles and topics. Finally, the experiments with human judgements, intuitive examples, detailed discussions demonstrate the quality and diversity of the generated responses with 9.80% increase of acceptable ratio over the best of six baseline methods.",
"title": ""
},
{
"docid": "75f5679d9c1bab3585c1bf28d50327d8",
"text": "From medical charts to national census, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, today, healthcare now generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, the ability to derive accurate and informative insights requires more than the ability to execute machine learning models. Rather, a deeper understanding of the data on which the models are run is imperative for their success. While a significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitalized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges, and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion around data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques.",
"title": ""
},
{
"docid": "e757926fbaec4097530b9a00c1278b1c",
"text": "Many fish populations have both resident and migratory individuals. Migrants usually grow larger and have higher reproductive potential but lower survival than resident conspecifics. The ‘decision’ about migration versus residence probably depends on the individual growth rate, or a physiological process like metabolic rate which is correlated with growth rate. Fish usually mature as their somatic growth levels off, where energetic costs of maintenance approach energetic intake. After maturation, growth also stagnates because of resource allocation to reproduction. Instead of maturation, however, fish may move to an alternative feeding habitat and their fitness may thereby be increased. When doing so, maturity is usually delayed, either to the new asymptotic length, or sooner, if that gives higher expected fitness. Females often dominate among migrants and males among residents. The reason is probably that females maximize their fitness by growing larger, because their reproductive success generally increases exponentially with body size. Males, on the other hand, may maximize fitness by alternative life histories, e.g. fighting versus sneaking, as in many salmonid species where small residents are the sneakers and large migrants the fighters. Partial migration appears to be partly developmental, depending on environmental conditions, and partly genetic, inherited as a quantitative trait influenced by a number of genes.",
"title": ""
}
] |
scidocsrr
|
86fd6d14bd128affddf6af16f906ac06
|
MaMaDroid: Detecting Android Malware by Building Markov Chains of Behavioral Models
|
[
{
"docid": "e8b5fcac441c46e46b67ffbdd4b043e6",
"text": "We present DroidSafe, a static information flow analysis tool that reports potential leaks of sensitive information in Android applications. DroidSafe combines a comprehensive, accurate, and precise model of the Android runtime with static analysis design decisions that enable the DroidSafe analyses to scale to analyze this model. This combination is enabled by accurate analysis stubs, a technique that enables the effective analysis of code whose complete semantics lies outside the scope of Java, and by a combination of analyses that together can statically resolve communication targets identified by dynamically constructed values such as strings and class designators. Our experimental results demonstrate that 1) DroidSafe achieves unprecedented precision and accuracy for Android information flow analysis (as measured on a standard previously published set of benchmark applications) and 2) DroidSafe detects all malicious information flow leaks inserted into 24 real-world Android applications by three independent, hostile Red-Team organizations. The previous state-of-the art analysis, in contrast, detects less than 10% of these malicious flows.",
"title": ""
},
{
"docid": "4a85e3b10ecc4c190c45d0dfafafb388",
"text": "The number of malicious applications targeting the Android system has literally exploded in recent years. While the security community, well aware of this fact, has proposed several methods for detection of Android malware, most of these are based on permission and API usage or the identification of expert features. Unfortunately, many of these approaches are susceptible to instruction level obfuscation techniques. Previous research on classic desktop malware has shown that some high level characteristics of the code, such as function call graphs, can be used to find similarities between samples while being more robust against certain obfuscation strategies. However, the identification of similarities in graphs is a non-trivial problem whose complexity hinders the use of these features for malware detection. In this paper, we explore how recent developments in machine learning classification of graphs can be efficiently applied to this problem. We propose a method for malware detection based on efficient embeddings of function call graphs with an explicit feature map inspired by a linear-time graph kernel. In an evaluation with 12,158 malware samples our method, purely based on structural features, outperforms several related approaches and detects 89% of the malware with few false alarms, while also allowing to pin-point malicious code structures within Android applications.",
"title": ""
}
] |
[
{
"docid": "81476f837dd763301ba065ac78c5bb65",
"text": "Background: The ideal lip augmentation technique provides the longest period of efficacy, lowest complication rate, and best aesthetic results. A myriad of techniques have been described for lip augmentation, but the optimal approach has not yet been established. This systematic review with metaregression will focus on the various filling procedures for lip augmentation (FPLA), with the goal of determining the optimal approach. Methods: A systematic search for all English, French, Spanish, German, Italian, Portuguese and Dutch language studies involving FPLA was performed using these databases: Elsevier Science Direct, PubMed, Highwire Press, Springer Standard Collection, SAGE, DOAJ, Sweetswise, Free E-Journals, Ovid Lippincott Williams & Wilkins, Willey Online Library Journals, and Cochrane Plus. The reference section of every study selected through this database search was subsequently examined to identify additional relevant studies. Results: The database search yielded 29 studies. Nine more studies were retrieved from the reference sections of these 29 studies. The level of evidence ratings of these 38 studies were as follows: level Ib, four studies; level IIb, four studies; level IIIb, one study; and level IV, 29 studies. Ten studies were prospective. Conclusions: This systematic review sought to highlight all the quality data currently available regarding FPLA. Because of the considerable diversity of procedures, no definitive comparisons or conclusions were possible. Additional prospective studies and clinical trials are required to more conclusively determine the most appropriate approach for this procedure. Level of evidence: IV. © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ee223b75a3a99f15941e4725d261355e",
"text": "BACKGROUND\nIn Mexico, stunting and anemia have declined but are still high in some regions and subpopulations, whereas overweight and obesity have increased at alarming rates in all age and socioeconomic groups.\n\n\nOBJECTIVE\nThe objective was to describe the coexistence of stunting, anemia, and overweight and obesity at the national, household, and individual levels.\n\n\nDESIGN\nWe estimated national prevalences of and trends for stunting, anemia, and overweight and obesity in children aged <5 y and in school-aged children (5-11 y old) and anemia and overweight and obesity in women aged 20-49 y by using the National Health and Nutrition Surveys conducted in 1988, 1999, 2006, and 2012. With the use of the most recent data (2012), the double burden of malnutrition at the household level was estimated and defined as the coexistence of stunting in children aged <5 y and overweight or obesity in the mother. At the individual level, double burden was defined as concurrent stunting and overweight and obesity in children aged 5-11 y and concurrent anemia and overweight or obesity in children aged 5-11 y and in women. We also tested if the coexistence of the conditions corresponded to expected values, under the assumption of independent distributions of each condition.\n\n\nRESULTS\nAt the household level, the prevalence of concurrent stunting in children aged <5 y and overweight and obesity in mothers was 8.4%; at the individual level, prevalences were 1% for stunting and overweight or obesity and 2.9% for anemia and overweight or obesity in children aged 5-11 y and 7.6% for anemia and overweight or obesity in women. At the household and individual levels in children aged 5-11 y, prevalences of double burden were significantly lower than expected, whereas anemia and the prevalence of overweight or obesity in women were not different from that expected.\n\n\nCONCLUSIONS\nAlthough some prevalences of double burden were lower than expected, assuming independent distributions of the 2 conditions, the coexistence of stunting, overweight or obesity, and anemia at the national, household, and intraindividual levels in Mexico calls for policies and programs to prevent the 3 conditions.",
"title": ""
},
{
"docid": "f0432af5265a08ccde0111d2d05b93e2",
"text": "Cyber security is a critical issue now a days in various different domains in different disciplines. This paper presents a review analysis of cyber hacking attacks along with its experimental results and proposes a new methodology 3SEMCS named as three step encryption method for cyber security. By utilizing this new designed methodology, security at highest level will be easily provided especially on the time of request submission in the search engine as like google during client server communication. During its working a group of separate encryption algorithms are used. The benefit to utilize this three step encryption is to provide more tighten security by applying three separate encryption algorithms in each phase having different operations. And the additional benefit to utilize this methodology is to run over new designed private browser named as “RR” that is termed as Rim Rocks correspondingly this also help to check the authenticated sites or phishing sites by utilizing the strategy of passing URL address from phishing tank. This may help to block the phisher sites and user will relocate on previous page. The purpose to design this personnel browser is to enhance the level of security by",
"title": ""
},
{
"docid": "93d4f159eb718b6e8d2b5cb252f7bb6c",
"text": "We present RRT, the first asymptotically optimal samplingbased motion planning algorithm for real-time navigation in dynamic environments (containing obstacles that unpredictably appear, disappear, and move). Whenever obstacle changes are observed, e.g., by onboard sensors, a graph rewiring cascade quickly updates the search-graph and repairs its shortest-path-to-goal subtree. Both graph and tree are built directly in the robot’s state space, respect the kinematics of the robot, and continue to improve during navigation. RRT is also competitive in static environments—where it has the same amortized per iteration runtime as RRT and RRT* Θ (logn) and is faster than RRT ω ( log n ) . In order to achieve O (logn) iteration time, each node maintains a set of O (logn) expected neighbors, and the search graph maintains -consistency for a predefined .",
"title": ""
},
{
"docid": "d5d2b61493ed11ee74d566b7713b57ba",
"text": "BACKGROUND\nSymptomatic breakthrough in proton pump inhibitor (PPI)-treated gastro-oesophageal reflux disease (GERD) patients is a common problem with a range of underlying causes. The nonsystemic, raft-forming action of alginates may help resolve symptoms.\n\n\nAIM\nTo assess alginate-antacid (Gaviscon Double Action, RB, Slough, UK) as add-on therapy to once-daily PPI for suppression of breakthrough reflux symptoms.\n\n\nMETHODS\nIn two randomised, double-blind studies (exploratory, n=52; confirmatory, n=262), patients taking standard-dose PPI who had breakthrough symptoms, assessed by Heartburn Reflux Dyspepsia Questionnaire (HRDQ), were randomised to add-on Gaviscon or placebo (20 mL after meals and bedtime). The exploratory study endpoint was change in HRDQ score during treatment vs run-in. The confirmatory study endpoint was \"response\" defined as ≥3 days reduction in the number of \"bad\" days (HRDQ [heartburn/regurgitation] >0.70) during treatment vs run-in.\n\n\nRESULTS\nIn the exploratory study, significantly greater reductions in HRDQ scores (heartburn/regurgitation) were observed in the Gaviscon vs placebo (least squares mean difference [95% CI] -2.10 [-3.71 to -0.48]; P=.012). Post hoc \"responder\" analysis of the exploratory study also revealed significantly more Gaviscon patients (75%) achieved ≥3 days reduction in \"bad\" days vs placebo patients (36%), P=.005. In the confirmatory study, symptomatic improvement was observed with add-on Gaviscon (51%) but there was no significant difference in response vs placebo (48%) (OR (95% CI) 1.15 (0.69-1.91), P=.5939).\n\n\nCONCLUSIONS\nAdding Gaviscon to PPI reduced breakthrough GERD symptoms but a nearly equal response was observed for placebo. Response to intervention may vary according to whether symptoms are functional in origin.",
"title": ""
},
{
"docid": "d18ed4c40450454d6f517c808da7115a",
"text": "Polythelia is a rare congenital malformation that occurs in 1-2% of the population. Intra-areolar polythelia is the presence of one or more supernumerary nipples located within the areola. This is extremely rare. This article presents 3 cases of intra-areolar polythelia treated at our Department. These cases did not present other associated malformation. Surgical correction was performed for psychological and cosmetic reasons using advancement flaps. The aesthetic and functional results were satisfactory.",
"title": ""
},
{
"docid": "34c343413fc748c1fc5e07fb40e3e97d",
"text": "We study online social networks in which relationships can be either positive (indicating relations such as friendship) or negative (indicating relations such as opposition or antagonism). Such a mix of positive and negative links arise in a variety of online settings; we study datasets from Epinions, Slashdot and Wikipedia. We find that the signs of links in the underlying social networks can be predicted with high accuracy, using models that generalize across this diverse range of sites. These models provide insight into some of the fundamental principles that drive the formation of signed links in networks, shedding light on theories of balance and status from social psychology; they also suggest social computing applications by which the attitude of one user toward another can be estimated from evidence provided by their relationships with other members of the surrounding social network.",
"title": ""
},
{
"docid": "f04a17b6e996be1828d666f70b055c46",
"text": "Machine learning methods are becoming integral to scientific inquiry in numerous disciplines. We demonstrated that machine learning can be used to predict the performance of a synthetic reaction in multidimensional chemical space using data obtained via high-throughput experimentation. We created scripts to compute and extract atomic, molecular, and vibrational descriptors for the components of a palladium-catalyzed Buchwald-Hartwig cross-coupling of aryl halides with 4-methylaniline in the presence of various potentially inhibitory additives. Using these descriptors as inputs and reaction yield as output, we showed that a random forest algorithm provides significantly improved predictive performance over linear regression analysis. The random forest model was also successfully applied to sparse training sets and out-of-sample prediction, suggesting its value in facilitating adoption of synthetic methodology.",
"title": ""
},
{
"docid": "582fc5f68422cf5ac35c526a905d6f42",
"text": "In this paper I present a review of the different forms of network security in place in the world today. It is a tutorial type of paper and it especially deals with cryptographic algorithms, security protocols, authentication issues, end to end security solutions with a host of other network security issues. I compile these into a general purview of this topic and then I go in detail regarding the issues involved. I first focus on the state of Network security in the world today after explaining the need for network security. After highlighting these, I will be looking into the different types of Network security used. This part is quite an extensive coverage into the various forms of Network security. Then, I will be highlighting the problems still facing computer networks followed by the latest research done in the areas of Computer and Network Security.",
"title": ""
},
{
"docid": "205880d3205cb0f4844c20dcf51c4890",
"text": "Recently, deep networks were proved to be more effective than shallow architectures to face complex real–world applications. However, theoretical results supporting this claim are still few and incomplete. In this paper, we propose a new topological measure to study how the depth of feedforward networks impacts on their ability of implementing high complexity functions. Upper and lower bounds on network complexity are established, based on the number of hidden units and on their activation functions, showing that deep architectures are able, with the same number of resources, to address more difficult classification problems.",
"title": ""
},
{
"docid": "81d82cd481ee3719c74d381205a4a8bb",
"text": "Consider a set of <italic>S</italic> of <italic>n</italic> data points in real <italic>d</italic>-dimensional space, R<supscrpt>d</supscrpt>, where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess <italic>S</italic> into a data structure, so that given any query point <italic>q</italic><inline-equation> <f>∈</f></inline-equation> R<supscrpt>d</supscrpt>, is the closest point of S to <italic>q</italic> can be reported quickly. Given any positive real ε, data point <italic>p</italic> is a (1 +ε)-<italic>approximate nearest neighbor</italic> of <italic>q</italic> if its distance from <italic>q</italic> is within a factor of (1 + ε) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of <italic>n</italic> points in R<supscrpt>d</supscrpt> in <italic>O(dn</italic> log <italic>n</italic>) time and <italic>O(dn)</italic> space, so that given a query point <italic> q</italic> <inline-equation> <f>∈</f></inline-equation> R<supscrpt>d</supscrpt>, and ε > 0, a (1 + ε)-approximate nearest neighbor of <italic>q</italic> can be computed in <italic>O</italic>(<italic>c</italic><subscrpt><italic>d</italic>, ε</subscrpt> log <italic>n</italic>) time, where <italic>c<subscrpt>d,ε</subscrpt></italic>≤<italic>d</italic> <inline-equation> <f><fen lp=\"ceil\">1 + 6d/<g>e</g><rp post=\"ceil\"></fen></f></inline-equation>;<supscrpt>d</supscrpt> is a factor depending only on dimension and ε. In general, we show that given an integer <italic>k</italic> ≥ 1, (1 + ε)-approximations to the <italic>k</italic> nearest neighbors of <italic>q</italic> can be computed in additional <italic>O(kd</italic> log <italic>n</italic>) time.",
"title": ""
},
{
"docid": "044a73d9db2f61dc9b4f9de0bdaa1b3f",
"text": "Traditionally employed human-to-human and human-to-machine communication has recently been replaced by a new trend known as the Internet of things (IoT). IoT enables device-to-device communication without any human intervention, hence, offers many challenges. In this paradigm, machine’s self-sustainability due to limited energy capabilities presents a great challenge. Therefore, this paper proposed a low-cost energy harvesting device using rectenna to mitigate the problem in the areas where battery constraint issues arise. So, an energy harvester is designed, optimized, fabricated, and characterized for energy harvesting and IoT applications which simply recycles radio-frequency (RF) energy at 2.4 GHz, from nearby Wi-Fi/WLAN devices and converts them to useful dc power. The physical model comprises of antenna, filters, rectifier, and so on. A rectangular patch antenna is designed and optimized to resonate at 2.4 GHz using the well-known transmission-line model while the band-pass and low-pass filters are designed using lumped components. Schottky diode (HSMS-2820) is used for rectification. The circuit is designed and fabricated using the low-cost FR4 substrate (<inline-formula> <tex-math notation=\"LaTeX\">${h}$ </tex-math></inline-formula> = 16 mm and <inline-formula> <tex-math notation=\"LaTeX\">$\\varepsilon _{r} = 4.6$ </tex-math></inline-formula>) having the fabricated dimensions of 285 mm <inline-formula> <tex-math notation=\"LaTeX\">$\\times \\,\\,90$ </tex-math></inline-formula> mm. Universal software radio peripheral and GNU Radio are employed to measure the received RF power, while similar measurements are carried out using R&S spectrum analyzer for validation. The received measured power is −64.4 dBm at the output port of the rectenna circuit. Hence, our design enables a pervasive deployment of self-operable next-generation IoT devices.",
"title": ""
},
{
"docid": "f91479717316e55152b98eec80472b12",
"text": "Language in social media is a dynamic system, constantly evolving and adapting, with words and concepts rapidly emerging, disappearing, and changing their meaning. These changes can be estimated using word representations in context, over time and across locations. A number of methods have been proposed to track these spatiotemporal changes but no general method exists to evaluate the quality of these representations. Previous work largely focused on qualitative evaluation, which we improve by proposing a set of visualizations that highlight changes in text representation over both space and time. We demonstrate usefulness of novel spatiotemporal representations to explore and characterize specific aspects of the corpus of tweets collected from European countries over a two-week period centered around the terrorist attacks in Brussels in March 2016. In addition, we quantitatively evaluate spatiotemporal representations by feeding them into a downstream classification task – event type prediction. Thus, our work is the first to provide both intrinsic (qualitative) and extrinsic (quantitative) evaluation of text representations for spatiotemporal trends.",
"title": ""
},
{
"docid": "99463a031385cbc677e441b8aee87998",
"text": "Having a parent with a mental illness can create considerable risks in the mental health and wellbeing of children. While intervention programs have been used effectively to reduce children’s psychopathology, particularly those whose parents have a specific diagnosis, little is known about the effectiveness of these early interventions for the wellbeing of children of parents who have a mental illness from a broad range of parents. Here we report on an evaluation of CHAMPS (Children And Mentally ill ParentS), a pilot intervention program offered in two formats (school holiday and after school peer support programs) to children aged 8-12 whose parents have a mental illness. The wellbeing of 69 children was evaluated at the beginning of the programs and four weeks after program completion, on instruments examining self-esteem, coping skills, connections (total, within and outside the family) and relationship problems (total, within and outside the family). Post intervention, there were significant improvements in self-esteem, coping and connections within the family, and reductions in relationship problems. The impact on children’s wellbeing differed according to the intensity of the program (consecutive days or weekly program). The results are discussed in the context of providing interventions for children whose parents have a mental illness and the implications for service provision generally.",
"title": ""
},
{
"docid": "3cf9d0c8f74248f2b150682f3b5127eb",
"text": "Signal Temporal Logic (STL) is a formalism used to rigorously specify requirements of cyberphysical systems (CPS), i.e., systems mixing digital or discrete components in interaction with a continuous environment or analog components. STL is naturally equipped with a quantitative semantics which can be used for various purposes: from assessing the robustness of a specification to guiding searches over the input and parameter space with the goal of falsifying the given property over system behaviors. Algorithms have been proposed and implemented for offline computation of such quantitative semantics, but only few methods exist for an online setting, where one would want to monitor the satisfaction of a formula during simulation. In this paper, we formalize a semantics for robust online monitoring of partial traces, i.e., traces for which there might not be enough data to decide the Boolean satisfaction (and to compute its quantitative counterpart). We propose an efficient algorithm to compute it and demonstrate its usage on two large scale real-world case studies coming from the automotive domain and from CPS education in a Massively Open Online Course (MOOC) setting. We show that savings in computationally expensive simulations far outweigh any overheads incurred by an online approach.",
"title": ""
},
{
"docid": "b0c5c8e88e9988b6548acb1c8ebb5edd",
"text": "We present a bottom-up aggregation approach to image segmentation. Beginning with an image, we execute a sequence of steps in which pixels are gradually merged to produce larger and larger regions. In each step, we consider pairs of adjacent regions and provide a probability measure to assess whether or not they should be included in the same segment. Our probabilistic formulation takes into account intensity and texture distributions in a local area around each region. It further incorporates priors based on the geometry of the regions. Finally, posteriors based on intensity and texture cues are combined using “ a mixture of experts” formulation. This probabilistic approach is integrated into a graph coarsening scheme, providing a complete hierarchical segmentation of the image. The algorithm complexity is linear in the number of the image pixels and it requires almost no user-tuned parameters. In addition, we provide a novel evaluation scheme for image segmentation algorithms, attempting to avoid human semantic considerations that are out of scope for segmentation algorithms. Using this novel evaluation scheme, we test our method and provide a comparison to several existing segmentation algorithms.",
"title": ""
},
{
"docid": "cabf420400bc46a00ee062c5d6a850a7",
"text": "In the last years, automotive systems evolved to be more and more software-intensive systems. As a result, consider able attention has been paid to establish an efficient softwa re development process of such systems, where reliability is an important criterion. Hence, model-driven development (MDD), software engineering and requirements engineering (amongst others) found their way into the systems engineering domain. However, one important aspect regarding the reliability of such systems, has been largely neglected on a holistic level: the IT security. In this paper, we introduce a potential approach for integrating IT security in the requirements engineering process of automotive software development using function net modeling.",
"title": ""
},
{
"docid": "e5e817d6cadc18d280d912fea42cdd9a",
"text": "Recent discoveries of geographical patterns in microbial distribution are undermining microbiology's exclusively ecological explanations of biogeography and their fundamental assumption that 'everything is everywhere: but the environment selects'. This statement was generally promulgated by Dutch microbiologist Martinus Wilhelm Beijerinck early in the twentieth century and specifically articulated in 1934 by his compatriot, Lourens G. M. Baas Becking. The persistence of this precept throughout twentieth-century microbiology raises a number of issues in relation to its formulation and widespread acceptance. This paper will trace the conceptual history of Beijerinck's claim that 'everything is everywhere' in relation to a more general account of its theoretical, experimental and institutional context. His principle also needs to be situated in relationship to plant and animal biogeography, which, this paper will argue, forms a continuum of thought with microbial biogeography. Finally, a brief overview of the contemporary microbiological research challenging 'everything is everywhere' reveals that philosophical issues from Beijerinck's era of microbiology still provoke intense discussion in twenty-first century investigations of microbial biogeography.",
"title": ""
},
{
"docid": "857d8003dff05b8e1ba5eeb8f6b3c14e",
"text": "Traditional static spectrum allocation policies have been to grant each wireless service exclusive usage of certain frequency bands, leaving several spectrum bands unlicensed for industrial, scientific and medical purposes. The rapid proliferation of low-cost wireless applications in unlicensed spectrum bands has resulted in spectrum scarcity among those bands. Since most applications in Wireless Sensor Networks (WSNs) utilize the unlicensed spectrum, network-wide performance of WSNs will inevitably degrade as their popularity increases. Sharing of under-utilized licensed spectrum among unlicensed devices is a promising solution to the spectrum scarcity issue. Cognitive Radio (CR) is a new paradigm in wireless communication that allows sensor nodes as the unlicensed users or Secondary Users (SUs) to detect and use the under-utilized licensed spectrum temporarily. Given that the licensed or Primary Users (PUs) are oblivious to the presence of SUs, the SUs access the licensed spectrum opportunistically without interfering the PUs, while improving their own performance. In this paper, we propose an approach to build Cognitive Radio-based Wireless Sensor Networks (CR-WSNs). We believe that CR-WSN is the next-generation WSN. Realizing that both WSNs and CR present unique challenges to the design of CR-WSNs, we provide an overview and conceptual design of WSNs from the perspective of CR. The open issues are discussed to motivate new research interests in this field. We also present our method to achieving context-awareness and intelligence, which are the key components in CR networks, to address an open issue in CR-WSN.",
"title": ""
}
] |
scidocsrr
|
aa2a216e2ccc2390b042fbb4895a5645
|
Static Analysis of Android Programs
|
[
{
"docid": "a8cb644c1a7670670299d33c1e1e53d3",
"text": "In Java, C or C++, attempts to dereference the null value result in an exception or a segmentation fault. Hence, it is important to identify those program points where this undesired behaviour might occur or prove the other program points (and possibly the entire program) safe. To that purpose, null-pointer analysis of computer programs checks or infers non-null annotations for variables and object fields. With few notable exceptions, null-pointer analyses currently use run-time checks or are incorrect or only verify manually provided annotations. In this paper, we use abstract interpretation to build and prove correct a first, flow and context-sensitive static null-pointer analysis for Java bytecode (and hence Java) which infers non-null annotations. It is based on Boolean formulas, implemented with binary decision diagrams. For better precision, it identifies instance or static fields that remain always non-null after being initialised. Our experiments show this analysis faster and more precise than the correct null-pointer analysis by Hubert, Jensen and Pichardie. Moreover, our analysis deals with exceptions, which is not the case of most others; its formulation is theoretically clean and its implementation strong and scalable. We subsequently improve that analysis by using local reasoning about fields that are not always non-null, but happen to hold a non-null value when they are accessed. This is a frequent situation, since programmers typically check a field for non-nullness before its access. We conclude with an example of use of our analyses to infer null-pointer annotations which are more precise than those that other inference tools can achieve.",
"title": ""
}
] |
[
{
"docid": "6c12755ba2580d5d9b794b9a33c0304a",
"text": "A fundamental part of conducting cross-disciplinary web science research is having useful, high-quality datasets that provide value to studies across disciplines. In this paper, we introduce a large, hand-coded corpus of online harassment data. A team of researchers collaboratively developed a codebook using grounded theory and labeled 35,000 tweets. Our resulting dataset has roughly 15% positive harassment examples and 85% negative examples. This data is useful for training machine learning models, identifying textual and linguistic features of online harassment, and for studying the nature of harassing comments and the culture of trolling.",
"title": ""
},
{
"docid": "6677149025a415e44778d1011b617c36",
"text": "In this paper controller synthesis based on standard and dynamic sliding modes for an uncertain nonlinear MIMO Three tank System is presented. Two types of sliding mode controllers are synthesized; first controller is based on standard first order sliding modes while second controller uses dynamic sliding modes. Sliding manifolds for both controllers are designed in-order to ensure finite time convergence of sliding variable for tracking the desired system trajectories. Simulation results are presented showing the performance analysis of both sliding mode controllers. Simulations are also carried out to assess the performance of dynamic sliding mode controller against parametric uncertainties / disturbances. A comparison of designed sliding mode controllers with LMI based robust H∞ controller is also discussed. The performance of dynamic sliding mode control in terms of response time, control effort and robustness of dynamic sliding mode controller is shown to be better than standard sliding mode controller and H∞ controllers.",
"title": ""
},
{
"docid": "b18ecc94c1f42567b181c49090b03d8a",
"text": "We propose a novel approach for inferring the individualized causal effects of a treatment (intervention) from observational data. Our approach conceptualizes causal inference as a multitask learning problem; we model a subject’s potential outcomes using a deep multitask network with a set of shared layers among the factual and counterfactual outcomes, and a set of outcome-specific layers. The impact of selection bias in the observational data is alleviated via a propensity-dropout regularization scheme, in which the network is thinned for every training example via a dropout probability that depends on the associated propensity score. The network is trained in alternating phases, where in each phase we use the training examples of one of the two potential outcomes (treated and control populations) to update the weights of the shared layers and the respective outcome-specific layers. Experiments conducted on data based on a real-world observational study show that our algorithm outperforms the state-of-the-art.",
"title": ""
},
{
"docid": "ced0fc1355a25aba36288d7c0a830240",
"text": "Working memory acts as a key bridge between perception, long-term memory, and action. The brain regions, connections, and neurotransmitters that underlie working memory undergo dramatic plastic changes during the life span, and in response to injury. Early life reliance on deep gray matter structures fades during adolescence as increasing reliance on prefrontal and parietal cortex accompanies the development of executive aspects of working memory. The rise and fall of working memory capacity and executive functions parallels the development and loss of neurotransmitter function in frontal cortical areas. Of the affected neurotransmitters, dopamine and acetylcholine modulate excitatory-inhibitory circuits that underlie working memory, are important for plasticity in the system, and are affected following preterm birth and adult brain injury. Pharmacological interventions to promote recovery of working memory abilities have had limited success, but hold promise if used in combination with behavioral training and brain stimulation. The intense study of working memory in a range of species, ages and following injuries has led to better understanding of the intrinsic plasticity mechanisms in the working memory system. The challenge now is to guide these mechanisms to better improve or restore working memory function.",
"title": ""
},
{
"docid": "3e44a5c966afbeabff11b54bafcefdce",
"text": "In this paper, we aim to compare empirically four initialization methods for the K-Means algorithm: random, Forgy, MacQueen and Kaufman. Although this algorithm is known for its robustness, it is widely reported in literature that its performance depends upon two key points: initial clustering and instance order. We conduct a series of experiments to draw up (in terms of mean, maximum, minimum and standard deviation) the probability distribution of the square-error values of the nal clusters returned by the K-Means algorithm independently on any initial clustering and on any instance order when each of the four initialization methods is used. The results of our experiments illustrate that the random and the Kauf-man initialization methods outperform the rest of the compared methods as they make the K-Means more eeective and more independent on initial clustering and on instance order. In addition, we compare the convergence speed of the K-Means algorithm when using each of the four initialization methods. Our results suggest that the Kaufman initialization method induces to the K-Means algorithm a more desirable behaviour with respect to the convergence speed than the random initial-ization method.",
"title": ""
},
{
"docid": "f5f70dca677752bcaa39db59988c088e",
"text": "To examine how inclusive our schools are after 25 years of educational reform, students with disabilities and their parents were asked to identify current barriers and provide suggestions for removing those barriers. Based on a series of focus group meetings, 15 students with mobility limitations (9-15 years) and 12 parents identified four categories of barriers at their schools: (a) the physical environment (e.g., narrow doorways, ramps); (b) intentional attitudinal barriers (e.g., isolation, bullying); (c) unintentional attitudinal barriers (e.g., lack of knowledge, understanding, or awareness); and (d) physical limitations (e.g., difficulty with manual dexterity). Recommendations for promoting accessibility and full participation are provided and discussed in relation to inclusive education efforts. Exceptional Children",
"title": ""
},
{
"docid": "b74818aca22974927fdcdcbf60ce239b",
"text": "We are currently observing a significant increase in the popularity of Unmanned Aerial Vehicles (UAVs), popularly also known by their generic term drones. This is not only the case for recreational UAVs, that one can acquire for a few hundred dollars, but also for more sophisticated ones, namely professional UAVs, whereby the cost can reach several thousands of dollars. These professional UAVs are known to be largely employed in sensitive missions such as monitoring of critical infrastructures and operations by the police force. Given these applications, and in contrast to what we have been seeing for the case of recreational UAVs, one might assume that professional UAVs are strongly resilient to security threats. In this demo we prove such an assumption wrong by presenting the security gaps of a professional UAV, which is used for critical operations by police forces around the world. We demonstrate how one can exploit the identified security vulnerabilities, perform a Man-in-the-Middle attack, and inject control commands to interact with the compromised UAV. In addition, we discuss appropriate countermeasures to help improving the security and resilience of professional UAVs.",
"title": ""
},
{
"docid": "a39f11e64ba8347b212b7e34fa434f32",
"text": "This paper proposes a fully distributed multiagent-based reinforcement learning method for optimal reactive power dispatch. According to the method, two agents communicate with each other only if their corresponding buses are electrically coupled. The global rewards that are required for learning are obtained with a consensus-based global information discovery algorithm, which has been demonstrated to be efficient and reliable. Based on the discovered global rewards, a distributed Q-learning algorithm is implemented to minimize the active power loss while satisfying operational constraints. The proposed method does not require accurate system model and can learn from scratch. Simulation studies with power systems of different sizes show that the method is very computationally efficient and able to provide near-optimal solutions. It can be observed that prior knowledge can significantly speed up the learning process and decrease the occurrences of undesirable disturbances. The proposed method has good potential for online implementation.",
"title": ""
},
{
"docid": "e9aea5919d3d38184fc13c10f1751293",
"text": "The distinct protein aggregates that are found in Alzheimer's, Parkinson's, Huntington's and prion diseases seem to cause these disorders. Small intermediates — soluble oligomers — in the aggregation process can confer synaptic dysfunction, whereas large, insoluble deposits might function as reservoirs of the bioactive oligomers. These emerging concepts are exemplified by Alzheimer's disease, in which amyloid β-protein oligomers adversely affect synaptic structure and plasticity. Findings in other neurodegenerative diseases indicate that a broadly similar process of neuronal dysfunction is induced by diffusible oligomers of misfolded proteins.",
"title": ""
},
{
"docid": "a98fbce4061085dda4d1cf4648d04f08",
"text": "We estimate mate preferences using a novel data set from an online dating service. The data set contains detailed information on user attributes and the decision to contact a potential mate after viewing his or her profile. This decision provides the basis for our preference estimation approach. A potential problem arises if the site users strategically shade their true preferences. We provide a simple test and a bias correction method for strategic behavior. The main findings are (i) There is no evidence for strategic behavior. (ii) Men and women have a strong preference for similarity along many (but not all) attributes. (iii) In particular, the site users display strong same-race preferences. Race preferences do not differ across users with different age, income, or education levels in the case of women, and differ only slightly in the case of men. For men, but not for women, the revealed same-race preferences correspond to the same-race preference stated in the users’ profile. (iv) There are gender differences in mate preferences; in particular, women have a stronger preference than men for income over physical attributes. ∗Note that previous versions of this paper (“What Makes You Click? – Mate Preferences and Matching Outcomes in Online Dating”) were circulated between 2004 and 2006. Any previously reported results not contained in this paper or in the companion piece Hitsch et al. (2010) did not prove to be robust and were dropped from the final paper versions. We thank Babur De los Santos, Chris Olivola, Tim Miller, and David Wood for their excellent research assistance. We are grateful to Elizabeth Bruch, Jean-Pierre Dubé, Eli Finkel, Emir Kamenica, Derek Neal, Peter Rossi, Betsey Stevenson, and Utku Ünver for comments and suggestions. Seminar participants at the 2006 AEA meetings, Boston College, the Caltech 2008 Matching Conference, the Choice Symposium in Estes Park, the Conference on Marriage and Matching at New York University 2006, the ELSE Laboratory Experiments and the Field (LEaF) Conference, Northwestern University, the 2007 SESP Preconference in Chicago, SITE 2007, the University of Pennsylvania, the 2004 QME Conference, UC Berkeley, UCLA, the University of Chicago, UCL, the University of Naples Federico II, the University of Toronto, Stanford GSB, and Yale University provided valuable comments. This research was supported by the Kilts Center of Marketing (Hitsch), a John M. Olin Junior Faculty Fellowship, and the National Science Foundation, SES-0449625 (Hortaçsu). Please address all correspondence to Hitsch ([email protected]), Hortaçsu ([email protected]), or Ariely ([email protected]).",
"title": ""
},
{
"docid": "af928cd35b6b33ce1cddbf566f63e607",
"text": "Machine Learning has been the quintessential solution for many AI problems, but learning is still heavily dependent on the specific training data. Some learning models can be incorporated with a prior knowledge in the Bayesian set up, but these learning models do not have the ability to access any organised world knowledge on demand. In this work, we propose to enhance learning models with world knowledge in the form of Knowledge Graph (KG) fact triples for Natural Language Processing (NLP) tasks. Our aim is to develop a deep learning model that can extract relevant prior support facts from knowledge graphs depending on the task using attention mechanism. We introduce a convolution-based model for learning representations of knowledge graph entity and relation clusters in order to reduce the attention space. We show that the proposed method is highly scalable to the amount of prior information that has to be processed and can be applied to any generic NLP task. Using this method we show significant improvement in performance for text classification with News20, DBPedia datasets and natural language inference with Stanford Natural Language Inference (SNLI) dataset. We also demonstrate that a deep learning model can be trained well with substantially less amount of labeled training data, when it has access to organised world knowledge in the form of knowledge graph.",
"title": ""
},
{
"docid": "da9b9a32db674e5f6366f6b9e2c4ee10",
"text": "We introduce a data-driven approach to aid the repairing and conservation of archaeological objects: ORGAN, an object reconstruction generative adversarial network (GAN). By using an encoder-decoder 3D deep neural network on a GAN architecture, and combining two loss objectives: a completion loss and an Improved Wasserstein GAN loss, we can train a network to effectively predict the missing geometry of damaged objects. As archaeological objects can greatly differ between them, the network is conditioned on a variable, which can be a culture, a region or any metadata of the object. In our results, we show that our method can recover most of the information from damaged objects, even in cases where more than half of the voxels are missing, without producing many errors.",
"title": ""
},
{
"docid": "5759152f6e9a9cb1e6c72857e5b3ec54",
"text": "Deep multitask networks, in which one neural network produces multiple predictive outputs, can offer better speed and performance than their single-task counterparts but are challenging to train properly. We present a gradient normalization (GradNorm) algorithm that automatically balances training in deep multitask models by dynamically tuning gradient magnitudes. We show that for various network architectures, for both regression and classification tasks, and on both synthetic and real datasets, GradNorm improves accuracy and reduces overfitting across multiple tasks when compared to single-task networks, static baselines, and other adaptive multitask loss balancing techniques. GradNorm also matches or surpasses the performance of exhaustive grid search methods, despite only involving a single asymmetry hyperparameter α. Thus, what was once a tedious search process that incurred exponentially more compute for each task added can now be accomplished within a few training runs, irrespective of the number of tasks. Ultimately, we will demonstrate that gradient manipulation affords us great control over the training dynamics of multitask networks and may be one of the keys to unlocking the potential of multitask learning.",
"title": ""
},
{
"docid": "4b570eb16d263b2df0a8703e9135f49c",
"text": "ions. They also presume that consumers carefully calculate the give and get components of value, an assumption that did not hold true for most consumers in the exploratory study. Price as a Quality Indicator Most experimental studies related to quality have focused on price as the key extrinsic quality signal. As suggested in the propositions, price is but one of several potentially useful extrinsic cues; brand name or package may be equally or more important, especially in packaged goods. Further, evidence of a generalized price-perceived quality relationship is inconclusive. Quality research may benefit from a de-emphasis on price as the main extrinsic quality indicator. Inclusion of other important indicators, as well as identification of situations in which each of those indicators is important, may provide more interesting and useful answers about the extrinsic signals consumers use. Management Implications An understanding of what quality and value mean to consumers offers the promise of improving brand positions through more precise market analysis and segmentation, product planning, promotion, and pricing strategy. The model presented here suggests the following strategies that can be implemented to understand and capitalize on brand quality and value. Close the Quality Perception Gap Though managers increasingly acknowledge the importance of quality, many continue to define and measure it from the company's perspective. Closing the gap between objective and perceived quality requires that the company view quality the way the consumer does. Research that investigates which cues are important and how consumers form impressions of qualConsumer Perceptions of Price, Quality, and Value / 17 ity based on those technical, objective cues is necessary. Companies also may benefit from research that identifies the abstract dimensions of quality desired by consumers in a product class. Identify Key Intrinsic and Extrinsic Attribute",
"title": ""
},
{
"docid": "3571e2646d76d5f550075952cb75ba30",
"text": "Traditional simultaneous localization and mapping (SLAM) algorithms have been used to great effect in flat, indoor environments such as corridors and offices. We demonstrate that with a few augmentations, existing 2D SLAM technology can be extended to perform full 3D SLAM in less benign, outdoor, undulating environments. In particular, we use data acquired with a 3D laser range finder. We use a simple segmentation algorithm to separate the data stream into distinct point clouds, each referenced to a vehicle position. The SLAM technique we then adopt inherits much from 2D delayed state (or scan-matching) SLAM in that the state vector is an ever growing stack of past vehicle positions and inter-scan registrations are used to form measurements between them. The registration algorithm used is a novel combination of previous techniques carefully balancing the need for maximally wide convergence basins, robustness and speed. In addition, we introduce a novel post-registration classification technique to detect matches which have converged to incorrect local minima",
"title": ""
},
{
"docid": "74af5749afb36c63dbf38bb8118807c9",
"text": "Modern mobile platforms like Android enable applications to read aggregate power usage on the phone. This information is considered harmless and reading it requires no user permission or notification. We show that by simply reading the phone’s aggregate power consumption over a period of a few minutes an application can learn information about the user’s location. Aggregate phone power consumption data is extremely noisy due to the multitude of components and applications that simultaneously consume power. Nevertheless, by using machine learning algorithms we are able to successfully infer the phone’s location. We discuss several ways in which this privacy leak can be remedied.",
"title": ""
},
{
"docid": "0a7f93e98e1d256ea6a4400f33753d6a",
"text": "In this paper, we investigate safe and efficient map-building strategies for a mobile robot with imperfect control and sensing. In the implementation, a robot equipped with a range sensor builds a polygonal map (layout) of a previously unknown indoor environment. The robot explores the environment and builds the map concurrently by patching together the local models acquired by the sensor into a global map. A well-studied and related problem is the simultaneous localization and mapping (SLAM) problem, where the goal is to integrate the information collected during navigation into the most accurate map possible. However, SLAM does not address the sensorplacement portion of the map-building task. That is, given the map built so far, where should the robot go next? This is the main question addressed in this paper. Concretely, an algorithm is proposed to guide the robot through a series of “good” positions, where “good” refers to the expected amount and quality of the information that will be revealed at each new location. This is similar to the nextbest-view (NBV) problem studied in computer vision and graphics. However, in mobile robotics the problem is complicated by several issues, two of which are particularly crucial. One is to achieve safe navigation despite an incomplete knowledge of the environment and sensor limitations (e.g., in range and incidence). The other issue is the need to ensure sufficient overlap between each new local model and the current map, in order to allow registration of successive views under positioning uncertainties inherent to mobile robots. To address both issues in a coherent framework, in this paper we introduce the concept of a safe region, defined as the largest region that is guaranteed to be free of obstacles given the sensor readings made so far. The construction of a safe region takes sensor limitations into account. In this paper we also describe an NBV algorithm that uses the safe-region concept to select the next robot position at each step. The International Journal of Robotics Research Vol. 21, No. 10–11, October-November 2002, pp. 829-848, ©2002 Sage Publications The new position is chosen within the safe region in order to maximize the expected gain of information under the constraint that the local model at this new position must have a minimal overlap with the current global map. In the future, NBV and SLAM algorithms should reinforce each other. While a SLAM algorithm builds a map by making the best use of the available sensory data, an NBV algorithm, such as that proposed here, guides the navigation of the robot through positions selected to provide the best sensory inputs. KEY WORDS—next-best view, safe region, online exploration, incidence constraints, map building",
"title": ""
},
{
"docid": "3192a76e421d37fbe8619a3bc01fb244",
"text": "• Develop and implement an internally consistent set of goals and functional policies (this is, a solution to the agency problem) • These internally consistent set of goals and policies aligns the firm’s strengths and weaknesses with external (industry) opportunities and threats (SWOT) in a dynamic balance • The firm’s strategy has to be concerned with the exploitation of its “distinctive competences” (early reference to RBV)",
"title": ""
},
{
"docid": "4b30695ba1989cb6770a38afca685aaa",
"text": "Prior literature on search advertising primarily assumes that search engines know advertisers’ click-through rates, the probability that a consumer clicks on an advertiser’s ad. This information, however, is not available when a new advertiser starts search advertising for the first time. In particular, a new advertiser’s click-through rate can be learned only if the advertiser’s ad is shown to enough consumers, i.e., the advertiser wins enough auctions. Since search engines use advertisers’ expected click-through rates when calculating payments and allocations, the lack of information about a new advertiser can affect new and existing advertisers’ bidding strategies. In this paper, we use a game theory model to analyze advertisers’ strategies, their payoffs, and the search engine’s revenue when a new advertiser joins the market. Our results indicate that a new advertiser should always bid higher (sometimes above its valuation) when it starts search advertising. However, the strategy of an existing advertiser, i.e., an incumbent, depends on its valuation and click-through rate. A strong incumbent increases its bid to prevent the search engine from learning the new advertiser’s clickthrough rate, whereas a weak incumbent decreases its bid to facilitate the learning process. Interestingly, we find that, under certain conditions, the search engine benefits from not knowing the new advertiser’s click-through rate because its ignorance could induce the advertisers to bid more aggressively. Nonetheless, the search engine’s revenue sometimes decreases because of this lack of information, particularly, when the incumbent is sufficiently strong. We show that the search engine can mitigate this loss, and improve its total profit, by offering free advertising credit to new advertisers.",
"title": ""
}
] |
scidocsrr
|
2fc1c3d9d5b302e82ab59834f7fedb89
|
Artificial Intelligence in Hypertension Diagnosis : A Review
|
[
{
"docid": "ba850aaec32b6ddc6eba23973d1e1608",
"text": "Data mining techniques have been widely used in clinical decision support systems for prediction and diagnosis of various diseases with good accuracy. These techniques have been very effective in designing clinical support systems because of their ability to discover hidden patterns and relationships in medical data. One of the most important applications of such systems is in diagnosis of heart diseases because it is one of the leading causes of deaths all over the world. Almost all systems that predict heart diseases use clinical dataset having parameters and inputs from complex tests conducted in labs. None of the system predicts heart diseases based on risk factors such as age, family history, diabetes, hypertension, high cholesterol, tobacco smoking, alcohol intake, obesity or physical inactivity, etc. Heart disease patients have lot of these visible risk factors in common which can be used very effectively for diagnosis. System based on such risk factors would not only help medical professionals but it would give patients a warning about the probable presence of heart disease even before he visits a hospital or goes for costly medical checkups. Hence this paper presents a technique for prediction of heart disease using major risk factors. This technique involves two most successful data mining tools, neural networks and genetic algorithms. The hybrid system implemented uses the global optimization advantage of genetic algorithm for initialization of neural network weights. The learning is fast, more stable and accurate as compared to back propagation. The system was implemented in Matlab and predicts the risk of heart disease with an accuracy of 89%.",
"title": ""
}
] |
[
{
"docid": "94aa0777f80aa25ec854f159dc3e0706",
"text": "To develop a knowledge-aware recommender system, a key data problem is how we can obtain rich and structured knowledge information for recommender system (RS) items. Existing datasets or methods either use side information from original recommender systems (containing very few kinds of useful information) or utilize private knowledge base (KB). In this paper, we present the first public linked KB dataset for recommender systems, named KB4Rec v1.0, which has linked three widely used RS datasets with the popular KB Freebase. Based on our linked dataset, we first preform some interesting qualitative analysis experiments, in which we discuss the effect of two important factors (i.e., popularity and recency) on whether a RS item can be linked to a KB entity. Finally, we present the comparison of several knowledge-aware recommendation algorithms on our linked dataset.",
"title": ""
},
{
"docid": "19222de066550a2d27fc81b12c020d51",
"text": "Our purpose in this research is to develop a methodology to automatically and efficiently classify web images as UML static diagrams, and to produce a computer tool that implements this function. The tool receives as input a bitmap file (in different formats) and tells whether the image corresponds to a diagram. The tool does not require that the images are explicitly or implicitly tagged as UML diagrams. The tool extracts graphical characteristics from each image (such as grayscale histogram, color histogram and elementary geometric forms) and uses a combination of rules to classify it. The rules are obtained with machine learning techniques (rule induction) from a sample of 19000 web images manually classified by experts. In this work we do not consider the textual contents of the images.",
"title": ""
},
{
"docid": "f4a0738d814e540f7c208ab1e3666fb7",
"text": "In this paper, we analyze a generic algorithm scheme for sequential global optimization using Gaussian processes. The upper bounds we derive on the cumulative regret for this generic algorithm improve by an exponential factor the previously known bounds for algorithms like GP-UCB. We also introduce the novel Gaussian Process Mutual Information algorithm (GP-MI), which significantly improves further these upper bounds for the cumulative regret. We confirm the efficiency of this algorithm on synthetic and real tasks against the natural competitor, GP-UCB, and also the Expected Improvement heuristic. Preprint for the 31st International Conference on Machine Learning (ICML 2014) 1 ar X iv :1 31 1. 48 25 v3 [ st at .M L ] 8 J un 2 01 5 Erratum After the publication of our article, we found an error in the proof of Lemma 1 which invalidates the main theorem. It appears that the information given to the algorithm is not sufficient for the main theorem to hold true. The theoretical guarantees would remain valid in a setting where the algorithm observes the instantaneous regret instead of noisy samples of the unknown function. We describe in this page the mistake and its consequences. Let f : X → R be the unknown function to be optimized, which is a sample from a Gaussian process. Let’s fix x, x1, . . . , xT ∈ X and the observations yt = f(xt)+ t where the noise variables t are independent Gaussian noise N (0, σ). We define the instantaneous regret rt = f(x?)− f(xt) and, MT = T ∑",
"title": ""
},
{
"docid": "0add9f22db24859da50e1a64d14017b9",
"text": "Light field imaging offers powerful new capabilities through sophisticated digital processing techniques that are tightly merged with unconventional optical designs. This combination of imaging technology and computation necessitates a fundamentally different view of the optical properties of imaging systems and poses new challenges for the traditional signal and image processing domains. In this article, we aim to provide a comprehensive review of the considerations involved and the difficulties encountered in working with light field data.",
"title": ""
},
{
"docid": "f4d44bbbb5bc6ff2a8128ba50b4c8aaa",
"text": "In order to obtain a temperature range of a pasteurization process, a good controller that can reject the unidentified disturbance which may occur at any time is needed. In this paper, control structure of both multi-loop and cascade controllers are designed for a pasteurization mini plant Armfied PCT23 MKIL The control algorithm uses proportional-integral-derivative (PID) controller. Some tuning methods are simulated to obtain the best controller performance. The two controllers are simulated and tested on real plant and their performances are compared. From experiments, it is found that the multiloop controller has a superior set point tracking performance whereas the cascade controller is better for disturbance rejection.",
"title": ""
},
{
"docid": "e35f6f4e7b6589e992ceeccb4d25c9f1",
"text": "One of the key success factors of lending organizations in general and banks in particular is the assessment of borrower credit worthiness in advance during the credit evaluation process. Credit scoring models have been applied by many researchers to improve the process of assessing credit worthiness by differentiating between prospective loans on the basis of the likelihood of repayment. Thus, credit scoring is a very typical Data Mining (DM) classification problem. Many traditional statistical and modern computational intelligence techniques have been presented in the literature to tackle this problem. The main objective of this paper is to describe an experiment of building suitable Credit Scoring Models (CSMs) for the Sudanese banks. Two commonly discussed data mining classification techniques are chosen in this paper namely: Decision Tree (DT) and Artificial Neural Networks (ANN). In addition Genetic Algorithms (GA) and Principal Component Analysis (PCA) are also applied as feature selection techniques. In addition to a Sudanese credit dataset, German credit dataset is also used to evaluate these techniques. The results reveal that ANN models outperform DT models in most cases. Using GA as a feature selection is more effective than PCA technique. The highest accuracy of German data set (80.67%) and Sudanese credit scoring models (69.74%) are achieved by a hybrid GA-ANN model. Although DT and its hybrid models (PCA-DT, GA-DT) are outperformed by ANN and its hybrid models (PCA-ANN, GA-ANN) in most cases, they produced interpretable loan granting decisions.",
"title": ""
},
{
"docid": "9ac16df20364b0ae28d3164bbfb08654",
"text": "Complex event detection is an advanced form of data stream processing where the stream(s) are scrutinized to identify given event patterns. The challenge for many complex event processing (CEP) systems is to be able to evaluate event patterns on high-volume data streams while adhering to realtime constraints. To solve this problem, in this paper we present a hardware based complex event detection system implemented on field-programmable gate arrays (FPGAs). By inserting the FPGA directly into the data path between the network interface and the CPU, our solution can detect complex events at gigabit wire speed with constant and fully predictable latency, independently of network load, packet size or data distribution. This is a significant improvement over CPU based systems and an architectural approach that opens up interesting opportunities for hybrid stream engines that combine the flexibility of the CPU with the parallelism and processing power of FPGAs.",
"title": ""
},
{
"docid": "b63ef33cde2d725944f2fa249e48b9f8",
"text": "We introduce eyeglasses that present haptic feedback when using gaze gestures for input. The glasses utilize vibrotactile actuators to provide gentle stimulation to three locations on the user's head. We describe two initial user studies that were conducted to evaluate the easiness of recognizing feedback locations and participants' preferences for combining the feedback with gaze gestures. The results showed that feedback from a single actuator was the easiest to recognize and also preferred when used with gaze gestures. We conclude by presenting future use scenarios that could benefit from gaze gestures and haptic feedback.",
"title": ""
},
{
"docid": "a9e454767906f4ced5876ee73f3a4671",
"text": "Smart solutions for water quality monitoring are gaining importance with advancement in communication technology. This paper presents a detailed overview of recent works carried out in the field of smart water quality monitoring. Also, a power efficient, simpler solution for in-pipe water quality monitoring based on Internet of Things technology is presented. The model developed is used for testing water samples and the data uploaded over the Internet are analyzed. The system also provides an alert to a remote user, when there is a deviation of water quality parameters from the pre-defined set of standard values.",
"title": ""
},
{
"docid": "80ae8494ba7ebc70e9454d68f4dc5cbd",
"text": "Advanced deep learning methods have been developed to conduct prostate MR volume segmentation in either a 2D or 3D fully convolutional manner. However, 2D methods tend to have limited segmentation performance, since large amounts of spatial information of prostate volumes are discarded during the slice-by-slice segmentation process; and 3D methods also have room for improvement, since they use isotropic kernels to perform 3D convolutions whereas most prostate MR volumes have anisotropic spatial resolution. Besides, the fully convolutional structural methods achieve good performance for localization issues but neglect the per-voxel classification for segmentation tasks. In this paper, we propose a 3D Global Convolutional Adversarial Network (3D GCA-Net) to address efficient prostate MR volume segmentation. We first design a 3D ResNet encoder to extract 3D features from prostate scans, and then develop the decoder, which is composed of a multi-scale 3D global convolutional block and a 3D boundary refinement block, to address the classification and localization issues simultaneously for volumetric segmentation. Additionally, we combine the encoder-decoder segmentation network with an adversarial network in the training phrase to enforce the contiguity of long-range spatial predictions. Throughout the proposed model, we use anisotropic convolutional processing for better feature learning on prostate MR scans. We evaluated our 3D GCA-Net model on two public prostate MR datasets and achieved state-of-the-art performances.",
"title": ""
},
{
"docid": "6816bb15dba873244306f22207525bee",
"text": "Imbalance suggests a feeling of dynamism and movement in static objects. It is therefore not surprising that many 3D models stand in impossibly balanced configurations. As long as the models remain in a computer this is of no consequence: the laws of physics do not apply. However, fabrication through 3D printing breaks the illusion: printed models topple instead of standing as initially intended. We propose to assist users in producing novel, properly balanced designs by interactively deforming an existing model. We formulate balance optimization as an energy minimization, improving stability by modifying the volume of the object, while preserving its surface details. This takes place during interactive editing: the user cooperates with our optimizer towards the end result. We demonstrate our method on a variety of models. With our technique, users can produce fabricated objects that stand in one or more surprising poses without requiring glue or heavy pedestals.",
"title": ""
},
{
"docid": "de05e649c6e77278b69665df3583d3d8",
"text": "This context-aware emotion-based model can help design intelligent agents for group decision making processes. Experiments show that agents with emotional awareness reach agreement more quickly than those without it.",
"title": ""
},
{
"docid": "e9bc802e8ce6a823526084c82aa89c95",
"text": "Non-orthogonal multiple access (NOMA) is a promising radio access technique for further cellular enhancements toward 5G. Single-user multiple-input multiple-output (SU-MIMO) is one of the key technologies in LTE /LTE-Advanced systems. Thus, it is of great interest to study how to efficiently and effectively combine NOMA and SU-MIMO techniques together for further system performance improvement. This paper investigates the combination of NOMA with open-loop and closed-loop SU-MIMO. The key issues involved in the combination are presented and discussed, including scheduling algorithm, successive interference canceller (SIC) order determination, transmission power assignment and feedback design. The performances of NOMA with SU-MIMO are investigated by system-level simulations with very practical assumptions. Simulation results show that compared to orthogonal multiple access system, NOMA can achieve large performance gains both open-loop and closed-loop SU-MIMO, which are about 23% for cell average throughput and 33% for cell-edge user throughput.",
"title": ""
},
{
"docid": "1dd15eb76573cb6362e9efde9f5631e5",
"text": "Research on API migration and language conversion can be informed by empirical data about API usage. For instance, such data may help with designing and defending mapping rules for API migration in terms of relevance and applicability. We describe an approach to large-scale API-usage analysis of open-source Java projects, which we also instantiate for the Source-Forge open-source repository in a certain way. Our approach covers checkout, building, tagging with metadata, fact extraction, analysis, and synthesis with a large degree of automation. Fact extraction relies on resolved (type-checked) ASTs. We describe a few examples of API-usage analysis; they are motivated by API migration. These examples are concerned with analysing API footprint (such as the numbers of distinct APIs used in a project), API coverage (such as the percentage of methods of an API used in a corpus), and framework-like vs. class-library-like usage.",
"title": ""
},
{
"docid": "77c922c3d2867fa7081a9f18ae0b1151",
"text": "The failure of critical components in industrial systems may have negative consequences on the availability, the productivity, the security and the environment. To avoid such situations, the health condition of the physical system, and particularly of its critical components, can be constantly assessed by using the monitoring data to perform on-line system diagnostics and prognostics. The present paper is a contribution on the assessment of the health condition of a Computer Numerical Control (CNC) tool machine and the estimation of its Remaining Useful Life (RUL). The proposed method relies on two main phases: an off-line phase and an on-line phase. During the first phase, the raw data provided by the sensors are processed to extract reliable features. These latter are used as inputs of learning algorithms in order to generate the models that represent the wear’s behavior of the cutting tool. Then, in the second phase, which is an assessment one, the constructed models are exploited to identify the tool’s current health state, predict its RUL and the associated confidence bounds. The proposed method is applied on a benchmark of condition monitoring data gathered during several cuts of a CNC tool. Simulation results are obtained and discussed at the end of the paper.",
"title": ""
},
{
"docid": "2b51fdb5800a95b31fa5c2cff493ad80",
"text": "An auditory-based feature extraction algorithm is presented. We name the new features as cochlear filter cepstral coefficients (CFCCs) which are defined based on a recently developed auditory transform (AT) plus a set of modules to emulate the signal processing functions in the cochlea. The CFCC features are applied to a speaker identification task to address the acoustic mismatch problem between training and testing environments. Usually, the performance of acoustic models trained in clean speech drops significantly when tested in noisy speech. The CFCC features have shown strong robustness in this kind of situation. In our experiments, the CFCC features consistently perform better than the baseline MFCC features under all three mismatched testing conditions-white noise, car noise, and babble noise. For example, in clean conditions, both MFCC and CFCC features perform similarly, over 96%, but when the signal-to-noise ratio (SNR) of the input signal is 6 dB, the accuracy of the MFCC features drops to 41.2%, while the CFCC features still achieve an accuracy of 88.3%. The proposed CFCC features also compare favorably to perceptual linear predictive (PLP) and RASTA-PLP features. The CFCC features consistently perform much better than PLP. Under white noise, the CFCC features are significantly better than RASTA-PLP, while under car and babble noise, the CFCC features provide similar performances to RASTA-PLP.",
"title": ""
},
{
"docid": "aba7cb0f5f50a062c42b6b51457eb363",
"text": "Nowadays, there is increasing interest in the development of teamwork skills in the educational context. This growing interest is motivated by its pedagogical effectiveness and the fact that, in labour contexts, enterprises organize their employees in teams to carry out complex projects. Despite its crucial importance in the classroom and industry, there is a lack of support for the team formation process. Not only do many factors influence team performance, but the problem becomes exponentially costly if teams are to be optimized. In this article, we propose a tool whose aim it is to cover such a gap. It combines artificial intelligence techniques such as coalition structure generation, Bayesian learning, and Belbin’s role theory to facilitate the generation of working groups in an educational context. This tool improves current state of the art proposals in three ways: i) it takes into account the feedback of other teammates in order to establish the most predominant role of a student instead of self-perception questionnaires; ii) it handles uncertainty with regard to each student’s predominant team role; iii) it is iterative since it considers information from several interactions in order to improve the estimation of role assignments. We tested the performance of the proposed tool in an experiment involving students that took part in three different team activities. The experiments suggest that the proposed tool is able to improve different teamwork aspects such as team dynamics and student satisfaction.",
"title": ""
},
{
"docid": "c14da39ea48b06bfb01c6193658df163",
"text": "We present FingerPad, a nail-mounted device that turns the tip of the index finger into a touchpad, allowing private and subtle interaction while on the move. FingerPad enables touch input using magnetic tracking, by adding a Hall sensor grid on the index fingernail, and a magnet on the thumbnail. Since it permits input through the pinch gesture, FingerPad is suitable for private use because the movements of the fingers in a pinch are subtle and are naturally hidden by the hand. Functionally, FingerPad resembles a touchpad, and also allows for eyes-free use. Additionally, since the necessary devices are attached to the nails, FingerPad preserves natural haptic feedback without affecting the native function of the fingertips. Through user study, we analyze the three design factors, namely posture, commitment method and target size, to assess the design of the FingerPad. Though the results show some trade-off among the factors, generally participants achieve 93% accuracy for very small targets (1.2mm-width) in the seated condition, and 92% accuracy for 2.5mm-width targets in the walking condition.",
"title": ""
},
{
"docid": "61ad35eaee012d8c1bddcaeee082fa22",
"text": "For realistic simulation it is necessary to thoroughly define and describe light-source characteristics¿especially the light-source geometry and the luminous intensity distribution.",
"title": ""
}
] |
scidocsrr
|
c17846ea6c9c2f0ac8c1637b7c103d60
|
Haptic feedback in mixed-reality environment
|
[
{
"docid": "d2f36cc750703f5bbec2ea3ef4542902",
"text": "ixed reality (MR) is a kind of virtual reality (VR) but a broader concept than augmented reality (AR), which augments the real world with synthetic electronic data. On the opposite side, there is a term, augmented virtuality (AV), which enhances or augments the virtual environment (VE) with data from the real world. Mixed reality covers a continuum from AR to AV. This concept embraces the definition of MR stated by Paul Milgram. 1 We participated in the Key Technology Research Project on Mixed Reality Systems (MR Project) in Japan. The Japanese government and Canon funded the Mixed Reality Systems Laboratory (MR Lab) and launched it in January 1997. We completed this national project in March 2001. At the end of the MR Project, an event called MiRai-01 (mirai means future in Japanese) was held at Yokohama, Japan, to demonstrate this emerging technology all over the world. This event was held in conjunction with two international conferences, IEEE Virtual Reality 2001 and the Second International Symposium on Mixed Reality (ISMR) and aggregated about 3,000 visitors for two days. This project aimed to produce an innovative information technology that could be used in the first decade of the 21st century while expanding the limitations of traditional VR technology. The basic policy we maintained throughout this project was to emphasize a pragmatic system development rather than a theory and to make such a system always available to people. Since MR is an advanced form of VR, the MR system inherits a VR char-acteristic—users can experience the world of MR interactively. According to this policy, we tried to make the system work in real time. Then, we enhanced each of our systems in their response speed and image quality in real time to increase user satisfaction. We describe the aim and research themes of the MR Project in Tamura et al. 2 To develop MR systems along this policy, we studied the fundamental problems of AR and AV and developed several methods to solve them in addition to system development issues. For example, we created a new image-based rendering method for AV systems, hybrid registration methods, and new types of see-through head-mounted displays (ST-HMDs) for AR systems. Three universities in Japan—University of Tokyo (Michi-taka Hirose), University of Tsukuba (Yuichic Ohta), and Hokkaido University (Tohru Ifukube)—collaborated with us to study the broad research area of MR. The side-bar, \" Four Types of MR Visual Simulation, …",
"title": ""
}
] |
[
{
"docid": "b5e0faba5be394523d10a130289514c2",
"text": "Child neglect results from either acts of omission or of commission. Fatalities from neglect account for 30% to 40% of deaths caused by child maltreatment. Deaths may occur from failure to provide the basic needs of infancy such as food or medical care. Medical care may also be withheld because of parental religious beliefs. Inadequate supervision may contribute to a child's injury or death through adverse events involving drowning, fires, and firearms. Recognizing the factors contributing to a child's death is facilitated by the action of multidisciplinary child death review teams. As with other forms of child maltreatment, prevention and early intervention strategies are needed to minimize the risk of injury and death to children.",
"title": ""
},
{
"docid": "36f960b37e7478d8ce9d41d61195f83a",
"text": "An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives au explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, sphericat-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than sphericalinterpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamformmg and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method.",
"title": ""
},
{
"docid": "01b73e9e8dbaf360baad38b63e5eae82",
"text": "Received: 29 September 2009 Revised: 19 April 2010 2nd Revision: 5 July 2010 3rd Revision: 30 November 2010 Accepted: 8 December 2010 Abstract Throughout the world, sensitive personal information is now protected by regulatory requirements that have translated into significant new compliance oversight responsibilities for IT managers who have a legal mandate to ensure that individual employees are adequately prepared and motivated to observe policies and procedures designed to ensure compliance. This research project investigates the antecedents of information privacy policy compliance efficacy by individuals. Using Health Insurance Portability and Accountability Act compliance within the healthcare industry as a practical proxy for general organizational privacy policy compliance, the results of this survey of 234 healthcare professionals indicate that certain social conditions within the organizational setting (referred to as external cues and comprising situational support, verbal persuasion, and vicarious experience) contribute to an informal learning process. This process is distinct from the formal compliance training procedures and is shown to influence employee perceptions of efficacy to engage in compliance activities, which contributes to behavioural intention to comply with information privacy policies. Implications for managers and researchers are discussed. European Journal of Information Systems (2011) 20, 267–284. doi:10.1057/ejis.2010.72; published online 25 January 2011",
"title": ""
},
{
"docid": "42bf428e3c6a4b3c4cb46a2735de872d",
"text": "We have developed a low cost software radio based platform for monitoring EPC Gen 2 RFID traffic. The Gen 2 standard allows for a range of PHY layer configurations and does not specify exactly how to compose protocol messages to inventory tags. This has made it difficult to know how well the standard works, and how it is implemented in practice. Our platform provides much needed visibility into Gen 2 systems by capturing reader transmissions using the USRP2 and decoding them in real-time using software we have developed and released to the public. In essence, our platform delivers much of the functionality of expensive (< $50,000) conformance testing products, with greater extensibility at a small fraction of the cost. In this paper, we present the design and implementation of the platform and evaluate its effectiveness, showing that it has better than 99% accuracy up to 3 meters. We then use the platform to study a commercial RFID reader, showing how the Gen 2 standard is realized, and indicate avenues for research at both the PHY and MAC layers.",
"title": ""
},
{
"docid": "ad1cf5892f7737944ba23cd2e44a7150",
"text": "The ‘blockchain’ is the core mechanism for the Bitcoin digital payment system. It embraces a set of inter-related technologies: the blockchain itself as a distributed record of digital events, the distributed consensus method to agree whether a new block is legitimate, automated smart contracts, and the data structure associated with each block. We propose a permanent distributed record of intellectual effort and associated reputational reward, based on the blockchain that instantiates and democratises educational reputation beyond the academic community. We are undertaking initial trials of a private blockchain or storing educational records, drawing also on our previous research into reputation management for educational systems.",
"title": ""
},
{
"docid": "d3dde75d07ad4ed79ff1da2c3a601e1d",
"text": "In open trials, 1-Hz repetitive transcranial magnetic stimulation (rTMS) to the supplementary motor area (SMA) improved symptoms and normalized cortical hyper-excitability of patients with obsessive-compulsive disorder (OCD). Here we present the results of a randomized sham-controlled double-blind study. Medication-resistant OCD patients (n=21) were assigned 4 wk either active or sham rTMS to the SMA bilaterally. rTMS parameters consisted of 1200 pulses/d, at 1 Hz and 100% of motor threshold (MT). Eighteen patients completed the study. Response to treatment was defined as a > or = 25% decrease on the Yale-Brown Obsessive Compulsive Scale (YBOCS). Non-responders to sham and responders to active or sham rTMS were offered four additional weeks of open active rTMS. After 4 wk, the response rate in the completer sample was 67% (6/9) with active and 22% (2/9) with sham rTMS. At 4 wk, patients receiving active rTMS showed on average a 25% reduction in the YBOCS compared to a 12% reduction in those receiving sham. In those who received 8-wk active rTMS, OCD symptoms improved from 28.2+/-5.8 to 14.5+/-3.6. In patients randomized to active rTMS, MT measures on the right hemisphere increased significantly over time. At the end of 4-wk rTMS the abnormal hemispheric laterality found in the group randomized to active rTMS normalized. The results of the first randomized sham-controlled trial of SMA stimulation in the treatment of resistant OCD support further investigation into the potential therapeutic applications of rTMS in this disabling condition.",
"title": ""
},
{
"docid": "54d3d5707e50b979688f7f030770611d",
"text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.",
"title": ""
},
{
"docid": "df1c6a5325dae7159b5bdf5dae65046d",
"text": "Researchers from a wide range of management areas agree that conflicts are an important part of organizational life and that their study is important. Yet, interpersonal conflict is a neglected topic in information system development (ISD). Based on definitional properties of interpersonal conflict identified in the management and organizational behavior literatures, this paper presents a model of how individuals participating in ISD projects perceive conflict and its influence on ISD outcomes. Questionnaire data was obtained from 265 IS staff (main sample) and 272 users (confirmatory sample) working on 162 ISD projects. Results indicated that the construct of interpersonal conflict was reflected by three key dimensions: disagreement, interference, and negative emotion. While conflict management was found to have positive effects on ISD outcomes, it did not substantially mitigate the negative effects of interpersonal conflict on these outcomes. In other words, the impact of interpersonal conflict was perceived to be negative, regardless of how it was managed or resolved.",
"title": ""
},
{
"docid": "5ef49933bc344b76907c271bd832cff0",
"text": "Because music conveys and evokes feelings, a wealth of research has been performed on music emotion recognition. Previous research has shown that musical mood is linked to features based on rhythm, timbre, spectrum and lyrics. For example, sad music correlates with slow tempo, while happy music is generally faster. However, only limited success has been obtained in learning automatic classifiers of emotion in music. In this paper, we collect a ground truth data set of 2904 songs that have been tagged with one of the four words “happy”, “sad”, “angry” and “relaxed”, on the Last.FM web site. An excerpt of the audio is then retrieved from 7Digital.com, and various sets of audio features are extracted using standard algorithms. Two classifiers are trained using support vector machines with the polynomial and radial basis function kernels, and these are tested with 10-fold cross validation. Our results show that spectral features outperform those based on rhythm, dynamics, and, to a lesser extent, harmony. We also find that the polynomial kernel gives better results than the radial basis function, and that the fusion of different feature sets does not always lead to improved classification.",
"title": ""
},
{
"docid": "289b94393191d793e3f4b79787f61d7d",
"text": "Plug-in hybrid electric vehicles (PHEVs) will play a vital role in future sustainable transportation systems due to their potential in terms of energy security, decreased environmental impact, improved fuel economy, and better performance. Moreover, new regulations have been established to improve the collective gas mileage, cut greenhouse gas emissions, and reduce dependence on foreign oil. This paper primarily focuses on two major thrust areas of PHEVs. First, it introduces a grid-friendly bidirectional alternating current/direct current ac/dc-dc/ac rectifier/inverter for facilitating vehicle-to-grid (V2G) integration of PHEVs. Second, it presents an integrated bidirectional noninverted buck-boost converter that interfaces the energy storage device of the PHEV to the dc link in both grid-connected and driving modes. The proposed bidirectional converter has minimal grid-level disruptions in terms of power factor and total harmonic distortion, with less switching noise. The integrated bidirectional dc/dc converter assists the grid interface converter to track the charge/discharge power of the PHEV battery. In addition, while driving, the dc/dc converter provides a regulated dc link voltage to the motor drive and captures the braking energy during regenerative braking.",
"title": ""
},
{
"docid": "cc2e24cd04212647f1c29482aa12910d",
"text": "A number of surveillance scenarios require the detection and tracking of people. Although person detection and counting systems are commercially available today, there is need for further research to address the challenges of real world scenarios. The focus of this work is the segmentation of groups of people into individuals. One relevant application of this algorithm is people counting. Experiments document that the presented approach leads to robust people counts.",
"title": ""
},
{
"docid": "2c9e59fbd7d6dd7254c9a055e6e789ca",
"text": "In this thesis we address the problem of perception, modeling, and use of context information from body-worn sensors for wearable computers. A context model is an abstraction of the user’s situation that is intelligible to the user, perceivable by sensors, and on the right level of abstraction, so that applications can use it to adapt their behavior. The issues of perception, modeling, and use of context information are thus strongly interrelated. Embedded in two application scenarios, we make contributions to the extraction of context from acceleration and audio sensors, the modeling of human interruptibility, and it’s estimation from acceleration, audio, and location sensors. We investigate the extraction of context from acceleration and audio data. We use body-worn acceleration sensors to classify the user’s physical activity. We developed a sensing platform which allows to record data from 12 three-dimensional acceleration sensors distributed over the body of the user. We classify activity of different complexity, such as sitting, walking, and writing on a white board, using a naïve Bayes’ classifier. We investigate which sensor placement on the body is best for recognizing such activities. We use auditory scene classification to extract context information about the social situation of the user. We classify the auditory scene of the user in street, restaurant, lecture, and conversation (plus a garbage class). We investigate which features are best suited for such a classification, and which feature selection mechanisms, sampling rates, and recognition windows are appropriate. The first application scenario is a meeting recorder that records not only audio and video of a meeting, but also additional, personal annotations from the user’s context. In this setting we make first contributions to the extraction of context information. We use acceleration sensors for simple activity recognition, and audio to identify different speakers in the recording and thus infer the flow of discussion and presentation. For the second application scenario, the estimation of the user’s interruptibility for automatic mediation of notifications, we developed a (context) model of human interruptibility. It distinguishes between the interruptibility of the user and that of the environment. We evaluate the model in a user study. We propose an algorithm to estimate the interruptibility within this model from sensor data. It combines low-level context information using so-called tendencies. A first version works on context from classifers trained in a supervised manner and uses hand-crafted tendencies. Although the algorithm produces good results with some 88-92% recognition score, it does not scale to large numbers of low-level contexts limiting the extensibility of the system. An improved version uses automatically found low-level contexts and learns the tendencies automatically. It thus allows to easily add new sensors and to adapt the system during run-time. We evaluated the algorithm on a data set of up to two days and obtained recognition scores of 90-97%.",
"title": ""
},
{
"docid": "483b57bef1158ae37c43ca9a92c1cda3",
"text": "Recently, advanced driver assistance system (ADAS) has attracted a lot of attention due to the fast growing industry of smart cars, which is believed to be the next human-computer interaction after smart phones. As ADAS is a critical enabling component in a human-in-the-loop cyber-physical system (CPS) involving complicated physical environment, it has stringent requirements on reliability, accuracy as well as latency. Lane and vehicle detections are the basic functions in ADAS, which provide lane departure warning (LDW) and forward collision warning (FCW) to predict the dangers and warn the drivers. While extensive literature exists on this topic, none of them considers the important fact that many vehicles today do not have powerful embedded electronics or cameras. It will be costly to upgrade the vehicle just for ADAS enhancement. To address this issue, we demonstrate a new framework that utilizes microprocessors in mobile devices with embedded cameras for advanced driver assistance. The main challenge that comes with this low cost solution is the dilemma between limited computing power and tight latency requirement, and uncalibrated camera and high accuracy requirement. Accordingly, we propose an efficient, accurate, flexible yet light-weight real-time lane and vehicle detection method and implement it on Android devices. Real road test results suggest that an average latency of 15 fps can be achieved with a high accuracy of 12.58 average pixel offset for each lane in all scenarios and 97+ precision for vehicle detection. To the best of the authors' knowledge, this is the very first implementation of both lane and vehicle detections on mobile devices with un-calibrated embedded camera.",
"title": ""
},
{
"docid": "39492127ee68a86b33a8a120c8c79f5d",
"text": "The Alternating Direction Method of Multipliers (ADMM) has received lots of attention recently due to the tremendous demand from large-scale and data-distributed machine learning applications. In this paper, we present a stochastic setting for optimization problems with non-smooth composite objective functions. To solve this problem, we propose a stochastic ADMM algorithm. Our algorithm applies to a more general class of convex and nonsmooth objective functions, beyond the smooth and separable least squares loss used in lasso. We also demonstrate the rates of convergence for our algorithm under various structural assumptions of the stochastic function: O(1/ √ t) for convex functions and O(log t/t) for strongly convex functions. Compared to previous literature, we establish the convergence rate of ADMM for convex problems in terms of both the objective value and the feasibility violation. A novel application named GraphGuided SVM is proposed to demonstrate the usefulness of our algorithm.",
"title": ""
},
{
"docid": "574aca6aa63dd17949fcce6a231cf2d3",
"text": "This paper presents an algorithm for segmenting the hair region in uncontrolled, real life conditions images. Our method is based on a simple statistical hair shape model representing the upper hair part. We detect this region by minimizing an energy which uses active shape and active contour. The upper hair region then allows us to learn the hair appearance parameters (color and texture) for the image considered. Finally, those parameters drive a pixel-wise segmentation technique that yields the desired (complete) hair region. We demonstrate the applicability of our method on several real images.",
"title": ""
},
{
"docid": "05da057559ac24f6780801aebd49cd48",
"text": "The ability of a classifier to recognize unknown inputs is important for many classification-based systems. We discuss the problem of simultaneous classification and novelty detection, i.e. determining whether an input is from the known set of classes and from which specific class, or from an unknown domain and does not belong to any of the known classes. We propose a method based on the Generative Adversarial Networks (GAN) framework. We show that a multi-class discriminator trained with a generator that generates samples from a mixture of nominal and novel data distributions is the optimal novelty detector. We approximate that generator with a mixture generator trained with the Feature Matching loss and empirically show that the proposed method outperforms conventional methods for novelty detection. Our findings demonstrate a simple, yet powerful new application of the GAN framework for the task of novelty detection.",
"title": ""
},
{
"docid": "a5090b67307b2efa1f8ae7d6a212a6ff",
"text": "Providing highly flexible connectivity is a major architectural challenge for hardware implementation of reconfigurable neural networks. We perform an analytical evaluation and comparison of different configurable interconnect architectures (mesh NoC, tree, shared bus and point-to-point) emulating variants of two neural network topologies (having full and random configurable connectivity). We derive analytical expressions and asymptotic limits for performance (in terms of bandwidth) and cost (in terms of area and power) of the interconnect architectures considering three communication methods (unicast, multicast and broadcast). It is shown that multicast mesh NoC provides the highest performance/cost ratio and consequently it is the most suitable interconnect architecture for configurable neural network implementation. Routing table size requirements and their impact on scalability were analyzed. Modular hierarchical architecture based on multicast mesh NoC is proposed to allow large scale neural networks emulation. Simulation results successfully validate the analytical models and the asymptotic behavior of the network as a function of its size. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "14ca9dfee206612e36cd6c3b3e0ca61e",
"text": "Radio-frequency identification (RFID) technology promises to revolutionize the way we track items in supply chain, retail store, and asset management applications. The size and different characteristics of RFID data pose many interesting challenges in the current data management systems. In this paper, we provide a brief overview of RFID technology and highlight a few of the data management challenges that we believe are suitable topics for exploratory research.",
"title": ""
},
{
"docid": "09f3bb814e259c74f1c42981758d5639",
"text": "PURPOSE OF REVIEW\nThe application of artificial intelligence in the diagnosis of obstructive lung diseases is an exciting phenomenon. Artificial intelligence algorithms work by finding patterns in data obtained from diagnostic tests, which can be used to predict clinical outcomes or to detect obstructive phenotypes. The purpose of this review is to describe the latest trends and to discuss the future potential of artificial intelligence in the diagnosis of obstructive lung diseases.\n\n\nRECENT FINDINGS\nMachine learning has been successfully used in automated interpretation of pulmonary function tests for differential diagnosis of obstructive lung diseases. Deep learning models such as convolutional neural network are state-of-the art for obstructive pattern recognition in computed tomography. Machine learning has also been applied in other diagnostic approaches such as forced oscillation test, breath analysis, lung sound analysis and telemedicine with promising results in small-scale studies.\n\n\nSUMMARY\nOverall, the application of artificial intelligence has produced encouraging results in the diagnosis of obstructive lung diseases. However, large-scale studies are still required to validate current findings and to boost its adoption by the medical community.",
"title": ""
}
] |
scidocsrr
|
74d78b9a39c9f1643fa0ccce7a0fdf83
|
TRESOR-HUNT: attacking CPU-bound encryption
|
[
{
"docid": "14dd650afb3dae58ffb1a798e065825a",
"text": "Copilot is a coprocessor-based kernel integrity monitor for commodity systems. Copilot is designed to detect malicious modifications to a host’s kernel and has correctly detected the presence of 12 real-world rootkits, each within 30 seconds of their installation with less than a 1% penalty to the host’s performance. Copilot requires no modifications to the protected host’s software and can be expected to operate correctly even when the host kernel is thoroughly compromised – an advantage over traditional monitors designed to run on the host itself.",
"title": ""
},
{
"docid": "14a5714f38f355fa967f1b3a4789f0f1",
"text": "Disk encryption has become an important security measure for a multitude of clients, including governments, corporations, activists, security-conscious professionals, and privacy-conscious individuals. Unfortunately, recent research has discovered an effective side channel attack against any disk mounted by a running machine [23]. This attack, known as the cold boot attack, is effective against any mounted volume using state-of-the-art disk encryption, is relatively simple to perform for an attacker with even rudimentary technical knowledge and training, and is applicable to exactly the scenario against which disk encryption is primarily supposed to defend: an adversary with physical access.\n While there has been some previous work in defending against this attack [27], the only currently available solution suffers from the twin problems of disabling access to the SSE registers and supporting only a single encrypted volume, hindering its usefulness for such common encryption scenarios as data and swap partitions encrypted with different keys (the swap key being a randomly generated throw-away key). We present Loop-Amnesia, a kernel-based disk encryption mechanism implementing a novel technique to eliminate vulnerability to the cold boot attack. We contribute a novel technique for shielding multiple encryption keys from RAM and a mechanism for storing encryption keys inside the CPU that does not interfere with the use of SSE. We offer theoretical justification of Loop-Amnesia's invulnerability to the attack, verify that our implementation is not vulnerable in practice, and present measurements showing our impact on I/O accesses to the encrypted disk is limited to a slowdown of approximately 2x. Loop-Amnesia is written for x86-64, but our technique is applicable to other register-based architectures. We base our work on loop-AES, a state-of-the-art open source disk encryption package for Linux.",
"title": ""
}
] |
[
{
"docid": "038c4b82654b3de5c6b49644942c77d6",
"text": "Continuous improvement of business processes is a challenging task that requires complex and robust supporting systems. Using advanced analytics methods and emerging technologies--such as business intelligence systems, business activity monitoring, predictive analytics, behavioral pattern recognition, and \"type simulations\"--can help business users continuously improve their processes. However, the high volumes of event data produced by the execution of processes during the business lifetime prevent business users from efficiently accessing timely analytics data. This article presents a technological solution using a big data approach to provide business analysts with visibility on distributed process and business performance. The proposed architecture lets users analyze business performance in highly distributed environments with a short time response. This article is part of a special issue on leveraging big data and business analytics.",
"title": ""
},
{
"docid": "7ea777ccae8984c26317876d804c323c",
"text": "The CRISPR/Cas (clustered regularly interspaced short palindromic repeats/CRISPR-associated proteins) system was first identified in bacteria and archaea and can degrade exogenous substrates. It was developed as a gene editing technology in 2013. Over the subsequent years, it has received extensive attention owing to its easy manipulation, high efficiency, and wide application in gene mutation and transcriptional regulation in mammals and plants. The process of CRISPR/Cas is optimized constantly and its application has also expanded dramatically. Therefore, CRISPR/Cas is considered a revolutionary technology in plant biology. Here, we introduce the mechanism of the type II CRISPR/Cas called CRISPR/Cas9, update its recent advances in various applications in plants, and discuss its future prospects to provide an argument for its use in the study of medicinal plants.",
"title": ""
},
{
"docid": "7b95b771e6194efb2deee35cfc179040",
"text": "A Bayesian nonparametric model is a Bayesian model on an infinite-dimensional parameter space. The parameter space is typically chosen as the set of all possible solutions for a given learning problem. For example, in a regression problem the parameter space can be the set of continuous functions, and in a density estimation problem the space can consist of all densities. A Bayesian nonparametric model uses only a finite subset of the available parameter dimensions to explain a finite sample of observations, with the set of dimensions chosen depending on the sample, such that the effective complexity of the model (as measured by the number of dimensions used) adapts to the data. Classical adaptive problems, such as nonparametric estimation and model selection, can thus be formulated as Bayesian inference problems. Popular examples of Bayesian nonparametric models include Gaussian process regression, in which the correlation structure is refined with growing sample size, and Dirichlet process mixture models for clustering, which adapt the number of clusters to the complexity of the data. Bayesian nonparametric models have recently been applied to a variety of machine learning problems, including regression, classification, clustering, latent variable modeling, sequential modeling, image segmentation, source separation and grammar induction.",
"title": ""
},
{
"docid": "bbf764205f770481b787e76db5a3b614",
"text": "A∗ is a popular path-finding algorithm, but it can only be applied to those domains where a good heuristic function is known. Inspired by recent methods combining Deep Neural Networks (DNNs) and trees, this study demonstrates how to train a heuristic represented by a DNN and combine it with A∗ . This new algorithm which we call א∗ can be used efficiently in domains where the input to the heuristic could be processed by a neural network. We compare א∗ to N-Step Deep QLearning (DQN Mnih et al. 2013) in a driving simulation with pixel-based input, and demonstrate significantly better performance in this scenario.",
"title": ""
},
{
"docid": "a9346f8d40a8328e963774f2604da874",
"text": "Abstract-Sign language is a lingua among the speech and the hearing impaired community. It is hard for most people who are not familiar with sign language to communicate without an interpreter. Sign language recognition appertains to track and recognize the meaningful emotion of human made with fingers, hands, head, arms, face etc. The technique that has been proposed in this work, transcribes the gestures from a sign language to a spoken language which is easily understood by the hearing. The gestures that have been translated include alphabets, words from static images. This becomes more important for the people who completely rely on the gestural sign language for communication tries to communicate with a person who does not understand the sign language. We aim at representing features which will be learned by a technique known as convolutional neural networks (CNN), contains four types of layers: convolution layers, pooling/subsampling layers, nonlinear layers, and fully connected layers. The new representation is expected to capture various image features and complex non-linear feature interactions. A softmax layer will be used to recognize signs. Keywords-Convolutional Neural Networks, Softmax (key words) __________________________________________________*****_________________________________________________",
"title": ""
},
{
"docid": "c4bc03788ce4273a219809ad059edaf2",
"text": "In nature, many animals are able to jump, upright themselves after landing and jump again. This allows them to move in unstructured and rough terrain. As a further development of our previously presented 7 g jumping robot, we consider various mechanisms enabling it to recover and upright after landing and jump again. After a weighted evaluation of these different solutions, we present a spherical system with a mass of 9.8 g and a diameter of 12 cm that is able to jump, upright itself after landing and jump again. In order to do so autonomously, it has a control unit and sensors to detect its orientation and spring charging state. With its current configuration it can overcome obstacles of 76 cm at a take-off angle of 75°.",
"title": ""
},
{
"docid": "c898f6186ff15dff41dcb7b3376b975d",
"text": "The future grid is evolving into a smart distribution network that integrates multiple distributed energy resources ensuring at the same time reliable operation and increased power quality. In recent years, many research papers have addressed the voltage violation problems that arise from the high penetration of distributed generation. In view of the transition to active network management and the increase in the quantity of collected data, distributed control schemes have been proposed that use pervasive communications to deal with the complexity of smart grid. This paper reviews the recent publications on distributed and decentralized voltage control of smart distribution networks, summarizes their control models, and classifies the solution methodologies. Moreover, it comments on issues that should be addressed in the future and the perspectives of industry applications.",
"title": ""
},
{
"docid": "54e2dfd355e9e082d9a6f8c266c84360",
"text": "The wealth and value of organizations are increasingly based on intellectual capital. Although acquiring talented individuals and investing in employee learning adds value to the organization, reaping the benefits of intellectual capital involves translating the wisdom of employees into reusable and sustained actions. This requires a culture that creates employee commitment, encourages learning, fosters sharing, and involves employees in decision making. An infrastructure to recognize and embed promising and best practices through social networks, evidence-based practice, customization of innovations, and use of information technology results in increased productivity, stronger financial performance, better patient outcomes, and greater employee and customer satisfaction.",
"title": ""
},
{
"docid": "efe70da1a3118e26acf10aa480ad778d",
"text": "Background: Facebook (FB) is becoming an increasingly salient feature in peoples’ lives and has grown into a bastion in our current society with over 1 billion users worldwide –the majority of which are college students. However, recent studies conducted suggest that the use of Facebook may impacts individuals’ well being. Thus, this paper aimed to explore the effects of Facebook usage on adolescents’ emotional states of depression, anxiety, and stress. Method and Material: A cross sectional design was utilized in this investigation. The study population included 76 students enrolled in the Bachelor of Science in Nursing program from a government university in Samar, Philippines. Facebook Intensity Scale (FIS) and the Depression Anxiety and Stress Scale (DASS) were the primary instruments used in this study. Results: Findings indicated correlation coefficients of 0.11 (p=0.336), 0.07 (p=0.536), and 0.10 (p=0.377) between Facebook Intensity Scale (FIS) and Depression, Anxiety, and Stress scales in the DASS. Time spent on FBcorrelated significantly with depression (r=0.233, p=0.041) and anxiety (r=0.259, p=0.023). Similarly, the three emotional states (depression, anxiety, and stress) correlated significantly. Conclusions: Intensity of Facebook use is not directly related to negative emotional states. However, time spent on Facebooking increases depression and anxiety scores. Implications of the findings to the fields of counseling and psychology are discussed.",
"title": ""
},
{
"docid": "684555a1b5eb0370eebee8cbe73a82ff",
"text": "This paper identifies and examines the key principles underlying building a state-of-the-art grammatical error correction system. We do this by analyzing the Illinois system that placed first among seventeen teams in the recent CoNLL-2013 shared task on grammatical error correction. The system focuses on five different types of errors common among non-native English writers. We describe four design principles that are relevant for correcting all of these errors, analyze the system along these dimensions, and show how each of these dimensions contributes to the performance.",
"title": ""
},
{
"docid": "0cf5f7521cccd0757be3a50617cf2473",
"text": "In 1997, Moody and Wu presented recurrent reinforcement learning (RRL) as a viable machine learning method within algorithmic trading. Subsequent research has shown a degree of controversy with regards to the benefits of incorporating technical indicators in the recurrent reinforcement learning framework. In 1991, Nison introduced Japanese candlesticks to the global research community as an alternative to employing traditional indicators within the technical analysis of financial time series. The literature accumulated over the past two and a half decades of research contains conflicting results with regards to the utility of using Japanese candlestick patterns to exploit inefficiencies in financial time series. In this paper, we combine features based on Japanese candlesticks with recurrent reinforcement learning to produce a high-frequency algorithmic trading system for the E-mini S&P 500 index futures market. Our empirical study shows a statistically significant increase in both return and Sharpe ratio compared to relevant benchmarks, suggesting the existence of exploitable spatio-temporal structure in Japanese candlestick patterns and the ability of recurrent reinforcement learning to detect and take advantage of this structure in a high-frequency equity index futures trading environment.",
"title": ""
},
{
"docid": "4c711149abc3af05a8e55e52eefddd97",
"text": "Scanning a halftone image introduces halftone artifacts, known as Moire patterns, which significantly degrade the image quality. Printers that use amplitude modulation (AM) screening for halftone printing position dots in a periodic pattern. Therefore, frequencies relating half toning arc easily identifiable in the frequency domain. This paper proposes a method for de screening scanned color halftone images using a custom band reject filter designed to isolate and remove only the frequencies related to half toning while leaving image edges sharp without image segmentation or edge detection. To enable hardware acceleration, the image is processed in small overlapped windows. The windows arc filtered individually in the frequency domain, then pieced back together in a method that does not show blocking artifacts.",
"title": ""
},
{
"docid": "ea6eecdaed8e76c28071ad1d9c1c39f9",
"text": "When it comes to taking the public transportation, time and patience are of essence. In other words, many people using public transport buses have experienced time loss because of waiting at the bus stops. In this paper, we proposed smart bus tracking system that any passenger with a smart phone or mobile device with the QR (Quick Response) code reader can scan QR codes placed at bus stops to view estimated bus arrival times, buses' current locations, and bus routes on a map. Anyone can access these maps and have the option to sign up to receive free alerts about expected bus arrival times for the interested buses and related routes via SMS and e-mails. We used C4.5 (a statistical classifier) algorithm for the estimation of bus arrival times to minimize the passengers waiting time. GPS (Global Positioning System) and Google Maps are used for navigation and display services, respectively.",
"title": ""
},
{
"docid": "48d2f38037b0cab83ca4d57bf19ba903",
"text": "The term sentiment analysis can be used to refer to many different, but related, problems. Most commonly, it is used to refer to the task of automatically determining the valence or polarity of a piece of text, whether it is positive, negative, or neutral. However, more generally, it refers to determining one’s attitude towards a particular target or topic. Here, attitude can mean an evaluative judgment, such as positive or negative, or an emotional or affectual attitude such as frustration, joy, anger, sadness, excitement, and so on. Note that some authors consider feelings to be the general category that includes attitude, emotions, moods, and other affectual states. In this chapter, we use ‘sentiment analysis’ to refer to the task of automatically determining feelings from text, in other words, automatically determining valence, emotions, and other affectual states from text. Osgood, Suci, and Tannenbaum (1957) showed that the three most prominent dimensions of meaning are evaluation (good–bad), potency (strong–weak), and activity (active– passive). Evaluativeness is roughly the same dimension as valence (positive–negative). Russell (1980) developed a circumplex model of affect characterized by two primary dimensions: valence and arousal (degree of reactivity to stimulus). Thus, it is not surprising that large amounts of work in sentiment analysis are focused on determining valence. (See survey articles by Pang and Lee (2008), Liu and Zhang (2012), and Liu (2015).) However, there is some work on automatically detecting arousal (Thelwall, Buckley, Paltoglou, Cai, & Kappas, 2010; Kiritchenko, Zhu, & Mohammad, 2014b; Mohammad, Kiritchenko, & Zhu, 2013a) and growing interest in detecting emotions such as anger, frustration, sadness, and optimism in text (Mohammad, 2012; Bellegarda, 2010; Tokuhisa, Inui, & Matsumoto, 2008; Strapparava & Mihalcea, 2007; John, Boucouvalas, & Xu, 2006; Mihalcea & Liu, 2006; Genereux & Evans, 2006; Ma, Prendinger, & Ishizuka, 2005; Holzman & Pottenger, 2003; Boucouvalas, 2002; Zhe & Boucouvalas, 2002). Further, massive amounts of data emanating from social media have led to significant interest in analyzing blog posts, tweets, instant messages, customer reviews, and Facebook posts for both valence (Kiritchenko et al., 2014b; Kiritchenko, Zhu, Cherry, & Mohammad, 2014a; Mohammad et al., 2013a; Aisopos, Papadakis, Tserpes, & Varvarigou, 2012; Bakliwal, Arora, Madhappan, Kapre, Singh, & Varma, 2012; Agarwal, Xie, Vovsha, Rambow, & Passonneau, 2011; Thelwall, Buckley, & Paltoglou, 2011; Brody & Diakopoulos, 2011; Pak & Paroubek, 2010) and emotions (Hasan, Rundensteiner, & Agu, 2014; Mohammad & Kiritchenko, 2014; Mohammad, Zhu, Kiritchenko, & Martin, 2014; Choudhury, Counts, & Gamon, 2012; Mohammad, 2012a; Wang, Chen, Thirunarayan, & Sheth, 2012; Tumasjan, Sprenger, Sandner, & Welpe, 2010b; Kim, Gilbert, Edwards, &",
"title": ""
},
{
"docid": "6d80c1d1435f016b124b2d61ef4437a5",
"text": "Recent high profile developments of autonomous learning thermostats by companies such as Nest Labs and Honeywell have brought to the fore the possibility of ever greater numbers of intelligent devices permeating our homes and working environments into the future. However, the specific learning approaches and methodologies utilised by these devices have never been made public. In fact little information is known as to the specifics of how these devices operate and learn about their environments or the users who use them. This paper proposes a suitable learning architecture for such an intelligent thermostat in the hope that it will benefit further investigation by the research community. Our architecture comprises a number of different learning methods each of which contributes to create a complete autonomous thermostat capable of controlling a HVAC system. A novel state action space formalism is proposed to enable a Reinforcement Learning agent to successfully control the HVAC system by optimising both occupant comfort and energy costs. Our results show that the learning thermostat can achieve cost savings of 10% over a programmable thermostat, whilst maintaining high occupant comfort standards.",
"title": ""
},
{
"docid": "e4546038f0102d0faac18ac96e50793d",
"text": "Ontologies have been increasingly used as a core representation formalism in medical information systems. Diagnosis is one of the highly relevant reasoning problems in this domain. In recent years this problem has captured attention also in the description logics community and various proposals on formalising abductive reasoning problems and their computational support appeared. In this paper, we focus on a practical diagnostic problem from a medical domain – the diagnosis of diabetes mellitus – and we try to formalize it in DL in such a way that the expected diagnoses are abductively derived. Our aim in this work is to analyze abductive reasoning in DL from a practical perspective, considering more complex cases than trivial examples typically considered by the theoryor algorithm-centered literature, and to evaluate the expressivity as well as the particular formulation of the abductive reasoning problem needed to capture medical diagnosis.",
"title": ""
},
{
"docid": "ed0444685c9a629c7d1fda7c4912fd55",
"text": "Citrus fruits have potential health-promoting properties and their essential oils have long been used in several applications. Due to biological effects described to some citrus species in this study our objectives were to analyze and compare the phytochemical composition and evaluate the anti-inflammatory effect of essential oils (EO) obtained from four different Citrus species. Mice were treated with EO obtained from C. limon, C. latifolia, C. aurantifolia or C. limonia (10 to 100 mg/kg, p.o.) and their anti-inflammatory effects were evaluated in chemical induced inflammation (formalin-induced licking response) and carrageenan-induced inflammation in the subcutaneous air pouch model. A possible antinociceptive effect was evaluated in the hot plate model. Phytochemical analyses indicated the presence of geranial, limonene, γ-terpinene and others. EOs from C. limon, C. aurantifolia and C. limonia exhibited anti-inflammatory effects by reducing cell migration, cytokine production and protein extravasation induced by carrageenan. These effects were also obtained with similar amounts of pure limonene. It was also observed that C. aurantifolia induced myelotoxicity in mice. Anti-inflammatory effect of C. limon and C. limonia is probably due to their large quantities of limonene, while the myelotoxicity observed with C. aurantifolia is most likely due to the high concentration of citral. Our results indicate that these EOs from C. limon, C. aurantifolia and C. limonia have a significant anti-inflammatory effect; however, care should be taken with C. aurantifolia.",
"title": ""
},
{
"docid": "60da71841669948e0a57ba4673693791",
"text": "AIMS\nStiffening of the large arteries is a common feature of aging and is exacerbated by a number of disorders such as hypertension, diabetes, and renal disease. Arterial stiffening is recognized as an important and independent risk factor for cardiovascular events. This article will provide a comprehensive review of the recent advance on assessment of arterial stiffness as a translational medicine biomarker for cardiovascular risk.\n\n\nDISCUSSIONS\nThe key topics related to the mechanisms of arterial stiffness, the methodologies commonly used to measure arterial stiffness, and the potential therapeutic strategies are discussed. A number of factors are associated with arterial stiffness and may even contribute to it, including endothelial dysfunction, altered vascular smooth muscle cell (SMC) function, vascular inflammation, and genetic determinants, which overlap in a large degree with atherosclerosis. Arterial stiffness is represented by biomarkers that can be measured noninvasively in large populations. The most commonly used methodologies include pulse wave velocity (PWV), relating change in vessel diameter (or area) to distending pressure, arterial pulse waveform analysis, and ambulatory arterial stiffness index (AASI). The advantages and limitations of these key methodologies for monitoring arterial stiffness are reviewed in this article. In addition, the potential utility of arterial stiffness as a translational medicine surrogate biomarker for evaluation of new potentially vascular protective drugs is evaluated.\n\n\nCONCLUSIONS\nAssessment of arterial stiffness is a sensitive and useful biomarker of cardiovascular risk because of its underlying pathophysiological mechanisms. PWV is an emerging biomarker useful for reflecting risk stratification of patients and for assessing pharmacodynamic effects and efficacy in clinical studies.",
"title": ""
},
{
"docid": "fc2a0f6979c2520cee8f6e75c39790a8",
"text": "In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires to generate semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates contents for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global contents consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.",
"title": ""
},
{
"docid": "92d5ebd49670681a5d43ba90731ae013",
"text": "Prior work has shown that return oriented programming (ROP) can be used to bypass W⊕X, a software defense that stops shellcode, by reusing instructions from large libraries such as libc. Modern operating systems have since enabled address randomization (ASLR), which randomizes the location of libc, making these techniques unusable in practice. However, modern ASLR implementations leave smaller amounts of executable code unrandomized and it has been unclear whether an attacker can use these small code fragments to construct payloads in the general case. In this paper, we show defenses as currently deployed can be bypassed with new techniques for automatically creating ROP payloads from small amounts of unrandomized code. We propose using semantic program verification techniques for identifying the functionality of gadgets, and design a ROP compiler that is resistant to missing gadget types. To demonstrate our techniques, we build Q, an end-to-end system that automatically generates ROP payloads for a given binary. Q can produce payloads for 80% of Linux /usr/bin programs larger than 20KB. We also show that Q can automatically perform exploit hardening: given an exploit that crashes with defenses on, Q outputs an exploit that bypasses both W⊕X and ASLR. We show that Q can harden nine realworld Linux and Windows exploits, enabling an attacker to automatically bypass defenses as deployed by industry for those programs.",
"title": ""
}
] |
scidocsrr
|
73a4d1682aba91604a801387f374f108
|
Support Vector Ordinal Regression
|
[
{
"docid": "55f68a0bb97f11b579a33881452a9d7c",
"text": "Machine learning methods for classification problems commonly assume that the class values are unordered. However, in many practical applications the class values do exhibit a natural order—for example, when learning how to grade. The standard approach to ordinal classification converts the class value into a numeric quantity and applies a regression learner to the transformed data, translating the output back into a discrete class value in a post-processing step. A disadvantage of this method is that it can only be applied in conjunction with a regression scheme. In this paper we present a simple method that enables standard classification algorithms to make use of ordering information in class attributes. By applying it in conjunction with a decision tree learner we show that it outperforms the naive approach, which treats the class values as an unordered set. Compared to special-purpose algorithms for ordinal classification our method has the advantage that it can be applied without any modification to the underlying learning scheme.",
"title": ""
}
] |
[
{
"docid": "b5ebb3d6abacc832fdb6d622fd63dad9",
"text": "With the rise of location-aware IoT devices, there is an increased desire to process data streams in a real-time manner. Responding to such streams may require processing data from multiple streams to inform decisions. There are many uses cases for putting the location data from the sensors or an analytic derivative on a map for a live view of sensors or other assets. Here we describe an architecture which relies solely on free and open-source components to provide streaming spatio-temporal event processing, analysis, and near-real-time visualization.",
"title": ""
},
{
"docid": "77f0b791691135b90cf231d6061a0a5f",
"text": "The hyperlink structure of Wikipedia forms a rich semantic network connecting entities and concepts, enabling it as a valuable source for knowledge harvesting. Wikipedia, as crowd-sourced data, faces various data quality issues which significantly impacts knowledge systems depending on it as the information source. One such issue occurs when an anchor text in a Wikipage links to a wrong Wikipage, causing the error link problem. While much of previous work has focused on leveraging Wikipedia for entity linking, little has been done to detect error links.\n In this paper, we address the error link problem, and propose algorithms to detect and correct error links. We introduce an efficient method to generate candidate error links based on iterative ranking in an Anchor Text Semantic Network. This greatly reduces the problem space. A more accurate pairwise learning model was used to detect error links from the reduced candidate error link set, while suggesting correct links in the same time. This approach is effective when data sparsity is a challenging issue. The experiments on both English and Chinese Wikipedia illustrate the effectiveness of our approach. We also provide a preliminary analysis on possible causes of error links in English and Chinese Wikipedia.",
"title": ""
},
{
"docid": "67da4c8ba04d3911118147b829ba9c50",
"text": "A methodology for the development of a fuzzy expert system (FES) with application to earthquake prediction is presented. The idea is to reproduce the performance of a human expert in earthquake prediction. To do this, at the first step, rules provided by the human expert are used to generate a fuzzy rule base. These rules are then fed into an inference engine to produce a fuzzy inference system (FIS) and to infer the results. In this paper, we have used a Sugeno type fuzzy inference system to build the FES. At the next step, the adaptive network-based fuzzy inference system (ANFIS) is used to refine the FES parameters and improve its performance. The proposed framework is then employed to attain the performance of a human expert used to predict earthquakes in the Zagros area based on the idea of coupled earthquakes. While the prediction results are promising in parts of the testing set, the general performance indicates that prediction methodology based on coupled earthquakes needs more investigation and more complicated reasoning procedure to yield satisfactory predictions.",
"title": ""
},
{
"docid": "12ee117f58c5bd5b6794de581bfcacdb",
"text": "The visualization of complex network traffic involving a large number of communication devices is a common yet challenging task. Traditional layout methods create the network graph with overwhelming visual clutter, which hinders the network understanding and traffic analysis tasks. The existing graph simplification algorithms (e.g. community-based clustering) can effectively reduce the visual complexity, but lead to less meaningful traffic representations. In this paper, we introduce a new method to the traffic monitoring and anomaly analysis of large networks, namely Structural Equivalence Grouping (SEG). Based on the intrinsic nature of the computer network traffic, SEG condenses the graph by more than 20 times while preserving the critical connectivity information. Computationally, SEG has a linear time complexity and supports undirected, directed and weighted traffic graphs up to a million nodes. We have built a Network Security and Anomaly Visualization (NSAV) tool based on SEG and conducted case studies in several real-world scenarios to show the effectiveness of our technique.",
"title": ""
},
{
"docid": "d16961dda88ba69040bb5e6cfed70781",
"text": "Agrarian sector in India is facing rigorous problem to maximize the crop productivity. More than 60 percent of the crop still depends on monsoon rainfall. Recent developments in Information Technology for agriculture field has become an interesting research area to predict the crop yield. The problem of yield prediction is a major problem that remains to be solved based on available data. Data Mining techniques are the better choices for this purpose. Different Data Mining techniques are used and evaluated in agriculture for estimating the future year's crop production. This paper presents a brief analysis of crop yield prediction using Multiple Linear Regression (MLR) technique and Density based clustering technique for the selected region i.e. East Godavari district of Andhra Pradesh in India.",
"title": ""
},
{
"docid": "b4d27850fecbc5d2154fdf1ac5e03f6a",
"text": "Measuring free-living peoples’ food intake represents methodological and technical challenges. The Remote Food Photography Method (RFPM) involves participants capturing pictures of their food selection and plate waste and sending these pictures to the research center via a wireless network, where they are analyzed by Registered Dietitians to estimate food intake. Initial tests indicate that the RFPM is reliable and valid, though the efficiency of the method is limited due to the reliance on human raters to estimate food intake. Herein, we describe the development of a semi-automated computer imaging application to estimate food intake based on pictures captured by participants.",
"title": ""
},
{
"docid": "8da9477e774902d4511d51a9ddb8b74b",
"text": "In modern system-on-chip architectures, specialized accelerators are increasingly used to improve performance and energy efficiency. The growing complexity of these systems requires the use of system-level design methodologies featuring high-level synthesis (HLS) for generating these components efficiently. Existing HLS tools, however, have limited support for the system-level optimization of memory elements, which typically occupy most of the accelerator area. We present a complete methodology for designing the private local memories (PLMs) of multiple accelerators. Based on the memory requirements of each accelerator, our methodology automatically determines an area-efficient architecture for the PLMs to guarantee performance and reduce the memory cost based on technology-related information. We implemented a prototype tool, called Mnemosyne, that embodies our methodology within a commercial HLS flow. We designed 13 complex accelerators for selected applications from two recently-released benchmark suites (Perfect and CortexSuite). With our approach we are able to reduce the memory cost of single accelerators by up to 45%. Moreover, when reusing memory IPs across accelerators, we achieve area savings that range between 17% and 55% compared to the case where the PLMs are designed separately.",
"title": ""
},
{
"docid": "18377326a8c12b527c641173da866284",
"text": "This paper presents a 4-channel analog front-end (AFE) for Electromyogram (EMG) acquisition systems. Each input channel consists of a chopper-stabilized instrumentation amplifier (IA) and a low-pass filer (LPF). A 15-bit analog-to-digital converter (ADC) with a buffer amplifier is shared with four input channels through multiplexer. An incremental ADC with a 1.5-bit second-order feed-forward topology is employed to achieve 15-bit resolution. The prototype AFE is fabricated in a 0.18 μm CMOS process with an active die area of 1.5 mm2. It achieves 3.2 μVrms input referred noise with a gain of 40 dB and a cutoff frequency of 500 Hz for LPF while consuming 3.713 mW from a 1.8V supply.",
"title": ""
},
{
"docid": "c3ca913fa81b2e79a2fff6d7a5e2fea7",
"text": "We present Query-Regression Network (QRN), a variant of Recurrent Neural Network (RNN) that is suitable for end-to-end machine comprehension. While previous work [18, 22] largely relied on external memory and global softmax attention mechanism, QRN is a single recurrent unit with internal memory and local sigmoid attention. Unlike most RNN-based models, QRN is able to effectively handle long-term dependencies and is highly parallelizable. In our experiments we show that QRN obtains the state-of-the-art result in end-to-end bAbI QA tasks [21].",
"title": ""
},
{
"docid": "c935ba16ca618659c8fcaa432425db22",
"text": "Dynamic Voltage/Frequency Scaling (DVFS) is a useful tool for improving system energy efficiency, especially in multi-core chips where energy is more of a limiting factor. Per-core DVFS, where cores can independently scale their voltages and frequencies, is particularly effective. We present a DVFS policy using machine learning, which learns the best frequency choices for a machine as a decision tree.\n Machine learning is used to predict the frequency which will minimize the expected energy per user-instruction (epui) or energy per (user-instruction)2 (epui2). While each core independently sets its frequency and voltage, a core is sensitive to other cores' frequency settings. Also, we examine the viability of using only partial training to train our policy, rather than full profiling for each program.\n We evaluate our policy on a 16-core machine running multiprogrammed, multithreaded benchmarks from the PARSEC benchmark suite against a baseline fixed frequency as well as a recently-proposed greedy policy. For 1ms DVFS intervals, our technique improves system epui2 by 14.4% over the baseline no-DVFS policy and 11.3% on average over the greedy policy.",
"title": ""
},
{
"docid": "421516992f06a42aba5e6d312ab342bf",
"text": "We present a fully unsupervised method for automated construction of WordNets based upon recent advances in distributional representations of sentences and word-senses combined with readily available machine translation tools. The approach requires very few linguistic resources and is thus extensible to multiple target languages. To evaluate our method we construct two 600-word test sets for word-to-synset matching in French and Russian using native speakers and evaluate the performance of our method along with several other recent approaches. Our method exceeds the best language-specific and multi-lingual automated WordNets in F-score for both languages. The databases we construct for French and Russian, both languages without large publicly available manually constructed WordNets, will be publicly released along with the test sets.",
"title": ""
},
{
"docid": "f31f45176e89163d27b065a52b429973",
"text": "Training neural networks involves solving large-scale non-convex optimization problems. This task has long been believed to be extremely difficult, with fear of local minima and other obstacles motivating a variety of schemes to improve optimization, such as unsupervised pretraining. However, modern neural networks are able to achieve negligible training error on complex tasks, using only direct training with stochastic gradient descent. We introduce a simple analysis technique to look for evidence that such networks are overcoming local optima. We find that, in fact, on a straight path from initialization to solution, a variety of state of the art neural networks never encounter any significant obstacles.",
"title": ""
},
{
"docid": "8f5028ec9b8e691a21449eef56dc267e",
"text": "It can be shown that by replacing the sigmoid activation function often used in neural networks with an exponential function, a neural network can be formed which computes nonlinear decision boundaries. This technique yields decision surfaces which approach the Bayes optimal under certain conditions. There is a continuous control of the linearity of the decision boundaries, from linear for small training sets to any degree of nonlinearity justified by larger training sets. A four-layer neural network of the type proposed can map any input pattern to any number of classifications. The input variables can be either continuous or binary. Modification of the decision boundaries based on new data can be accomplished in real time simply by defining a set of weights equal to the new training vector. The decision boundaries can be implemented using analog 'neurons', which operate entirely in parallel. The organization proposed takes into account the projected pin limitations of neural-net chips of the near future. By a change in architecture, these same components could be used as associative memories, to compute nonlinear multivariate regression surfaces, or to compute a posteriori probabilities of an event.<<ETX>>",
"title": ""
},
{
"docid": "0e8ab182a2ad85d19d9384de0ac5f359",
"text": "Nowadays, many applications need data modeling facilities for the description of complex objects with spatial and/or temporal facilities. Responses to such requirements may be found in Geographic Information Systems (GIS), in some DBMS, or in the research literature. However, most f existing models cover only partly the requirements (they address either spatial or temporal modeling), and most are at the logical level, h nce not well suited for database design. This paper proposes a spatiotemporal modeling approach at the conceptual level, called MADS. The proposal stems from the identification of the criteria to be met for a conceptual model. It is advocated that orthogonality is the key issue for achieving a powerful and intuitive conceptual model. Thus, the proposal focuses on highlighting similarities in the modeling of space and time, which enhance readability and understandability of the model.",
"title": ""
},
{
"docid": "b02d9621ee919bccde66418e0681d1e6",
"text": "A great deal of work has been done on the evaluation of information retrieval systems for alphanumeric data. The same thing can not be said about the newly emerging multimedia and image database systems. One of the central concerns in these systems is the automatic characterization of image content and retrieval of images based on similarity of image content. In this paper, we discuss effectiveness of several shape measures for content based similarity retrieval of images. The different shape measures we have implemented include outline based features (chain code based string features, Fourier descriptors, UNL Fourier features), region based features (invariant moments, Zemike moments, pseudoZemike moments), and combined features (invariant moments & Fourier descriptors, invariant moments & UNL Fourier features). Given an image, all these shape feature measures (vectors) are computed automatically, and the feature vector can either be used for the retrieval purpose or can be stored in the database for future queries. We have tested all of the above shape features for image retrieval on a database of 500 trademark images. The average retrieval efficiency values computed over a set of fifteen representative queries for all the methods is presented. The output of a sample shape similarity query using all the features is also shown.",
"title": ""
},
{
"docid": "68cb8836a07846d19118d21383f6361a",
"text": "Background: Dental rehabilitation of partially or totally edentulous patients with oral implants has become a routine treatment modality in the last decades, with reliable long-term results. However, unfavorable local conditions of the alveolar ridge, due to atrophy, periodontal disease, and trauma sequelae may provide insufficient bone volume or unfavorable vertical, horizontal, and sagittal intermaxillary relationships, which may render implant placement impossible or incorrect from a functional and esthetic viewpoint. The aim of the current review is to discuss the different strategies for reconstruction of the alveolar ridge defect for implant placement. Study design: The study design includes a literature review of the articles that address the association between Reconstruction of Mandibular Alveolar Ridge Defects and Implant Placement. Results: Yet, despite an increasing number of publications related to the correction of deficient alveolar ridges, much controversy still exists concerning which is the more suitable and reliable technique. This is often because the publications are of insufficient methodological quality (inadequate sample size, lack of well-defined exclusion and inclusion criteria, insufficient follow-up, lack of well-defined success criteria, etc.). Conclusion: On the basis of available data it is difficult to conclude that a particular surgical procedure offered better outcome as compared to another. Hence the practical use of the available bone augmentation procedures for dental implants depends on the clinician’s preference in general and the clinical findings in the patient in particular. Surgical techniques that reduce trauma, preserve and augment the alveolar ridge represent key areas in the goal to optimize implant results.",
"title": ""
},
{
"docid": "af3cc5fc9cf58048f9805923b45305d6",
"text": "Spell checkers are one of the most widely recognized and heavily employed features of word processing applications in existence today. This remains true despite the many problems inherent in the spell checking methods employed by all modern spell checkers. In this paper we present a proof-ofconcept spell checking system that is able to intrinsically avoid many of these problems. In particular, it is the actual corrections performed by the typist that provides the basis for error detection. These corrections are used to train a feed-forward neural network so that if the same error is remade, the network can flag the offending word as a possible error. Since these corrections are the observations of a single typist’s behavior, a spell checker employing this system is essentially specific to the typist that made the corrections. A discussion of the benefits and deficits of the system is presented with the conclusion that the system is most effective as a supplement to current spell checking methods.",
"title": ""
},
{
"docid": "39d3f1a5d40325bdc4bca9ee50241c9e",
"text": "This paper reviews the recent progress of quantum-dot semiconductor optical amplifiers developed as ultrawideband polarization-insensitive high-power amplifiers, high-speed signal regenerators, and wideband wavelength converters. A semiconductor optical amplifier having a gain of > 25 dB, noise figure of < 5 dB, and 3-dB saturation output power of > 20 dBm, over the record widest bandwidth of 90 nm among all kinds of optical amplifiers, and also having a penalty-free output power of 23 dBm, the record highest among all the semiconductor optical amplifiers, was realized by using quantum dots. By utilizing isotropically shaped quantum dots, the TM gain, which is absent in the standard Stranski-Krastanow QDs, has been drastically enhanced, and nearly polarization-insensitive SOAs have been realized for the first time. With an ultrafast gain response unique to quantum dots, an optical regenerator having receiver-sensitivity improving capability of 4 dB at a BER of 10-9 and operating speed of > 40 Gb/s has been successfully realized with an SOA chip. This performance achieved together with simplicity of structure suggests a potential for low-cost realization of regenerative transmission systems.",
"title": ""
},
{
"docid": "34da6d6beeeb2f8efbdf79c3d728f0b7",
"text": "We present a fully planar, easy to fabricate antenna for millimeter-wave communications, based on a new substrate-integrated waveguide (SIW). The SIW itself is entirely planar, since it is designed using series of side-by-side complementary split-ring resonators (CSRR), instead of vias, with the CSRR being etched on top and bottom metal ground surfaces that cover the dielectric substrate. This metamaterial-inspired structure provides a single-negative effective material parameter behavior that blocks wave propagation to the perpendicular direction. Hence, two parallel structures of this kind can be used to design an SIW, with propagation losses comparable to the conventional one. The antenna structure is consequently designed, by the use of proper cavities within the substrate and radiating slots on the ground structure and provides an easy to fabricate alternative for millimeter-wave and 5G communications.",
"title": ""
},
{
"docid": "d40a55317d8cdebfcd567ea11ad0960f",
"text": "This study examined the effects of self-presentation goals on the amount and type of verbal deception used by participants in same-gender and mixed-gender dyads. Participants were asked to engage in a conversation that was secretly videotaped. Self-presentational goal was manipulated, where one member of the dyad (the self-presenter) was told to either appear (a) likable, (b) competent, or (c) was told to simply get to know his or her partner (control condition). After the conversation, self-presenters were asked to review a video recording of the interaction and identify the instances in which they had deceived the other person. Overall, participants told more lies when they had a goal to appear likable or competent compared to participants in the control condition, and the content of the lies varied according to self-presentation goal. In addition, lies told by men and women differed in content, although not in quantity.",
"title": ""
}
] |
scidocsrr
|
da7b071b1d73ee2a3a4d5b34ec12408d
|
Beyond natives and immigrants: exploring types of net generation students
|
[
{
"docid": "cf5128cb4259ea87027ddd00189dc931",
"text": "This paper interrogates the currently pervasive discourse of the ‘net generation’ finding the concept of the ‘digital native’ especially problematic, both empirically and conceptually. We draw on a research project of South African higher education students’ access to and use of Information and Communication Technologies (ICTs) to show that age is not a determining factor in students’ digital lives; rather, their familiarity and experience using ICTs is more relevant. We also demonstrate that the notion of a generation of ‘digital natives’ is inaccurate: those with such attributes are effectively a digital elite. Instead of a new net generation growing up to replace an older analogue generation, there is a deepening digital divide in South Africa characterized not by age but by access and opportunity; indeed, digital apartheid is alive and well. We suggest that the possibility for digital democracy does exist in the form of a mobile society which is not age specific, and which is ubiquitous. Finally, we propose redefining the concepts ‘digital’, ‘net’, ‘native’, and ‘generation’ in favour of reclaiming the term ‘digitizen’.",
"title": ""
}
] |
[
{
"docid": "4a240b05fbb665596115841d238a483b",
"text": "BACKGROUND\nAttachment theory is one of the most important achievements of contemporary psychology. Role of medical students in the community health is important, so we need to know about the situation of happiness and attachment style in these students.\n\n\nOBJECTIVES\nThis study was aimed to assess the relationship between medical students' attachment styles and demographic characteristics.\n\n\nMATERIALS AND METHODS\nThis cross-sectional study was conducted on randomly selected students of Medical Sciences in Kurdistan University, in 2012. To collect data, Hazan and Shaver's attachment style measure and the Oxford Happiness Questionnaire were used. The results were analyzed using the SPSS software version 16 (IBM, Chicago IL, USA) and statistical analysis was performed via t-test, Chi-square test, and multiple regression tests.\n\n\nRESULTS\nSecure attachment style was the most common attachment style and the least common was ambivalent attachment style. Avoidant attachment style was more common among single persons than married people (P = 0.03). No significant relationship was observed between attachment style and gender and grade point average of the studied people. The mean happiness score of students was 62.71. In multivariate analysis, the variables of secure attachment style (P = 0.001), male gender (P = 0.005), and scholar achievement (P = 0.047) were associated with higher happiness score.\n\n\nCONCLUSION\nThe most common attachment style was secure attachment style, which can be a positive prognostic factor in medical students, helping them to manage stress. Higher frequency of avoidant attachment style among single persons, compared with married people, is mainly due to their negative attitude toward others and failure to establish and maintain relationships with others.",
"title": ""
},
{
"docid": "9b54c1afe7b7324aa61fe4c2d1a49342",
"text": "This work presents a pass-type ultrawideband power detector MMICs designed for operation from 10 MHz to 50 GHz in a wide dynamic range from -40 dBm to +25 dBm which were fabricated using GaAs zero bias diode process. Directional and non-directional detector designes are reviwed. For good wideband matching with transmission line, bonding wires parameters were taken into account at the stage of MMIC design. Result of this work includes on-wafer measurements of MMICs S-parameters and transfer characteristics.",
"title": ""
},
{
"docid": "85221954ced857c449acab8ee5cf801e",
"text": "IMSI Catchers are used in mobile networks to identify and eavesdrop on phones. When, the number of vendors increased and prices dropped, the device became available to much larger audiences. Self-made devices based on open source software are available for about US$ 1,500.\n In this paper, we identify and describe multiple methods of detecting artifacts in the mobile network produced by such devices. We present two independent novel implementations of an IMSI Catcher Catcher (ICC) to detect this threat against everyone's privacy. The first one employs a network of stationary (sICC) measurement units installed in a geographical area and constantly scanning all frequency bands for cell announcements and fingerprinting the cell network parameters. These rooftop-mounted devices can cover large areas. The second implementation is an app for standard consumer grade mobile phones (mICC), without the need to root or jailbreak them. Its core principle is based upon geographical network topology correlation, facilitating the ubiquitous built-in GPS receiver in today's phones and a network cell capabilities fingerprinting technique. The latter works for the vicinity of the phone by first learning the cell landscape and than matching it against the learned data. We implemented and evaluated both solutions for digital self-defense and deployed several of the stationary units for a long term field-test. Finally, we describe how to detect recently published denial of service attacks.",
"title": ""
},
{
"docid": "45c515da4f8e9c383f6d4e0fa6e09192",
"text": "In this paper, we demonstrate our Img2UML system tool. This system tool eliminates the gap between pixel-based diagram and engineering model, that it supports the extraction of the UML class model from images and produces an XMI file of the UML model. In addition to this, Img2UML offers a repository of UML class models of images that have been collected from the Internet. This project has both industrial and academic aims: for industry, this tool proposals a method that enables the updating of software design documentation (that typically contains UML images). For academia, this system unlocks a corpus of UML models that are publicly available, but not easily analyzable for scientific studies.",
"title": ""
},
{
"docid": "71b59076bf36de415c5cf6b86cec165f",
"text": "Most existing structure from motion (SFM) approaches for unordered images cannot handle multiple instances of the same structure in the scene. When image pairs containing different instances are matched based on visual similarity, the pairwise geometric relations as well as the correspondences inferred from such pairs are erroneous, which can lead to catastrophic failures in the reconstruction. In this paper, we investigate the geometric ambiguities caused by the presence of repeated or duplicate structures and show that to disambiguate between multiple hypotheses requires more than pure geometric reasoning. We couple an expectation maximization (EM)-based algorithm that estimates camera poses and identifies the false match-pairs with an efficient sampling method to discover plausible data association hypotheses. The sampling method is informed by geometric and image-based cues. Our algorithm usually recovers the correct data association, even in the presence of large numbers of false pairwise matches.",
"title": ""
},
{
"docid": "d338c807948016bf978aa7a03841f292",
"text": "Emotions accompany everyone in the daily life, playing a key role in non-verbal communication, and they are essential to the understanding of human behavior. Emotion recognition could be done from the text, speech, facial expression or gesture. In this paper, we concentrate on recognition of “inner” emotions from electroencephalogram (EEG) signals as humans could control their facial expressions or vocal intonation. The need and importance of the automatic emotion recognition from EEG signals has grown with increasing role of brain computer interface applications and development of new forms of human-centric and human-driven interaction with digital media. We propose fractal dimension based algorithm of quantification of basic emotions and describe its implementation as a feedback in 3D virtual environments. The user emotions are recognized and visualized in real time on his/her avatar adding one more so-called “emotion dimension” to human computer interfaces.",
"title": ""
},
{
"docid": "e3b1e52066d20e7c92e936cdb72cc32b",
"text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining of the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.",
"title": ""
},
{
"docid": "c481baeab2091672c044c889b1179b1f",
"text": "Our research is based on an innovative approach that integrates computational thinking and creative thinking in CS1 to improve student learning performance. Referencing Epstein's Generativity Theory, we designed and deployed a suite of creative thinking exercises with linkages to concepts in computer science and computational thinking, with the premise that students can leverage their creative thinking skills to \"unlock\" their understanding of computational thinking. In this paper, we focus on our study on differential impacts of the exercises on different student populations. For all students there was a linear \"dosage effect\" where completion of each additional exercise increased retention of course content. The impacts on course grades, however, were more nuanced. CS majors had a consistent increase for each exercise, while non-majors benefited more from completing at least three exercises. It was also important for freshmen to complete all four exercises. We did find differences between women and men but cannot draw conclusions.",
"title": ""
},
{
"docid": "57df6e1fcd71458e774a5492e8a370de",
"text": "Due to the phenomenal growth of online product reviews, sentiment analysis (SA) has gained huge attention, for example, by online service providers. A number of benchmark datasets for a wide range of domains have been made available for sentiment analysis, especially in resource-rich languages. In this paper we assess the challenges of SA in Hindi by providing a benchmark setup, where we create an annotated dataset of high quality, build machine learning models for sentiment analysis in order to show the effective usage of the dataset, and finally make the resource available to the community for further advancement of research. The dataset comprises of Hindi product reviews crawled from various online sources. Each sentence of the review is annotated with aspect term and its associated sentiment. As classification algorithms we use Conditional Random Filed (CRF) and Support Vector Machine (SVM) for aspect term extraction and sentiment analysis, respectively. Evaluation results show the average F-measure of 41.07% for aspect term extraction and accuracy of 54.05% for sentiment classification.",
"title": ""
},
{
"docid": "fcca051539729b005271e4f96563538d",
"text": "!is paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. !is approach is inspired by non-directive play therapy. !e experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under speci\"c conditions in order to guide the child or ask her questions about reasoning or a#ect related to the robot. !is approach has been tested in a longterm study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. !e children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and A#ect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. !ey also expressed some interest in the robot, including, on occasion, a#ect.",
"title": ""
},
{
"docid": "957a3970611470b611c024ed3b558115",
"text": "SHARE is a unique panel database of micro data on health, socio-economic status and social and family networks covering most of the European Union and Israel. To date, SHARE has collected three panel waves (2004, 2006, 2010) of current living circumstances and retrospective life histories (2008, SHARELIFE); 6 additional waves are planned until 2024. The more than 150 000 interviews give a broad picture of life after the age of 50 years, measuring physical and mental health, economic and non-economic activities, income and wealth, transfers of time and money within and outside the family as well as life satisfaction and well-being. The data are available to the scientific community free of charge at www.share-project.org after registration. SHARE is harmonized with the US Health and Retirement Study (HRS) and the English Longitudinal Study of Ageing (ELSA) and has become a role model for several ageing surveys worldwide. SHARE's scientific power is based on its panel design that grasps the dynamic character of the ageing process, its multidisciplinary approach that delivers the full picture of individual and societal ageing, and its cross-nationally ex-ante harmonized design that permits international comparisons of health, economic and social outcomes in Europe and the USA.",
"title": ""
},
{
"docid": "87199b3e7def1db3159dc6b5989638aa",
"text": "We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose two classes of data driven models in the Deterministic Fashion Recommenders (DFR) and Stochastic Fashion Recommenders (SFR) for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science. We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems. The industrial applicability of proposed models is in the context of mobile fashion shopping. Finally, we also outline a large-scale annotated data set of fashion images Fashion-136K) that can be exploited for future research in data driven visual fashion.",
"title": ""
},
{
"docid": "1dfbe95e53aeae347c2b42ef297a859f",
"text": "With the rapid growth of knowledge bases (KBs) on the web, how to take full advantage of them becomes increasingly important. Question answering over knowledge base (KB-QA) is one of the promising approaches to access the substantial knowledge. Meanwhile, as the neural networkbased (NN-based) methods develop, NNbased KB-QA has already achieved impressive results. However, previous work did not put more emphasis on question representation, and the question is converted into a fixed vector regardless of its candidate answers. This simple representation strategy is not easy to express the proper information in the question. Hence, we present an end-to-end neural network model to represent the questions and their corresponding scores dynamically according to the various candidate answer aspects via cross-attention mechanism. In addition, we leverage the global knowledge inside the underlying KB, aiming at integrating the rich KB information into the representation of the answers. As a result, it could alleviates the out-of-vocabulary (OOV) problem, which helps the crossattention model to represent the question more precisely. The experimental results on WebQuestions demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "cae4703a50910c7718284c6f8230a4bc",
"text": "Autonomous helicopter flight is widely regarded to be a highly challenging control problem. Despite this fact, human experts can reliably fly helicopters through a wide range of maneuvers, including aerobatic maneuvers at the edge of the helicopter’s capabilities. We present apprenticeship learning algorithms, which leverage expert demonstrations to efficiently learn good controllers for tasks being demonstrated by an expert. These apprenticeship learning algorithms have enabled us to significantly extend the state of the art in autonomous helicopter aerobatics. Our experimental results include the first autonomous execution of a wide range of maneuvers, including but not limited to in-place flips, in-place rolls, loops and hurricanes, and even auto-rotation landings, chaos and tic-tocs, which only exceptional human pilots can perform. Our results also include complete airshows, which require autonomous transitions between many of these maneuvers. Our controllers perform as well as, and often even better than, our expert pilot.",
"title": ""
},
{
"docid": "4ec6229ae75b13bbcc429f07eda0fb4a",
"text": "Face detection is a well-explored problem. Many challenges on face detectors like extreme pose, illumination, low resolution and small scales are studied in the previous work. However, previous proposed models are mostly trained and tested on good-quality images which are not always the case for practical applications like surveillance systems. In this paper, we first review the current state-of-the-art face detectors and their performance on benchmark dataset FDDB, and compare the design protocols of the algorithms. Secondly, we investigate their performance degradation while testing on low-quality images with different levels of blur, noise, and contrast. Our results demonstrate that both hand-crafted and deep-learning based face detectors are not robust enough for low-quality images. It inspires researchers to produce more robust design for face detection in the wild.",
"title": ""
},
{
"docid": "64f4c53592f185020bece88d4adf3ea4",
"text": "Due to the well-known limitations of diffusion tensor imaging, high angular resolution diffusion imaging (HARDI) is used to characterize non-Gaussian diffusion processes. One approach to analyzing HARDI data is to model the apparent diffusion coefficient (ADC) with higher order diffusion tensors. The diffusivity function is positive semidefinite. In the literature, some methods have been proposed to preserve positive semidefiniteness of second order and fourth order diffusion tensors. None of them can work for arbitrarily high order diffusion tensors. In this paper, we propose a comprehensive model to approximate the ADC profile by a positive semidefinite diffusion tensor of either second or higher order. We call this the positive semidefinite diffusion tensor (PSDT) model. PSDT is a convex optimization problem with a convex quadratic objective function constrained by the nonnegativity requirement on the smallest Z-eigenvalue of the diffusivity function. The smallest Z-eigenvalue is a computable measure of the extent of positive definiteness of the diffusivity function. We also propose some other invariants for the ADC profile analysis. Experiment results show that higher order tensors could improve the estimation of anisotropic diffusion and that the PSDT model can depict the characterization of diffusion anisotropy which is consistent with known neuroanatomy.",
"title": ""
},
{
"docid": "9efa07624d538272a5da844c74b2f56d",
"text": "Electronic health records (EHRs), digitization of patients’ health record, offer many advantages over traditional ways of keeping patients’ records, such as easing data management and facilitating quick access and real-time treatment. EHRs are a rich source of information for research (e.g. in data analytics), but there is a risk that the published data (or its leakage) can compromise patient privacy. The k-anonymity model is a widely used privacy model to study privacy breaches, but this model only studies privacy against identity disclosure. Other extensions to mitigate existing limitations in k-anonymity model include p-sensitive k-anonymity model, p+-sensitive k-anonymity model, and (p, α)-sensitive k-anonymity model. In this paper, we point out that these existing models are inadequate in preserving the privacy of end users. Specifically, we identify situations where p+sensitive k-anonymity model is unable to preserve the privacy of individuals when an adversary can identify similarities among the categories of sensitive values. We term such attack as Categorical Similarity Attack (CSA). Thus, we propose a balanced p+-sensitive k-anonymity model, as an extension of the p+-sensitive k-anonymity model. We then formally analyze the proposed model using High-Level Petri Nets (HLPN) and verify its properties using SMT-lib and Z3 solver.We then evaluate the utility of release data using standard metrics and show that our model outperforms its counterparts in terms of privacy vs. utility tradeoff. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d2ed4a8558c9ec9f794abd3cc22678e3",
"text": "Intelligent selection of training data has proven a successful technique to simultaneously increase training efficiency and translation performance for phrase-based machine translation (PBMT). With the recent increase in popularity of neural machine translation (NMT), we explore in this paper to what extent and how NMT can also benefit from data selection. While state-of-the-art data selection (Axelrod et al., 2011) consistently performs well for PBMT, we show that gains are substantially lower for NMT. Next, we introduce dynamic data selection for NMT, a method in which we vary the selected subset of training data between different training epochs. Our experiments show that the best results are achieved when applying a technique we call gradual fine-tuning, with improvements up to +2.6 BLEU over the original data selection approach and up to +3.1 BLEU over a general baseline.",
"title": ""
},
{
"docid": "55eb5594f05319c157d71361880f1983",
"text": "Following the growing share of wind energy in electric power systems, several wind power forecasting techniques have been reported in the literature in recent years. In this paper, a wind power forecasting strategy composed of a feature selection component and a forecasting engine is proposed. The feature selection component applies an irrelevancy filter and a redundancy filter to the set of candidate inputs. The forecasting engine includes a new enhanced particle swarm optimization component and a hybrid neural network. The proposed wind power forecasting strategy is applied to real-life data from wind power producers in Alberta, Canada and Oklahoma, U.S. The presented numerical results demonstrate the efficiency of the proposed strategy, compared to some other existing wind power forecasting methods.",
"title": ""
}
] |
scidocsrr
|
7ee693e601fe6094db69982b3f7a0db2
|
Parallel Lattice Boltzmann Methods for CFD Applications
|
[
{
"docid": "66532253c6a60d6c406717964c308879",
"text": "We present an overview of the lattice Boltzmann method (LBM), a parallel and efficient algorithm for simulating single-phase and multiphase fluid flows and for incorporating additional physical complexities. The LBM is especially useful for modeling complicated boundary conditions and multiphase interfaces. Recent extensions of this method are described, including simulations of fluid turbulence, suspension flows, and reaction diffusion systems.",
"title": ""
}
] |
[
{
"docid": "cd068158b6bebadfb8242b6412ec5bbb",
"text": "artefacts, 65–67 built environments and, 67–69 object artefacts, 65–66 structuralism and, 66–67 See also Non–discursive technique Asymmetry, 88–89, 91 Asynchronous systems, 187 Autonomous architecture, 336–338",
"title": ""
},
{
"docid": "f14e128c17a95e8f549f822dad408133",
"text": "Capparis Spinosa L. is an aromatic plant growing wild in dry regions around the Mediterranean basin. Capparis Spinosa was shown to possess several properties such as antioxidant, antifungal, and anti-hepatotoxic actions. In this work, we aimed to evaluate immunomodulatory properties of Capparis Spinosa leaf extracts in vitro on human peripheral blood mononuclear cells (PBMCs) from healthy individuals. Using MTT assay, we identified a range of Capparis Spinosa doses, which were not toxic. Unexpectedly, we found out that Capparis Spinosa aqueous fraction exhibited an increase in cell metabolic activity, even though similar doses did not affect cell proliferation as shown by CFSE. Interestingly, Capparis Spinosa aqueous fraction appeared to induce an overall anti-inflammatory response through significant inhibition of IL-17 and induction of IL-4 gene expression when PBMCs were treated with the non toxic doses of 100 and/or 500 μg/ml. Phytoscreening analysis of the used Capparis Spinosa preparations showed that these contain tannins; sterols, alkaloids; polyphenols and flavonoids. Surprisingly, quantification assays showed that our Capparis Spinosa preparation contains low amounts of polyphenols relative to Capparis Spinosa used in other studies. This Capparis Spinosa also appeared to act as a weaker scavenging free radical agent as evidenced by DPPH radical scavenging test. Finally, polyphenolic compounds including catechin, caffeic acid, syringic acid, rutin and ferulic acid were identified by HPLC, in the Capparis spinosa preparation. Altogether, these findings suggest that our Capparis Spinosa preparation contains interesting compounds, which could be used to suppress IL-17 and to enhance IL-4 gene expression in certain inflammatory situations. Other studies are underway in order to identify the compound(s) underlying this effect.",
"title": ""
},
{
"docid": "bcd0474289ba78d44853b5d278c1d2a9",
"text": "Manifold Learning (ML) is a class of algorithms seeking a low-dimensional non-linear representation of high-dimensional data. Thus, ML algorithms are most applicable to highdimensional data and require large sample sizes to accurately estimate the manifold. Despite this, most existing manifold learning implementations are not particularly scalable. Here we present a Python package that implements a variety of manifold learning algorithms in a modular and scalable fashion, using fast approximate neighbors searches and fast sparse eigendecompositions. The package incorporates theoretical advances in manifold learning, such as the unbiased Laplacian estimator introduced by Coifman and Lafon (2006) and the estimation of the embedding distortion by the Riemannian metric method introduced by Perrault-Joncas and Meila (2013). In benchmarks, even on a single-core desktop computer, our code embeds millions of data points in minutes, and takes just 200 minutes to embed the main sample of galaxy spectra from the Sloan Digital Sky Survey— consisting of 0.6 million samples in 3750-dimensions—a task which has not previously been possible.",
"title": ""
},
{
"docid": "a79d143c1b873661378534871057cb0c",
"text": "The Internet has provided a network infrastructure with global connectivity for the games industry to develop and deploy online games. However, unlike the document interface paradigm of the World Wide Web (WWW), these online games have more stringent requirements that are not fulfilled by the Internet's best effort service model.A key characteristic of online games is the possibility of having multiple participants share the same experience. Consequently, the volatile nature of the Internet can affect the enjoyment of all, or at the very least a few, of the users. To ameliorate the impact caused by network problems that may arise during game play, game developers have adopted adaptation techniques in the design and implementation of online games. However, little is known of how the user perceives these mechanisms.This paper presents the results of a questionnaire targeted at the online gaming community to provide insight into what users really think of the Internet and its impact on their playing experience. One of the main results is to demonstrate that the existing mechanisms fail to maintain the utility of the game at all times, leading to frustration on the part of the users. In spite of this, users are not willing to pay for any service guarantees.",
"title": ""
},
{
"docid": "01572c84840fe3449dca555a087d2551",
"text": "A printed two-multiple-input multiple-output (MIMO)-antenna system incorporating a neutralization line for antenna port decoupling for wireless USB-dongle applications is proposed. The two monopoles are located on the two opposite corners of the system PCB and spaced apart by a small ground portion, which serves as a layout area for antenna feeding network and connectors for the use of standalone antennas as an optional scheme. It was found that by removing only 1.5 mm long inwards from the top edge in the small ground portion and connecting the two antennas therein with a thin printed line, the antenna port isolation can be effectively improved. The neutralization line in this study occupies very little board space, and the design requires no conventional modification to the ground plane for mitigating mutual coupling. The behavior of the neutralization line was rigorously analyzed, and the MIMO characteristics of the proposed antennas was also studied and tested in the reverberation chamber. Details of the constructed prototype are described and discussed in this paper.",
"title": ""
},
{
"docid": "195b68a3d0d12354c256c2a1ddeb2b28",
"text": "Reinforcement learning (RL) is a popular machine learning technique that has many successes in learning how to play classic style games. Applying RL to first person shooter (FPS) games is an interesting area of research as it has the potential to create diverse behaviors without the need to implicitly code them. This paper investigates the tabular Sarsa (λ) RL algorithm applied to a purpose built FPS game. The first part of the research investigates using RL to learn bot controllers for the tasks of navigation, item collection, and combat individually. Results showed that the RL algorithm was able to learn a satisfactory strategy for navigation control, but not to the quality of the industry standard pathfinding algorithm. The combat controller performed well against a rule-based bot, indicating promising preliminary results for using RL in FPS games. The second part of the research used pretrained RL controllers and then combined them by a number of different methods to create a more generalized bot artificial intelligence (AI). The experimental results indicated that RL can be used in a generalized way to control a combination of tasks in FPS bots such as navigation, item collection, and combat.",
"title": ""
},
{
"docid": "7efa3543711bc1bb6e3a893ed424b75d",
"text": "This dissertation is concerned with the creation of training data and the development of probability models for statistical parsing of English with Combinatory Categorial Grammar (CCG). Parsing, or syntactic analysis, is a prerequisite for semantic interpretation, and forms therefore an integral part of any system which requires natural language understanding. Since almost all naturally occurring sentences are ambiguous, it is not sufficient (and often impossible) to generate all possible syntactic analyses. Instead, the parser needs to rank competing analyses and select only the most likely ones. A statistical parser uses a probability model to perform this task. I propose a number of ways in which such probability models can be defined for CCG. The kinds of models developed in this dissertation, generative models over normal-form derivation trees, are particularly simple, and have the further property of restricting the set of syntactic analyses to those corresponding to a canonical derivation structure. This is important to guarantee that parsing can be done efficiently. In order to achieve high parsing accuracy, a large corpus of annotated data is required to estimate the parameters of the probability models. Most existing wide-coverage statistical parsers use models of phrase-structure trees estimated from the Penn Treebank, a 1-million-word corpus of manually annotated sentences from the Wall Street Journal. This dissertation presents an algorithm which translates the phrase-structure analyses of the Penn Treebank to CCG derivations. The resulting corpus, CCGbank, is used to train and test the models proposed in this dissertation. Experimental results indicate that parsing accuracy (when evaluated according to a comparable metric, the recovery of unlabelled word-word dependency relations), is as high as that of standard Penn Treebank parsers which use similar modelling techniques. Most existing wide-coverage statistical parsers use simple phrase-structure grammars whose syntactic analyses fail to capture long-range dependencies, and therefore do not correspond to directly interpretable semantic representations. By contrast, CCG is a grammar formalism in which semantic representations that include long-range dependencies can be built directly during the derivation of syntactic structure. These dependencies define the predicate-argument structure of a sentence, and are used for two purposes in this dissertation: First, the performance of the parser can be evaluated according to how well it recovers these dependencies. In contrast to purely syntactic evaluations, this yields a direct measure of how accurate the semantic interpretations returned by the parser are. Second, I propose a generative model that captures the local and non-local dependencies in the predicate-argument structure, and investigate the impact of modelling non-local in addition to local dependencies.",
"title": ""
},
{
"docid": "cf3e66247ab575b5a8e5fe1678c209bd",
"text": "Metamorphic testing (MT) is an effective methodology for testing those so-called ``non-testable'' programs (e.g., scientific programs), where it is sometimes very difficult for testers to know whether the outputs are correct. In metamorphic testing, metamorphic relations (MRs) (which specify how particular changes to the input of the program under test would change the output) play an essential role. However, testers may typically have to obtain MRs manually.\n In this paper, we propose a search-based approach to automatic inference of polynomial MRs for a program under test. In particular, we use a set of parameters to represent a particular class of MRs, which we refer to as polynomial MRs, and turn the problem of inferring MRs into a problem of searching for suitable values of the parameters. We then dynamically analyze multiple executions of the program, and use particle swarm optimization to solve the search problem. To improve the quality of inferred MRs, we further use MR filtering to remove some inferred MRs.\n We also conducted three empirical studies to evaluate our approach using four scientific libraries (including 189 scientific functions). From our empirical results, our approach is able to infer many high-quality MRs in acceptable time (i.e., from 9.87 seconds to 1231.16 seconds), which are effective in detecting faults with no false detection.",
"title": ""
},
{
"docid": "5b021c0223ee25535508eb1d6f63ff55",
"text": "A 32-KB standard CMOS antifuse one-time programmable (OTP) ROM embedded in a 16-bit microcontroller as its program memory is designed and implemented in 0.18-mum standard CMOS technology. The proposed 32-KB OTP ROM cell array consists of 4.2 mum2 three-transistor (3T) OTP cells where each cell utilizes a thin gate-oxide antifuse, a high-voltage blocking transistor, and an access transistor, which are all compatible with standard CMOS process. In order for high density implementation, the size of the 3T cell has been reduced by 80% in comparison to previous work. The fabricated total chip size, including 32-KB OTP ROM, which can be programmed via external I 2C master device such as universal I2C serial EEPROM programmer, 16-bit microcontroller with 16-KB program SRAM and 8-KB data SRAM, peripheral circuits to interface other system building blocks, and bonding pads, is 9.9 mm2. This paper describes the cell, design, and implementation of high-density CMOS OTP ROM, and shows its promising possibilities in embedded applications",
"title": ""
},
{
"docid": "59b928fab5d53519a0a020b7461690cf",
"text": "Musical genres are categorical descriptions that are used to describe music. They are commonly used to structure the increasing amounts of music available in digital form on the Web and are important for music information retrieval. Genre categorization for audio has traditionally been performed manually. A particular musical genre is characterized by statistical properties related to the instrumentation, rhythmic structure and form of its members. In this work, algorithms for the automatic genre categorization of audio signals are described. More specifically, we propose a set of features for representing texture and instrumentation. In addition a novel set of features for representing rhythmic structure and strength is proposed. The performance of those feature sets has been evaluated by training statistical pattern recognition classifiers using real world audio collections. Based on the automatic hierarchical genre classification two graphical user interfaces for browsing and interacting with large audio collections have been developed.",
"title": ""
},
{
"docid": "bcfc8566cf73ec7c002dcca671e3a0bd",
"text": "of the thoracic spine revealed a 1.1 cm intradural extramedullary mass at the level of the T2 vertebral body (Figure 1a). Spinal neurosurgery was planned due to exacerbation of her chronic back pain and progressive weakness of the lower limbs at 28 weeks ’ gestation. Emergent spinal decompression surgery was performed with gross total excision of the tumour. Doppler fl ow of the umbilical artery was used preoperatively and postoperatively to monitor fetal wellbeing. Th e histological examination revealed HPC, World Health Organization (WHO) grade 2 (Figure 1b). Complete recovery was seen within 1 week of surgery. Follow-up MRI demonstrated complete removal of the tumour. We recommended adjuvant external radiotherapy to the patient in the 3rd trimester of pregnancy due to HPC ’ s high risk of recurrence. However, the patient declined radiotherapy. Routine weekly obstetric assessments were performed following surgery. At the 37th gestational week, a 2,850 g, Apgar score 7 – 8, healthy infant was delivered by caesarean section, without need of admission to the neonatal intensive care unit. Adjuvant radiotherapy was administered to the patient in the postpartum period.",
"title": ""
},
{
"docid": "77437d225dcc535fdbe5a7e66e15f240",
"text": "We are interested in automatic scene understanding from geometric cues. To this end, we aim to bring semantic segmentation in the loop of real-time reconstruction. Our semantic segmentation is built on a deep autoencoder stack trained exclusively on synthetic depth data generated from our novel 3D scene library, SynthCam3D. Importantly, our network is able to segment real world scenes without any noise modelling. We present encouraging preliminary results.",
"title": ""
},
{
"docid": "ba4d30e7ea09d84f8f7d96c426e50f34",
"text": "Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Consider a user-item bipartite graph where each edge in the graph between user U to item I, indicates that user U likes item I. We also represent the ratings matrix for this set of users and items as R, where each row in R corresponds to a user and each column corresponds to an item. If user i likes item j, then R i,j = 1, otherwise R i,j = 0. Also assume we have m users and n items, so matrix R is m × n.",
"title": ""
},
{
"docid": "b91204ac8a118fcde9a774e925f24a7e",
"text": "Document clustering has been recognized as a central problem in text data management. Such a problem becomes particularly challenging when document contents are characterized by subtopical discussions that are not necessarily relevant to each other. Existing methods for document clustering have traditionally assumed that a document is an indivisible unit for text representation and similarity computation, which may not be appropriate to handle documents with multiple topics. In this paper, we address the problem of multi-topic document clustering by leveraging the natural composition of documents in text segments that are coherent with respect to the underlying subtopics. We propose a novel document clustering framework that is designed to induce a document organization from the identification of cohesive groups of segment-based portions of the original documents. We empirically give evidence of the significance of our segment-based approach on large collections of multi-topic documents, and we compare it to conventional methods for document clustering.",
"title": ""
},
{
"docid": "063598613ce313e2ad6d2b0697e0c708",
"text": "Contour shape descriptors are among the important shape description methods. Fourier descriptors (FD) and curvature scale space descriptors (CSSD) are widely used as contour shape descriptors for image retrieval in the literature. In MPEG-7, CSSD has been proposed as one of the contour-based shape descriptors. However, no comprehensive comparison has been made between these two shape descriptors. In this paper we study and compare FD and CSSD using standard principles and standard database. The study targets image retrieval application. Our experimental results show that FD outperforms CSSD in terms of robustness, low computation, hierarchical representation, retrieval performance and suitability for efficient indexing.",
"title": ""
},
{
"docid": "78ce9ddb8fbfeb801455a76a3a6b0af2",
"text": "Deeply embedded domain-specific languages (EDSLs) intrinsically compromise programmer experience for improved program performance. Shallow EDSLs complement them by trading program performance for good programmer experience. We present Yin-Yang, a framework for DSL embedding that uses Scala macros to reliably translate shallow EDSL programs to the corresponding deep EDSL programs. The translation allows program prototyping and development in the user friendly shallow embedding, while the corresponding deep embedding is used where performance is important. The reliability of the translation completely conceals the deep em- bedding from the user. For the DSL author, Yin-Yang automatically generates the deep DSL embeddings from their shallow counterparts by reusing the core translation. This obviates the need for code duplication and leads to reliability by construction.",
"title": ""
},
{
"docid": "42908bdaa9e72da204630d2ac25ed830",
"text": "We propose FINET, a system for detecting the types of named entities in short inputs—such as sentences or tweets—with respect to WordNet’s super fine-grained type system. FINET generates candidate types using a sequence of multiple extractors, ranging from explicitly mentioned types to implicit types, and subsequently selects the most appropriate using ideas from word-sense disambiguation. FINET combats data scarcity and noise from existing systems: It does not rely on supervision in its extractors and generates training data for type selection from WordNet and other resources. FINET supports the most fine-grained type system so far, including types with no annotated training data. Our experiments indicate that FINET outperforms state-of-the-art methods in terms of recall, precision, and granularity of extracted types.",
"title": ""
},
{
"docid": "ed978d6faa32953259ec26a879ed3ce4",
"text": "A novel and simple configuration of the planar transmission line transformer using coupled microstrip lines is proposed. A design methodology is also presented. The simulated and measured results demonstrate broadband impedance transformation with good efficiency for RF and microwave circuit design. The effects of the deviation from the optimal characteristic impedance and transformation ratio are investigated as well.",
"title": ""
},
{
"docid": "a2c342240ac430c3bea93192bbe46e86",
"text": "BACKGROUND\nAcne scarring is common but surprisingly difficult to treat. Newer techniques and modifications to older ones may make this refractory problem more manageable. The 100% trichloroacetic acid (TCA) chemical reconstruction of skin scars (CROSS) method is a safe and effective single modality for the treatment of atrophic acne scars, whereas subcision appears to be a safe technique that provides significant improvement for rolling acne scars.\n\n\nOBJECTIVE\nTo compare the effect of the 100% TCA CROSS method with subcision in treating rolling acne scars.\n\n\nMETHODS\nTwenty patients of skin types III and IV with bilateral rolling acne scars received one to three sessions of the 100% TCA CROSS technique for scars on the left side of the face and subcision for scars on the right side.\n\n\nRESULTS\nThe mean decrease in size and depth of scars was significantly greater for the subcision side than the 100% TCA CROSS (p<.001). More side effects in the form of pigmentary alteration were observed with the 100% TCA CROSS method.\n\n\nCONCLUSION\nFor rolling acne scars in patients with Fitzpatrick skin types III and IV, subcision shows better results with fewer side effects than the 100% TCA CROSS technique, although further decrease in scar depth with time occurs more significantly after 100% TCA CROSS.",
"title": ""
}
] |
scidocsrr
|
d515ec7c835388527482f06e5e3f9826
|
RADAR INTERFEROMETRY AND ITS APPLICATION TO CHANGES IN THE EARTH ' S SURFACE
|
[
{
"docid": "6b467ec8262144150b17cedb3d96edcb",
"text": "We describe a new method of measuring surface currents using an interferometric synthetic aperture radar. An airborne implementation has been tested over the San Francisco Bay near the time of maximum tidal flow, resulting in a map of the east-west component of the current. Only the line-of-sight component of velocity is measured by this technique. Where the signal-to-noise ratio was strongest, statistical fluctuations of less than 4 cm s−1 were observed for ocean patches of 60×60 m.",
"title": ""
}
] |
[
{
"docid": "8e6efa696b960cf08cf1616efc123cbd",
"text": "SLAM (Simultaneous Localization and Mapping) for underwater vehicles is a challenging research topic due to the limitations of underwater localization sensors and error accumulation over long-term operations. Furthermore, acoustic sensors for mapping often provide noisy and distorted images or low-resolution ranging, while video images provide highly detailed images but are often limited due to turbidity and lighting. This paper presents a review of the approaches used in state-of-the-art SLAM techniques: Extended Kalman Filter SLAM (EKF-SLAM), FastSLAM, GraphSLAM and its application in underwater environments.",
"title": ""
},
{
"docid": "3538d14694af47dc0fb31696913da15a",
"text": "Complex queries are becoming commonplace, with the growing use of decision support systems. These complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. Multiquery optimization aims at exploiting common sub-expressions to reduce evaluation cost. Multi-query optimization has hither-to been viewed as impractical, since earlier algorithms were exhaustive, and explore a doubly exponential search space.\nIn this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that improve efficiency greatly. Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algorithms, using workloads consisting of queries from the TPC-D benchmark. The study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time.",
"title": ""
},
{
"docid": "cf074f806c9b78947c54fb7f41167d9e",
"text": "Applications of Machine Learning to Support Dementia Care through Commercially Available O↵-the-Shelf Sensing",
"title": ""
},
{
"docid": "466da603d1cf8558fcb8b48ac52d98cf",
"text": "BACKGROUND\nThe tuberculosis (TB) epidemic in South Africa is characterised by one of the highest levels of TB/HIV co-infection and growing multidrug-resistant TB worldwide. Hospitals play a central role in the management of TB. We investigated nurses' experiences of factors influencing TB infection prevention and control (IPC) practices to identify risks associated with potential nosocomial transmission.\n\n\nMETHODS\nThe qualitative study employed a phenomenological approach, using semi-structured interviews with a quota sample of 20 nurses in a large tertiary academic hospital in Cape Town, South Africa. The data was subjected to thematic analysis.\n\n\nRESULTS\nNurses expressed concerns about the possible risk of TB transmission to both patients and staff. Factors influencing TB-IPC, and increasing the potential risk of nosocomial transmission, emerged in interconnected overarching themes. Influences related to the healthcare system included suboptimal IPC provision such as the lack of isolation facilities and personal protective equipment, and the lack of a TB-IPC policy. Further influences included inadequate TB training for staff and patients, communication barriers owing to cultural and linguistic differences between staff and patients, the excessive workload of nurses, and a sense of duty of care. Influences related to wider contextual conditions included TB concerns and stigma, and the role of traditional healers. Influences related to patient behaviour included late uptake of hospital care owing to poverty and the use of traditional medicine, and poor adherence to IPC measures by patients, family members and carers.\n\n\nCONCLUSIONS\nSeveral interconnected influences related to the healthcare system, wider contextual conditions and patient behavior could increase the potential risk of nosocomial TB transmission at hospital level. There is an urgent need for the implementation and evaluation of a comprehensive contextually appropriate TB IPC policy with the setting and auditing of standards for IPC provision and practice, adequate TB training for both staff and patients, and the establishment of a cross-cultural communication strategy, including rapid access to interpreters.",
"title": ""
},
{
"docid": "177b020fd9cd0fec6d6f01bdb6114b97",
"text": "A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first-and zero-order phase corrections each by the inverse multiplication of estimated phase error. The first-order error is estimated by the phase of autocorrelation calculated from the complex valued phase distorted image while the zero-order correction factor is extracted from the histogram of phase distribution of the first-order corrected image. Since all the correction procedures are performed on the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm can be applicable to most of the phase-involved NMR imaging techniques including inversion recovery imaging, quadrature modulated imaging, spectroscopic imaging, and flow imaging, etc. Some experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.",
"title": ""
},
{
"docid": "5d3ae892c7cbe056734c9b098e018377",
"text": "Information on the Nuclear Magnetic Resonance Gyro under development by Northrop Grumman Corporation is presented. The basics of Operation are summarized, a review of the completed phases is presented, and the current state of development and progress in phase 4 is discussed. Many details have been left out for the sake of brevity, but the principles are still complete.",
"title": ""
},
{
"docid": "517454eb09e377bb157926e196094a2e",
"text": "Wireless sensor networks are one of the emerging areas which have equipped scientists with the capability of developing real-time monitoring systems. This paper discusses the development of a wireless sensor network(WSN) to detect landslides, which includes the design, development and implementation of a WSN for real time monitoring, the development of the algorithms needed that will enable efficient data collection and data aggregation, and the network requirements of the deployed landslide detection system. The actual deployment of the testbed is in the Idukki district of the Southern state of Kerala, India, a region known for its heavy rainfall, steep slopes, and frequent landslides.",
"title": ""
},
{
"docid": "d03dd25a421282d2f51427a51d0a5ab9",
"text": "This paper presents a comparison of classification methods for linguistic typology for the purpose of expanding an extensive, but sparse language resource: the World Atlas of Language Structures (WALS) (Dryer and Haspelmath, 2013). We experimented with a variety of regression and nearest-neighbor methods for use in classification over a set of 325 languages and six syntactic rules drawn from WALS. To classify each rule, we consider the typological features of the other five rules; linguistic features extracted from a word-aligned Bible in each language; and genealogical features (genus and family) of each language. In general, we find that propagating the majority label among all languages of the same genus achieves the best accuracy in label prediction. Following this, a logistic regression model that combines typological and linguistic features offers the next best performance. Interestingly, this model actually outperforms the majority labels among all languages of the same family.",
"title": ""
},
{
"docid": "0b8f4d14483d8fca51f882759f3194ad",
"text": "Verbs play a critical role in the meaning of sentences, but these ubiquitous words have received little attention in recent distributional semantics research. We introduce SimVerb-3500, an evaluation resource that provides human ratings for the similarity of 3,500 verb pairs. SimVerb-3500 covers all normed verb types from the USF free-association database, providing at least three examples for every VerbNet class. This broad coverage facilitates detailed analyses of how syntactic and semantic phenomena together influence human understanding of verb meaning. Further, with significantly larger development and test sets than existing benchmarks, SimVerb-3500 enables more robust evaluation of representation learning architectures and promotes the development of methods tailored to verbs. We hope that SimVerb-3500 will enable a richer understanding of the diversity and complexity of verb semantics and guide the development of systems that can effectively represent and interpret this meaning.",
"title": ""
},
{
"docid": "22dbd531b0769ad678533beba78fe12b",
"text": "A axial-force/torque motor (AFTM) establishes a completely new bearingless drive concept. The presented Lorentz-force-type actuator features a compact and integrated design using a very specific permanent-magnet excitation system and a concentric nonoverlapping air-gap stator winding. The end windings of the bent air-core coils, which are shaped in a circumferential rotor direction, provide active axial suspension forces. Thus, no additional (bearing) coils are needed for stable axial levitation. The four remaining degrees of freedom of the rotor are stabilized by passive magnetic ring bearings. This paper concentrates on the determination of the lumped parameters for the dynamic system modeling of the AFTM. After introducing a coordinate transformation for the decoupling of the control variables, the axial suspension force, and the drive torque, the relations for coil dimensioning are developed, followed by a discussion of the coil turn number selection process. Active levitation forces and drive torque specifications both must be concurrently fulfilled at a nominal rotor speed with only one common winding system, respecting several electrical, thermal, and mechanical boundaries likewise. Provided that the stator winding topology is designed properly, a simple closed-loop control strategy permits the autonomous manipulation of both control variables. A short presentation of the first experimental setup highlights the possible fields of application for the compact drive concept.",
"title": ""
},
{
"docid": "91b96fd6754a97b69488632a4d1d602e",
"text": "Face Super-Resolution (SR) is a domain-specific superresolution problem. The facial prior knowledge can be leveraged to better super-resolve face images. We present a novel deep end-to-end trainable Face Super-Resolution Network (FSRNet), which makes use of the geometry prior, i.e., facial landmark heatmaps and parsing maps, to super-resolve very low-resolution (LR) face images without well-aligned requirement. Specifically, we first construct a coarse SR network to recover a coarse high-resolution (HR) image. Then, the coarse HR image is sent to two branches: a fine SR encoder and a prior information estimation network, which extracts the image features, and estimates landmark heatmaps/parsing maps respectively. Both image features and prior information are sent to a fine SR decoder to recover the HR image. To generate realistic faces, we also propose the Face Super-Resolution Generative Adversarial Network (FSRGAN) to incorporate the adversarial loss into FSRNet. Further, we introduce two related tasks, face alignment and parsing, as the new evaluation metrics for face SR, which address the inconsistency of classic metrics w.r.t. visual perception. Extensive experiments show that FSRNet and FSRGAN significantly outperforms state of the arts for very LR face SR, both quantitatively and qualitatively.",
"title": ""
},
{
"docid": "9644fc8b65e73a4754d258b206bab3eb",
"text": "Load balancing is a critical issue for the efficient operation of peer-to-peer networks. We give two new load-balancing protocols whose provable performance guarantees are within a constant factor of optimal. Our protocols refine the consistent hashing data structure that underlies the Chord (and Koorde) P2P network. Both preserve Chord's logarithmic query time and near-optimal data migration cost.Consistent hashing is an instance of the distributed hash table (DHT) paradigm for assigning items to nodes in a peer-to-peer system: items and nodes are mapped to a common address space, and nodes have to store all items residing closeby in the address space.Our first protocol balances the distribution of the key address space to nodes, which yields a load-balanced system when the DHT maps items \"randomly\" into the address space. To our knowledge, this yields the first P2P scheme simultaneously achieving O(log n) degree, O(log n) look-up cost, and constant-factor load balance (previous schemes settled for any two of the three).Our second protocol aims to directly balance the distribution of items among the nodes. This is useful when the distribution of items in the address space cannot be randomized. We give a simple protocol that balances load by moving nodes to arbitrary locations \"where they are needed.\" As an application, we use the last protocol to give an optimal implementation of a distributed data structure for range searches on ordered data.",
"title": ""
},
{
"docid": "e983898bf746ecb5ea8590f3d3beb337",
"text": "The concept of Bitcoin was first introduced by an unknown individual (or a group of people) named Satoshi Nakamoto before it was released as open-source software in 2009. Bitcoin is a peer-to-peer cryptocurrency and a decentralized worldwide payment system for digital currency where transactions take place among users without any intermediary. Bitcoin transactions are performed and verified by network nodes and then registered in a public ledger called blockchain, which is maintained by network entities running Bitcoin software. To date, this cryptocurrency is worth close to U.S. $150 billion and widely traded across the world. However, as Bitcoin’s popularity grows, many security concerns are coming to the forefront. Overall, Bitcoin security inevitably depends upon the distributed protocols-based stimulant-compatible proof-of-work that is being run by network entities called miners, who are anticipated to primarily maintain the blockchain (ledger). As a result, many researchers are exploring new threats to the entire system, introducing new countermeasures, and therefore anticipating new security trends. In this survey paper, we conduct an intensive study that explores key security concerns. We first start by presenting a global overview of the Bitcoin protocol as well as its major components. Next, we detail the existing threats and weaknesses of the Bitcoin system and its main technologies including the blockchain protocol. Last, we discuss current existing security studies and solutions and summarize open research challenges and trends for future research in Bitcoin security.",
"title": ""
},
{
"docid": "ce34bb39b5048f80e849ddf7a476d89d",
"text": "We propose a method to find the community structure in complex networks based on an extremal optimization of the value of modularity. The method outperforms the optimal modularity found by the existing algorithms in the literature giving a better understanding of the community structure. We present the results of the algorithm for computer-simulated and real networks and compare them with other approaches. The efficiency and accuracy of the method make it feasible to be used for the accurate identification of community structure in large complex networks.",
"title": ""
},
{
"docid": "4ea7fba21969fcdd2de9b4e918583af8",
"text": "Due to the explosion in the size of the WWW[1,4,5] it becomes essential to make the crawling process parallel. In this paper we present an architecture for a parallel crawler that consists of multiple crawling processes called as C-procs which can run on network of workstations. The proposed crawler is scalable, is resilient against system crashes and other event. The aim of this architecture is to efficiently and effectively crawl the current set of publically indexable web pages so that we can maximize the download rate while minimizing the overhead from parallelization",
"title": ""
},
{
"docid": "3cafa4dc683b279a9f68aa8fc50ac6ab",
"text": "This paper presents a new topic of automatic recognition of bank note serial numbers, which will not only facilitate the prevention of forgery crimes, but also have a positive impact on the economy. Among all the different currencies, we focus on the study of RMB (renminbi bank note, the paper currency used in China) serial numbers. For evaluation, a new database NUST-RMB2013 has been collected from scanned RMB images, which contains the serial numbers of 35 categories with 17,262 training samples and 7000 testing samples in total. We comprehensively implement and compare two classic and one newly merged feature extraction methods (namely gradient direction feature, Gabor feature, and CNN trainable feature), four different types of well-known classifiers (SVM, LDF, MQDF, and CNN), and five multiple classifier combination strategies (including a specially designed novel cascade method). To further improve the recognition accuracy, the enhancements of three different kinds of distortions have been tested. Since high reliability is more important than accuracy in financial applications, we introduce three rejection schemes of first rank measurement (FRM), first two ranks measurement (FTRM) and linear discriminant analysis based measurement (LDAM). All the classifiers and classifier combination schemes are combined with different rejection criteria. A novel cascade rejection measurement achieves 100% reliability with less rejection rate compared with the existing methods. Experimental results show that MQDF reaches the accuracy of 99.59% using the gradient direction feature trained with gray level normalized data; the cascade classifier combination achieves the best performance of 99.67%. The distortions have been proved to be very helpful because the performances of CNNs boost at least 0.5% by training with transformed samples. With the cascade rejection method, 100% reliability has been obtained by rejecting 1.01% test samples. & 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e13d935c4950323a589dce7fd5bce067",
"text": "Worker reliability is a longstanding issue in crowdsourcing, and the automatic discovery of high quality workers is an important practical problem. Most previous work on this problem mainly focuses on estimating the quality of each individual worker jointly with the true answer of each task. However, in practice, for some tasks, worker quality could be associated with some explicit characteristics of the worker, such as education level, major and age. So the following question arises: how do we automatically discover related worker attributes for a given task, and further utilize the findings to improve data quality? In this paper, we propose a general crowd targeting framework that can automatically discover, for a given task, if any group of workers based on their attributes have higher quality on average; and target such groups, if they exist, for future work on the same task. Our crowd targeting framework is complementary to traditional worker quality estimation approaches. Furthermore, an advantage of our framework is that it is more budget efficient because we are able to target potentially good workers before they actually do the task. Experiments on real datasets show that the accuracy of final prediction can be improved significantly for the same budget (or even less budget in some cases). Our framework can be applied to many real word tasks and can be easily integrated in current crowdsourcing platforms.",
"title": ""
},
{
"docid": "6d5fb6a470fe80cbe0dac7c80f8fa9d8",
"text": "BACKGROUND\nThe Sexual Assault Resource Center (SARC) in Perth, Western Australia provides free 24-hour medical, forensic, and counseling services to persons aged over 13 years following sexual assault.\n\n\nOBJECTIVE\nThe aim of this research was to design a data management system that maintains accurate quality information on all sexual assault cases referred to SARC, facilitating audit and peer-reviewed research.\n\n\nMETHODS\nThe work to develop SARC Medical Services Clinical Information System (SARC-MSCIS) took place during 2007-2009 as a collaboration between SARC and Curtin University, Perth, Western Australia. Patient demographics, assault details, including injury documentation, and counseling sessions were identified as core data sections. A user authentication system was set up for data security. Data quality checks were incorporated to ensure high-quality data.\n\n\nRESULTS\nAn SARC-MSCIS was developed containing three core data sections having 427 data elements to capture patient's data. Development of the SARC-MSCIS has resulted in comprehensive capacity to support sexual assault research. Four additional projects are underway to explore both the public health and criminal justice considerations in responding to sexual violence. The data showed that 1,933 sexual assault episodes had occurred among 1881 patients between January 1, 2009 and December 31, 2015. Sexual assault patients knew the assailant as a friend, carer, acquaintance, relative, partner, or ex-partner in 70% of cases, with 16% assailants being a stranger to the patient.\n\n\nCONCLUSION\nThis project has resulted in the development of a high-quality data management system to maintain information for medical and forensic services offered by SARC. This system has also proven to be a reliable resource enabling research in the area of sexual violence.",
"title": ""
},
{
"docid": "4561fbad61cb72cd7e631fd2f72de762",
"text": "Graphene has been hailed as a wonderful material in electronics, and recently, it is the rising star in photonics, as well. The wonderful optical properties of graphene afford multiple functions of signal emitting, transmitting, modulating, and detection to be realized in one material. In this paper, the latest progress in graphene photonics, plasmonics, and broadband optoelectronic devices is reviewed. Particular emphasis is placed on the ability to integrate graphene photonics onto the silicon platform to afford broadband operation in light routing and amplification, which involves components like polarizer, modulator, and photodetector. Other functions like saturable absorber and optical limiter are also reviewed.",
"title": ""
}
] |
scidocsrr
|
57764c67196cebde8e4caf99dca4a24e
|
Meticillin-resistant Staphylococcus pseudintermedius: clinical challenge and treatment options.
|
[
{
"docid": "816bd541fd0f5cc509ad69cfed5d3e6e",
"text": "It has been shown that people and pets can harbour identical strains of meticillin-resistant (MR) staphylococci when they share an environment. Veterinary dermatology practitioners are a professional group with a high incidence of exposure to animals infected by Staphylococcus spp. The objective of this study was to assess the prevalence of carriage of MR Staphylococcus aureus (MRSA), MR S. pseudintermedius (MRSP) and MR S. schleiferi (MRSS) by veterinary dermatology practice staff and their personal pets. A swab technique and selective media were used to screen 171 veterinary dermatology practice staff and their respective pets (258 dogs and 160 cats). Samples were shipped by over-night carrier. Human subjects completed a 22-question survey of demographic and epidemiologic data relevant to staphylococcal transmission. The 171 human-source samples yielded six MRSA (3.5%), nine MRSP (5.3%) and four MRSS (2.3%) isolates, while 418 animal-source samples yielded eight MRSA (1.9%) 21 MRSP (5%), and two MRSS (0.5%) isolates. Concordant strains (genetically identical by pulsed-field gel electrophoresis) were isolated from human subjects and their respective pets in four of 171 (2.9%) households: MRSA from one person/two pets and MRSP from three people/three pets. In seven additional households (4.1%), concordant strains were isolated from only the pets: MRSA in two households and MRSP in five households. There were no demographic or epidemiologic factors statistically associated with either human or animal carriage of MR staphylococci, or with concordant carriage by person-pet or pet-pet pairs. Lack of statistical associations may reflect an underpowered study.",
"title": ""
}
] |
[
{
"docid": "ced8cc9329777cc01cdb3e91772a29c2",
"text": "Manually annotating clinical document corpora to generate reference standards for Natural Language Processing (NLP) systems or Machine Learning (ML) is a timeconsuming and labor-intensive endeavor. Although a variety of open source annotation tools currently exist, there is a clear opportunity to develop new tools and assess functionalities that introduce efficiencies into the process of generating reference standards. These features include: management of document corpora and batch assignment, integration of machine-assisted verification functions, semi-automated curation of annotated information, and support of machine-assisted pre-annotation. The goals of reducing annotator workload and improving the quality of reference standards are important considerations for development of new tools. An infrastructure is also needed that will support largescale but secure annotation of sensitive clinical data as well as crowdsourcing which has proven successful for a variety of annotation tasks. We introduce the Extensible Human Oracle Suite of Tools (eHOST) http://code.google.com/p/ehost that provides such functionalities that when coupled with server integration offer an end-to-end solution to carry out small or large scale as well as crowd sourced annotation projects.",
"title": ""
},
{
"docid": "fda6123a2e3c67329b689c13bda8feda",
"text": "We introduce “Talk The Walk”, the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a “guide” and a “tourist”) that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide’s map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task.",
"title": ""
},
{
"docid": "242686291812095c5320c1c8cae6da27",
"text": "In the modern high-performance transceivers, mixers (both upand down-converters) are required to have large dynamic range in order to meet the system specifications. The lower end of the dynamic range is indicated by the noise floor which tells how small a signal may be processed while the high end is determined by the non-linearity which causes distortion, compression and saturation of the signal and thus limits the maximum signal amplitude input to the mixer for the undistorted output. Compared to noise, the linearity requirement is much higher in mixer design because it is generally the limiting factor to the transceiver’s linearity. Therefore, this paper will emphasize on the linearization techniques for analog multipliers and mixers, which have been a very active research area since 1960s.",
"title": ""
},
{
"docid": "27461d678b02fff9a1aaf5621f5b347a",
"text": "Despite the promise of technology in education, many practicing teachers face several challenges when trying to effectively integrate technology into their classroom instruction. Additionally, while national statistics cite a remarkable improvement in access to computer technology tools in schools, teacher surveys show consistent declines in the use and integration of computer technology to enhance student learning. This article reports on primary technology integration barriers that mathematics teachers identified when using technology in their classrooms. Suggestions to overcome some of these barriers are also provided.",
"title": ""
},
{
"docid": "408d3db3b2126990611fdc3a62a985ea",
"text": "Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.",
"title": ""
},
{
"docid": "fde2eb0bb00d2173719f9f5715faa9b9",
"text": "Multi-instance learning, like other machine learning and data mining tasks, requires distance metrics. Although metric learning methods have been studied for many years, metric learners for multi-instance learning remain almost untouched. In this paper, we propose a framework called Multi-Instance MEtric Learning (MIMEL) to learn an appropriate distance under the multi-instance setting. The distance metric between two bags is defined using the Mahalanobis distance function. The problem is formulated by minimizing the KL divergence between two multivariate Gaussians under the constraints of maximizing the between-class bag distance and minimizing the within-class bag distance. To exploit the mechanism of how instances determine bag labels in multi-instance learning, we design a nonparametric density-estimation-based weighting scheme to assign higher “weights†to the instances that are more likely to be positive in positive bags. The weighting scheme itself has a small workload, which adds little extra computing costs to the proposed framework. Moreover, to further boost the classification accuracy, a kernel version of MIMEL is presented. We evaluate MIMEL, using not only several typical multi-instance tasks, but also two activity recognition datasets. The experimental results demonstrate that MIMEL achieves better classification accuracy than many state-of-the-art distance based algorithms or kernel methods for multi-instance learning.",
"title": ""
},
{
"docid": "6a0f8e2858ca4c67b281d4130ded4eba",
"text": "This paper presents a novel design of an omni-directional spherical robot that is mainly composed of a lucent ball-shaped shell and an internal driving unit. Two motors installed on the internal driving unit are used to realize the omni-directional motion of the robot, one motor is used to make the robot move straight and another is used to make it steer. Its motion analysis, kinematics modeling and controllability analysis are presented. Two typical motion simulations show that the unevenness of ground has big influence on the open-loop trajectory tracking of the robot. At last, motion performance of this spherical robot in several typical environments is presented with prototype experiments.",
"title": ""
},
{
"docid": "3ff330ab15962b09584e1636de7503ea",
"text": "By diverting funds away from legitimate partners (a.k.a publishers), click fraud represents a serious drain on advertising budgets and can seriously harm the viability of the internet advertising market. As such, fraud detection algorithms which can identify fraudulent behavior based on user click patterns are extremely valuable. Based on the BuzzCity dataset, we propose a novel approach for click fraud detection which is based on a set of new features derived from existing attributes. The proposed model is evaluated in terms of the resulting precision, recall and the area under the ROC curve. A final ensemble model based on 6 different learning algorithms proved to be stable with respect to all 3 performance indicators. Our final model shows improved results on training, validation and test datasets, thus demonstrating its generalizability to different datasets.",
"title": ""
},
{
"docid": "bf49aafc53fd8083d5f4e7e015443a71",
"text": "BACKGROUND\nThree intrinsic connectivity networks in the brain, namely the central executive, salience, and default mode networks, have been identified as crucial to the understanding of higher cognitive functioning, and the functioning of these networks has been suggested to be impaired in psychopathology, including posttraumatic stress disorder (PTSD).\n\n\nOBJECTIVE\n1) To describe three main large-scale networks of the human brain; 2) to discuss the functioning of these neural networks in PTSD and related symptoms; and 3) to offer hypotheses for neuroscientifically-informed interventions based on treating the abnormalities observed in these neural networks in PTSD and related disorders.\n\n\nMETHODS\nLiterature relevant to this commentary was reviewed.\n\n\nRESULTS\nIncreasing evidence for altered functioning of the central executive, salience, and default mode networks in PTSD has been demonstrated. We suggest that each network is associated with specific clinical symptoms observed in PTSD, including cognitive dysfunction (central executive network), increased and decreased arousal/interoception (salience network), and an altered sense of self (default mode network). Specific testable neuroscientifically-informed treatments aimed to restore each of these neural networks and related clinical dysfunction are proposed.\n\n\nCONCLUSIONS\nNeuroscientifically-informed treatment interventions will be essential to future research agendas aimed at targeting specific PTSD and related symptoms.",
"title": ""
},
{
"docid": "fd7b9c5ab4379a277f0b39d6f54bcc18",
"text": "This article presents two probabilistic models for answering ranking in the multilingual question-answering (QA) task, which finds exact answers to a natural language question written in different languages. Although some probabilistic methods have been utilized in traditional monolingual answer-ranking, limited prior research has been conducted for answer-ranking in multilingual question-answering with formal methods. This article first describes a probabilistic model that predicts the probabilities of correctness for individual answers in an independent way. It then proposes a novel probabilistic method to jointly predict the correctness of answers by considering both the correctness of individual answers as well as their correlations. As far as we know, this is the first probabilistic framework that proposes to model the correctness and correlation of answer candidates in multilingual question-answering and provide a novel approach to design a flexible and extensible system architecture for answer selection in multilingual QA. An extensive set of experiments were conducted to show the effectiveness of the proposed probabilistic methods in English-to-Chinese and English-to-Japanese cross-lingual QA, as well as English, Chinese, and Japanese monolingual QA using TREC and NTCIR questions.",
"title": ""
},
{
"docid": "ff5f7772a0a578cfe1dd08816af8e2e7",
"text": "Moisture-associated skin damage (MASD) occurs when there is prolonged exposure of the skin to excessive amounts of moisture from incontinence, wound exudate or perspiration. Incontinenceassociated dermatitis (IAD) relates specifically to skin breakdown from faecal and/or urinary incontinence (Beeckman et al, 2009), and has been defined as erythema and oedema of the skin surface, which may be accompanied by bullae with serous exudate, erosion or secondary cutaneous infection (Gray et al, 2012). IAD may also be referred to as a moisture lesion, moisture ulcer, perineal dermatitis or diaper dermatitis (Ousey, 2012). The effects of ageing on the skin are known to affect skin integrity, as is the underdeveloped nature of very young skin; as such, elderly patients and neonates are particularly vulnerable to damage from moisture (Voegeli, 2007). The increase in moisture resulting from episodes of incontinence is exacerbated due to bacterial and enzymatic activity associated with urine and faeces, particularly when both are present, which leads to an increase in skin pH alongside over-hydration of the skin surface. This damages the natural protection of the acid mantle, the skin’s naturally acidic pH, which is an important defence mechanism against external irritants and microorganisms. This damage leads to the breakdown of vulnerable skin and increased susceptibility to secondary infection (Beeckman et al, 2009). It has become well recognised that presence of IAD greatly increases the likelihood of pressure ulcer development, since over-hydrated skin is much more susceptible to damage by extrinsic factors such as pressure, friction and shear as compared with normal skin (Clarke et al, 2010). While it is important to firstly understand that pressure and moisture damage are separate aetiologies and, secondly, be able to recognise the clinical differences in presentation, one of the factors to consider for prevention of pressure ulcers is minimising exposure to moisture/ incontinence. Another important consideration with IAD is the effect on the patient. IAD can be painful and debilitating, and has been associated with reduced quality of life. It can also be time-consuming and expensive to treat, which has an impact on clinical resources and financial implications (Doughty et al, 2012). IAD is known to impact on direct Incontinence-associated dermatitis (IAD) relates to skin breakdown from exposure to urine or faeces, and its management involves implementation of structured skin care regimens that incorporate use of appropriate skin barrier products to protect the skin from exposure to moisture and irritants. Medi Derma-Pro Foam & Spray Cleanser and Medi Derma-Pro Skin Protectant Ointment are recent additions to the Total Barrier ProtectionTM (Medicareplus International) range indicated for management of moderateto-severe IAD and other moisture-associated skin damage. This article discusses a series of case studies and product evaluations performed to determine clinical outcomes and clinician feedback based on use of the Medi Derma-Pro skin barrier products to manage IAD. Results showed improvements to patients’ skin condition following use of Medi Derma-Pro, and the cleanser and skin protectant ointment were considered better than or the same as the most equivalent products on the market.",
"title": ""
},
{
"docid": "94da9faa1ff45cfc5c8a8032d89cdd8f",
"text": "The RNA genome of human immunodeficiency virus type 1 (HIV-1) is enclosed in a cone-shaped capsid shell that disassembles following cell entry via a process known as uncoating. During HIV-1 infection, the capsid is important for reverse transcription and entry of the virus into the target cell nucleus. The small molecule PF74 inhibits HIV-1 infection at early stages by binding to the capsid and perturbing uncoating. However, the mechanism by which PF74 alters capsid stability and reduces viral infection is presently unknown. Here, we show, using atomic force microscopy (AFM), that binding of PF74 to recombinant capsid-like assemblies and to HIV-1 isolated cores stabilizes the capsid in a concentration-dependent manner. At a PF74 concentration of 10 μM, the mechanical stability of the core is increased to a level similar to that of the intrinsically hyperstable capsid mutant E45A. PF74 also prevented the complete disassembly of HIV-1 cores normally observed during 24 h of reverse transcription. Specifically, cores treated with PF74 only partially disassembled: the main body of the capsid remained intact and stiff, and a cap-like structure dissociated from the narrow end of the core. Moreover, the internal coiled structure that was observed to form during reverse transcription in vitro persisted throughout the duration of the measurement (∼24 h). Our results provide direct evidence that PF74 directly stabilizes the HIV-1 capsid lattice, thereby permitting reverse transcription while interfering with a late step in uncoating.IMPORTANCE The capsid-binding small molecule PF74 inhibits HIV-1 infection at early stages and perturbs uncoating. However, the mechanism by which PF74 alters capsid stability and reduces viral infection is presently unknown. We recently introduced time-lapse atomic force microscopy to study the morphology and physical properties of HIV-1 cores during the course of reverse transcription. Here, we apply this AFM methodology to show that PF74 prevented the complete disassembly of HIV-1 cores normally observed during 24 h of reverse transcription. Specifically, cores with PF74 only partially disassembled: the main body of the capsid remained intact and stiff, but a cap-like structure dissociated from the narrow end of the core HIV-1. Our result provides direct evidence that PF74 directly stabilizes the HIV-1 capsid lattice.",
"title": ""
},
{
"docid": "ab00048e25a3852c1f75014ac2529d52",
"text": "This paper describes a reference-clock-free, high-time-resolution on-chip timing jitter measurement circuit using a self-referenced clock and a cascaded time difference amplifier (TDA) with duty-cycle compensation. A self-referenced clock with multiples of the clock period removes the necessity for a reference clock. In addition, a cascaded TDA with duty-cycle compensation improves the time resolution while maintaining the operational speed. Test chips were designed and fabricated using 65 nm and 40 nm CMOS technologies. The areas occupied by the circuits are 1350 μm2 (with TDA, 65 nm), 490 μm2 (without TDA, 65 nm), 470 μm2 (with TDA, 40 nm), and 112 μm2 (without TDA, 40 nm). Time resolutions of 31 fs (with TDA) and 2.8 ps (without TDA) were achieved. The proposed new architecture provides all-digital timing jitter measurement with fine-time-resolution measurement capability, without requiring a reference clock.",
"title": ""
},
{
"docid": "85fb2cb99e5320ddde182d6303164da8",
"text": "The uncertainty about whether, in China, the genus Melia (Meliaceae) consists of one species (M. azedarach Linnaeus) or two species (M. azedarach and M. toosendan Siebold & Zuccarini) remains to be clarified. Although the two putative species are morphologically distinguishable, genetic evidence supporting their taxonomic separation is lacking. Here, we investigated the genetic diversity and population structure of 31 Melia populations across the natural distribution range of the genus in China. We used sequence-related amplified polymorphism (SRAP) markers and obtained 257 clearly defined bands amplified by 20 primers from 461 individuals. The polymorphic loci (P) varied from 35.17% to 76.55%, with an overall mean of 58.24%. Nei’s gene diversity (H) ranged from 0.13 to 0.31, with an overall mean of 0.20. Shannon’s information index (I) ranged from 0.18 to 0.45, with an average of 0.30. The genetic diversity of the total population (Ht) and within populations (Hs) was 0.37 ̆ 0.01 and 0.20 ̆ 0.01, respectively. Population differentiation was substantial (Gst = 0.45), and gene flow was low. Of the total variation, 31.41% was explained by differences among putative species, 19.17% among populations within putative species, and 49.42% within populations. Our results support the division of genus Melia into two species, which is consistent with the classification based on the morphological differentiation.",
"title": ""
},
{
"docid": "2c14b3968aadadaa62f569acccb37d46",
"text": "The main objective of this paper is to review the technologies and models used in the Automatic music transcription system. Music Information Retrieval is a key problem in the field of music signal analysis and this can be achieved with the use of music transcription systems. It has proven to be a very difficult issue because of the complex and deliberately overlapped spectral structure of musical harmonies. Generally, the music transcription systems branched as automatic and semi-automatic approaches based on the user interventions needed in the transcription system. Among these we give a close view of the automatic music transcription systems. Different models and techniques were proposed so far in the automatic music transcription systems. However the performance of the systems derived till now not completely matched to the performance of a human expert. In this paper we go through the techniques used previously for the music transcription and discuss the limitations with them. Also, we give some directions for the enhancement of the music transcription system and this can be useful for the researches to develop fully automatic music transcription system.",
"title": ""
},
{
"docid": "2a39202664217724ea0a49ceb83a82af",
"text": "This article proposes a competitive divide-and-conquer algorithm for solving large-scale black-box optimization problems for which there are thousands of decision variables and the algebraic models of the problems are unavailable. We focus on problems that are partially additively separable, since this type of problem can be further decomposed into a number of smaller independent subproblems. The proposed algorithm addresses two important issues in solving large-scale black-box optimization: (1) the identification of the independent subproblems without explicitly knowing the formula of the objective function and (2) the optimization of the identified black-box subproblems. First, a Global Differential Grouping (GDG) method is proposed to identify the independent subproblems. Then, a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is adopted to solve the subproblems resulting from its rotation invariance property. GDG and CMA-ES work together under the cooperative co-evolution framework. The resultant algorithm, named CC-GDG-CMAES, is then evaluated on the CEC’2010 large-scale global optimization (LSGO) benchmark functions, which have a thousand decision variables and black-box objective functions. The experimental results show that, on most test functions evaluated in this study, GDG manages to obtain an ideal partition of the index set of the decision variables, and CC-GDG-CMAES outperforms the state-of-the-art results. Moreover, the competitive performance of the well-known CMA-ES is extended from low-dimensional to high-dimensional black-box problems.",
"title": ""
},
{
"docid": "335a551d08afd6af7d90b35b2df2ecc4",
"text": "The interpretation of colonic biopsies related to inflammatory conditions can be challenging because the colorectal mucosa has a limited repertoire of morphologic responses to various injurious agents. Only few processes have specific diagnostic features, and many of the various histological patterns reflect severity and duration of the disease. Importantly the correlation with endoscopic and clinical information is often cardinal to arrive at a specific diagnosis in many cases.",
"title": ""
},
{
"docid": "a0aeb6f8f888fe53e7de4bad385c55fe",
"text": "Data transmission, storage and processing are the integral parts of today’s information systems. Transmission and storage of huge volume of data is a critical task in spite of the advancements in the integrated circuit technology and communication. In order to store and transmit such a data as it is, requires larger memory and increased bandwidth utilization. This in turn increases the hardware and transmission cost. Hence, before storage or transmission the size of data has to be reduced without affecting the information content of the data. Among the various encoding algorithms, the Lempel Ziv Marcov chain Algorithm (LZMA) algorithm which is used in 7zip was proved to be effective in unknown byte stream compression for reliable lossless data compression. However the encoding speed of software based coder is slow compared to the arrival time of real time data. Hence hardware implementation is needed since number of instructions processed per unit time depends directly on system clock. The aim of this work is to implement the LZMA algorithm on SPARTAN 3E FPGA to design hardware encoder/decoder with reduces circuit size and cost of storage. General Terms Data Compression, VLSI",
"title": ""
},
{
"docid": "eb2e440b20fa3a3d99f70f4b89f6c216",
"text": "The National Library of Medicine (NLM) is developing a digital chest X-ray (CXR) screening system for deployment in resource constrained communities and developing countries worldwide with a focus on early detection of tuberculosis. A critical component in the computer-aided diagnosis of digital CXRs is the automatic detection of the lung regions. In this paper, we present a nonrigid registration-driven robust lung segmentation method using image retrieval-based patient specific adaptive lung models that detects lung boundaries, surpassing state-of-the-art performance. The method consists of three main stages: 1) a content-based image retrieval approach for identifying training images (with masks) most similar to the patient CXR using a partial Radon transform and Bhattacharyya shape similarity measure, 2) creating the initial patient-specific anatomical model of lung shape using SIFT-flow for deformable registration of training masks to the patient CXR, and 3) extracting refined lung boundaries using a graph cuts optimization approach with a customized energy function. Our average accuracy of 95.4% on the public JSRT database is the highest among published results. A similar degree of accuracy of 94.1% and 91.7% on two new CXR datasets from Montgomery County, MD, USA, and India, respectively, demonstrates the robustness of our lung segmentation approach.",
"title": ""
},
{
"docid": "d047231a67ca02c525d174b315a0838d",
"text": "The goal of this article is to review the progress of three-electron spin qubits from their inception to the state of the art. We direct the main focus towards the exchange-only qubit (Bacon et al 2000 Phys. Rev. Lett. 85 1758-61, DiVincenzo et al 2000 Nature 408 339) and its derived versions, e.g. the resonant exchange (RX) qubit, but we also discuss other qubit implementations using three electron spins. For each three-spin qubit we describe the qubit model, the envisioned physical realization, the implementations of single-qubit operations, as well as the read-out and initialization schemes. Two-qubit gates and decoherence properties are discussed for the RX qubit and the exchange-only qubit, thereby completing the list of requirements for quantum computation for a viable candidate qubit implementation. We start by describing the full system of three electrons in a triple quantum dot, then discuss the charge-stability diagram, restricting ourselves to the relevant subsystem, introduce the qubit states, and discuss important transitions to other charge states (Russ et al 2016 Phys. Rev. B 94 165411). Introducing the various qubit implementations, we begin with the exchange-only qubit (DiVincenzo et al 2000 Nature 408 339, Laird et al 2010 Phys. Rev. B 82 075403), followed by the RX qubit (Medford et al 2013 Phys. Rev. Lett. 111 050501, Taylor et al 2013 Phys. Rev. Lett. 111 050502), the spin-charge qubit (Kyriakidis and Burkard 2007 Phys. Rev. B 75 115324), and the hybrid qubit (Shi et al 2012 Phys. Rev. Lett. 108 140503, Koh et al 2012 Phys. Rev. Lett. 109 250503, Cao et al 2016 Phys. Rev. Lett. 116 086801, Thorgrimsson et al 2016 arXiv:1611.04945). The main focus will be on the exchange-only qubit and its modification, the RX qubit, whose single-qubit operations are realized by driving the qubit at its resonant frequency in the microwave range similar to electron spin resonance. Two different types of two-qubit operations are presented for the exchange-only qubits which can be divided into short-ranged and long-ranged interactions. Both of these interaction types are expected to be necessary in a large-scale quantum computer. The short-ranged interactions use the exchange coupling by placing qubits next to each other and applying exchange-pulses (DiVincenzo et al 2000 Nature 408 339, Fong and Wandzura 2011 Quantum Inf. Comput. 11 1003, Setiawan et al 2014 Phys. Rev. B 89 085314, Zeuch et al 2014 Phys. Rev. B 90 045306, Doherty and Wardrop 2013 Phys. Rev. Lett. 111 050503, Shim and Tahan 2016 Phys. Rev. B 93 121410), while the long-ranged interactions use the photons of a superconducting microwave cavity as a mediator in order to couple two qubits over long distances (Russ and Burkard 2015 Phys. Rev. B 92 205412, Srinivasa et al 2016 Phys. Rev. B 94 205421). The nature of the three-electron qubit states each having the same total spin and total spin in z-direction (same Zeeman energy) provides a natural protection against several sources of noise (DiVincenzo et al 2000 Nature 408 339, Taylor et al 2013 Phys. Rev. Lett. 111 050502, Kempe et al 2001 Phys. Rev. A 63 042307, Russ and Burkard 2015 Phys. Rev. B 91 235411). The price to pay for this advantage is an increase in gate complexity. We also take into account the decoherence of the qubit through the influence of magnetic noise (Ladd 2012 Phys. Rev. B 86 125408, Mehl and DiVincenzo 2013 Phys. Rev. B 87 195309, Hung et al 2014 Phys. Rev. 
B 90 045308), in particular dephasing due to the presence of nuclear spins, as well as dephasing due to charge noise (Medford et al 2013 Phys. Rev. Lett. 111 050501, Taylor et al 2013 Phys. Rev. Lett. 111 050502, Shim and Tahan 2016 Phys. Rev. B 93 121410, Russ and Burkard 2015 Phys. Rev. B 91 235411, Fei et al 2015 Phys. Rev. B 91 205434), fluctuations of the energy levels on each dot due to noisy gate voltages or the environment. Several techniques are discussed which partly decouple the qubit from magnetic noise (Setiawan et al 2014 Phys. Rev. B 89 085314, West and Fong 2012 New J. Phys. 14 083002, Rohling and Burkard 2016 Phys. Rev. B 93 205434) while for charge noise it is shown that it is favorable to operate the qubit on the so-called '(double) sweet spots' (Taylor et al 2013 Phys. Rev. Lett. 111 050502, Shim and Tahan 2016 Phys. Rev. B 93 121410, Russ and Burkard 2015 Phys. Rev. B 91 235411, Fei et al 2015 Phys. Rev. B 91 205434, Malinowski et al 2017 arXiv: 1704.01298), which are least susceptible to noise, thus providing a longer lifetime of the qubit.",
"title": ""
}
] |
scidocsrr
|
d51e4e4373fd0a31e668d52596497efc
|
DeepGRU: Deep Gesture Recognition Utility
|
[
{
"docid": "af572a43542fde321e18675213f635ae",
"text": "The representation of 3D pose plays a critical role for 3D action and gesture recognition. Rather than representing a 3D pose directly by its joint locations, in this paper, we propose a Deformable Pose Traversal Convolution Network that applies one-dimensional convolution to traverse the 3D pose for its representation. Instead of fixing the receptive field when performing traversal convolution, it optimizes the convolution kernel for each joint, by considering contextual joints with various weights. This deformable convolution better utilizes the contextual joints for action and gesture recognition and is more robust to noisy joints. Moreover, by feeding the learned pose feature to a LSTM, we perform end-to-end training that jointly optimizes 3D pose representation and temporal sequence recognition. Experiments on three benchmark datasets validate the competitive performance of our proposed method, as well as its efficiency and robustness to handle noisy joints of pose.",
"title": ""
}
] |
[
{
"docid": "92c735a70f5e6ee8ce7fd6f5c0a097c5",
"text": "In this paper, a near-optimum active rectifier is proposed to achieve well-optimized power conversion efficiency (PCE) and voltage conversion ratio (VCR) under various process, voltage, temperature (PVT) and loading conditions. The near-optimum operation includes: eliminated reverse current loss and maximized conduction time achieved by the proposed sampling-based real-time calibrations with automatic circuit-delay compensation for both on-and off-time of active diodes considering PVT variations; and power stage optimizations with adaptive sizing over a wide loading range. The design is fabricated in TSMC 65 nm process with standard I/O devices. Measurement results show more than 36% and 17% improvement in PCE and VCR, respectively, by the proposed techniques. A peak PCE of 94.8% with an 80 Ω loading, a peak VCR of 98.7% with 1 kΩ loading, and a maximum output power of 248.1 mW are achieved with 2.5 V input amplitude.",
"title": ""
},
{
"docid": "8f57603ee7ca4421e111f716e1205322",
"text": "By experiments on cells (neurons, hepatocytes, and fibroblasts) that are targets for thyroid hormones and a randomized clinical trial on iatrogenic hyperthyroidism, we validated the concept that L-carnitine is a peripheral antagonist of thyroid hormone action. In particular, L-carnitine inhibits both triiodothyronine (T3) and thyroxine (T4) entry into the cell nuclei. This is relevant because thyroid hormone action is mainly mediated by specific nuclear receptors. In the randomized trial, we showed that 2 and 4 grams per day of oral L-carnitine are capable of reversing hyperthyroid symptoms (and biochemical changes in the hyperthyroid direction) as well as preventing (or minimizing) the appearance of hyperthyroid symptoms (or biochemical changes in the hyperthyroid direction). It is noteworthy that some biochemical parameters (thyrotropin and urine hydroxyproline) were refractory to the L-carnitine inhibition of thyroid hormone action, while osteocalcin changed in the hyperthyroid direction, but with a beneficial end result on bone. A very recent clinical observation proved the usefulness of L-carnitine in the most serious form of hyperthyroidism: thyroid storm. Since hyperthyroidism impoverishes the tissue deposits of carnitine, there is a rationale for using L-carnitine at least in certain clinical settings.",
"title": ""
},
{
"docid": "11ed66cfb1a686ce46b1ad0ec6cf5d13",
"text": "OBJECTIVE\nTo evaluate a novel ultrasound measurement, the prefrontal space ratio (PFSR), in second-trimester trisomy 21 and euploid fetuses.\n\n\nMETHODS\nStored three-dimensional volumes of fetal profiles from 26 trisomy 21 fetuses and 90 euploid fetuses at 15-25 weeks' gestation were examined. A line was drawn between the leading edge of the mandible and the maxilla (MM line) and extended in front of the forehead. The ratio of the distance between the leading edge of the skull and that of the skin (d(1)) to the distance between the skin and the point where the MM line was intercepted (d(2)) was calculated (d(2)/d(1)). The distributions of PFSR in trisomy 21 and euploid fetuses were compared, and the relationship with gestational age in each group was evaluated by Spearman's rank correlation coefficient (r(s) ).\n\n\nRESULTS\nThe PFSR in trisomy 21 fetuses (mean, 0.36; range, 0-0.81) was significantly lower than in euploid fetuses (mean, 1.48; range, 0.85-2.95; P < 0.001 (Mann-Whitney U-test)). There was no significant association between PFSR and gestational age in either trisomy 21 (r(s) = 0.25; 95% CI, - 0.15 to 0.58) or euploid (r(s) = 0.06; 95% CI, - 0.15 to 0.27) fetuses.\n\n\nCONCLUSION\nThe PFSR appears to be a highly sensitive and specific marker of trisomy 21 in the second trimester of pregnancy.",
"title": ""
},
{
"docid": "c4044ab0e304c3bc5cf92995438cbe3d",
"text": "Several recent research efforts in the biometrics have focused on developing personal identification using very low-resolution imaging resulting from widely deployed surveillance cameras and mobile devices. Identification of human faces using such low-resolution imaging has shown promising results and has shown its utility for range of applications (surveillance). This paper investigates contactless identification of such low resolution (∼ 50 dpi) fingerprint images acquired using webcam. The acquired images are firstly subjected to robust preprocessing steps to extract region of interest and normalize uneven illumination. We extract localized feature information and effectively incorporate this local information into matching stage. The experimental results are presented on two session database of 156 subjects acquired over a period of 11 months and achieve average rank-one identification accuracy of 93.97%. The achieved results are highly promising to invite attention for range of applications, including surveillance, and sprung new directions for further research efforts.",
"title": ""
},
{
"docid": "6981598efd4a70f669b5abdca47b7ea1",
"text": "The in-flight alignment is a critical stage for airborne inertial navigation system/Global Positioning System (INS/GPS) applications. The alignment task is usually carried out by the Kalman filtering technique that necessitates a good initial attitude to obtain a satisfying performance. Due to the airborne dynamics, the in-flight alignment is much more difficult than the alignment on the ground. An optimization-based coarse alignment approach that uses GPS position/velocity as input, founded on the newly-derived velocity/position integration formulae is proposed. Simulation and flight test results show that, with the GPS lever arm well handled, it is potentially able to yield the initial heading up to 1 deg accuracy in 10 s. It can serve as a nice coarse in-flight alignment without any prior attitude information for the subsequent fine Kalman alignment. The approach can also be applied to other applications that require aligning the INS on the run.",
"title": ""
},
{
"docid": "4851b83b4ef6efa36777c28be8548c8d",
"text": "The finite element methodology has become a standard framework for approximating the solution to the Poisson-Boltzmann equation in many biological applications. In this article, we examine the numerical efficacy of least-squares finite element methods for the linearized form of the equations. In particular, we highlight the utility of a first-order form, noting optimality, control of the flux variables, and flexibility in the formulation, including the choice of elements. We explore the impact of weighting and the choice of elements on conditioning and adaptive refinement. In a series of numerical experiments, we compare the finite element methods when applied to the problem of computing the solvation free energy for realistic molecules of varying size.",
"title": ""
},
{
"docid": "74273502995ceaac87737d274379d7dc",
"text": "Majority of the systems designed to handle big RDF data rely on a single high-end computer dedicated to a certain RDF dataset and do not easily scale out, at the same time several clustered solution were tested and both the features and the benchmark results were unsatisfying. In this paper we describe a system designed to tackle such issues, a system that connects RDF4J and Apache HBase in order to receive an extremely scalable RDF store.",
"title": ""
},
{
"docid": "573b563cfc7eb96552a906fb9263ea6d",
"text": "Supply chain is complex today. Multi-echelon, highly disjointed, and geographically spread are some of the cornerstones of today’s supply chain. All these together with different governmental policies and human behavior make it almost impossible to probe incidents and trace events in case of supply chain disruptions. In effect, an end-to-end supply chain, from the most basic raw material to the final product in a customer’s possession, is opaque. The inherent cost involved in managing supply chain intermediaries, their reliability, traceability, and transparency further complicate the supply chain. The solution to such complicated problems lies in improving supply chain transparency. This is now possible with the concept of blockchain. The usage of blockchain in a financial transaction is well known. This paper reviews blockchain technology, which is changing the face of supply chain and bringing in transparency and authenticity. This paper first discusses the history and evolution of blockchain from the bitcoin network, and goes on to explore the protocols. The author takes a deep dive into the design of blockchain, exploring its five pillars and three-layered architecture, which enables most of the blockchains today. With the architecture, the author focuses on the applications, use cases, road map, and challenges for blockchain in the supply chain domain as well as the synergy of blockchain with enterprise applications. It analyzes the integration of the enterprise resource planning (ERP) system of the supply chain domain with blockchain. It also explores the three distinct growth areas: ERP-blockchain supply chain use cases, the middleware for connecting the blockchain with ERP, and blockchain as a service (BaaS). The paper ends with a brief conclusion and a discussion.",
"title": ""
},
{
"docid": "7b7a0b0b6a36789834c321d04c2e2f8f",
"text": "In the present paper we propose and evaluate a framework for detection and classification of plant leaf/stem diseases using image processing and neural network technique. The images of plant leaves affected by four types of diseases namely early blight, late blight, powdery-mildew and septoria has been considered for study and evaluation of feasibility of the proposed method. The color transformation structures were obtained by converting images from RGB to HSI color space. The Kmeans clustering algorithm was used to divide images into clusters for demarcation of infected area of the leaves. After clustering, the set of color and texture features viz. moment, mean, variance, contrast, correlation and entropy were extracted based on Color Co-occurrence Method (CCM). A feed forward back propagation neural network was configured and trained using extracted set of features and subsequently utilized for detection of leaf diseases. Keyword: Color Co-Occurrence Method, K-Means, Feed Forward Neural Network",
"title": ""
},
{
"docid": "b81b29c232fb9cb5dcb2dd7e31003d77",
"text": "Attendance and academic success are directly related in educational institutions. The continual absence of students in lecture, practical and tutorial is one of the major problems of decadence in the performance of academic. The authorized person needs to prohibit truancy for solving the problem. In existing system, the attendance is recorded by calling of the students’ name, signing on paper, using smart card and so on. These methods are easy to fake and to give proxy for the absence student. For solving inconvenience, fingerprint based attendance system with notification to guardian is proposed. The attendance is recorded using fingerprint module and stored it to the database via SD card. This system can calculate the percentage of attendance record monthly and store the attendance record in database for one year or more. In this system, attendance is recorded two times for one day and then it will also send alert message using GSM module if the attendance of students don’t have eight times for one week. By sending the alert message to the respective individuals every week, necessary actions can be done early. It can also reduce the cost of SMS charge and also have more attention for guardians. The main components of this system are Fingerprint module, Microcontroller, GSM module and SD card with SD card module. This system has been developed using Arduino IDE, Eclipse and MySQL Server.",
"title": ""
},
{
"docid": "2d718fdaecb286ef437b81d2a31383dd",
"text": "In this paper, we present a novel non-parametric polygonal approximation algorithm for digital planar curves. The proposed algorithm first selects a set of points (called cut-points) on the contour which are of very ‘high’ curvature. An optimization procedure is then applied to find adaptively the best fitting polygonal approximations for the different segments of the contour as defined by the cut-points. The optimization procedure uses one of the efficiency measures for polygonal approximation algorithms as the objective function. Our algorithm adaptively locates segments of the contour with different levels of details. The proposed algorithm follows the contour more closely where the level of details on the curve is high, while addressing noise by using suppression techniques. This makes the algorithm very robust for noisy, real-life contours having different levels of details. The proposed algorithm performs favorably when compared with other polygonal approximation algorithms using the popular shapes. In addition, the effectiveness of the algorithm is shown by measuring its performance over a large set of handwritten Arabic characters and MPEG7 CE Shape-1 Part B database. Experimental results demonstrate that the proposed algorithm is very stable and robust compared with other algorithms. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "40e3df04c4ca2bb2b11459c4dd2fcd10",
"text": "OBJECTIVES\nTo highlight the morphodynamic anatomical mechanisms that influence the results of rhinoplasty. To present the technical modalities of nasal dorsum preservation rhinoplasties. To determine the optimized respective surgical indications of the two main techniques of rhinoplasty: interruption rhinoplasty versus conservative rhinoplasty.\n\n\nMATERIALS AND METHODS\nBased on anatomical dissections and initial morphodynamic studies carried out on 100 anatomical specimens, a prospective study of a continuous series of 400 patients operated of primary reduction rhinoplasty or septo-rhinoplasty by one of authors (YS) has been undertaken over a period of ten years (1995-2005) in order to optimize the surgical management of the nasal hump. The studied parameters were: (1) surgical safety, (2) quality of early and late aesthetic result, (3) quality of the functional result, (4) ease of the technical realization of a possible secondary rhinoplasty. The other selected criteria were function of the different nasal hump morphotypes and the expressed wishes of the patients.\n\n\nRESULTS\nThe anatomical and morphodynamic studies made it possible to better understand the role of the \"M\" double-arch shape of the nose and the role of the cartilaginous buttresses not only as a function but also the anatomy and the aesthetics of the nose. It is necessary to preserve or repair the arche structures of the septo-triangular and alo-columellar sub-units. The conservative technique, whose results appear much more natural aesthetically, functionally satisfactory and durable over the long term, must be favoured in particular in man and in cases presenting a risk of collapse of the nasal valve.\n\n\nCONCLUSION\nThe rhinoplastician must be able to propose, according to the patient's wishes and in view of the results of the morphological analysis, the most adapted procedure according to his own surgical training but by supporting conservation of the osteo-cartilaginous vault whenever possible.",
"title": ""
},
{
"docid": "ab7e012ac498cf22896b0ff09d7e0d29",
"text": "This paper studies equilibrium asset pricing with liquidity risk — the risk arising from unpredictable changes in liquidity over time. It is shown that a security’s required return depends on its expected illiquidity and on the covariances of its own return and illiquidity with market return and market illiquidity. This gives rise to a liquidityadjusted capital asset pricing model. Further, if a security’s liquidity is persistent, a shock to its illiquidity results in low contemporaneous returns and high predicted future returns. Empirical evidence based on cross-sectional tests is consistent with liquidity risk being priced. We are grateful for conversations with Andrew Ang, Joseph Chen, Sergei Davydenko, Francisco Gomes, Joel Hasbrouck, Andrew Jackson, Tim Johnson, Martin Lettau, Anthony Lynch, Stefan Nagel, Dimitri Vayanos, Luis Viceira, Jeff Wurgler, and seminar participants at London Business School, New York University, the National Bureau of Economic Research (NBER) Summer Institute 2002, and the Five Star Conference 2002. We are especially indebted to Yakov Amihud for being generous with his time in guiding us through the empirical tests. All errors remain our own. Acharya is at London Business School and is a Research Affiliate of the Centre for Economic Policy Research (CEPR). Address: London Business School, Regent’s Park, London NW1 4SA, UK. Phone: +44 (0)20 7262 5050 x 3535. Fax: +44 (0)20 7724 3317. Email: [email protected]. Web: http://www.london.edu/faculty/vacharya Pedersen is at the Stern School of Business, New York University, 44 West Fourth Street, Suite 9-190, New York, NY 10012-1126. Phone: (212) 998-0359. Fax: (212) 995-4233. Email: [email protected]. Web: http://www.stern.nyu.edu/∼lpederse/",
"title": ""
},
{
"docid": "1e139fa9673f83ac619a5da53391b1ef",
"text": "In this paper we propose a new no-reference (NR) image quality assessment (IQA) metric using the recently revealed free-energy-based brain theory and classical human visual system (HVS)-inspired features. The features used can be divided into three groups. The first involves the features inspired by the free energy principle and the structural degradation model. Furthermore, the free energy theory also reveals that the HVS always tries to infer the meaningful part from the visual stimuli. In terms of this finding, we first predict an image that the HVS perceives from a distorted image based on the free energy theory, then the second group of features is composed of some HVS-inspired features (such as structural information and gradient magnitude) computed using the distorted and predicted images. The third group of features quantifies the possible losses of “naturalness” in the distorted image by fitting the generalized Gaussian distribution to mean subtracted contrast normalized coefficients. After feature extraction, our algorithm utilizes the support vector machine based regression module to derive the overall quality score. Experiments on LIVE, TID2008, CSIQ, IVC, and Toyama databases confirm the effectiveness of our introduced NR IQA metric compared to the state-of-the-art.",
"title": ""
},
{
"docid": "6037693a098f8f2713b2316c75447a50",
"text": "Presently, monoclonal antibodies (mAbs) therapeutics have big global sales and are starting to receive competition from biosimilars. We previously reported that the nano-surface and molecular-orientation limited (nSMOL) proteolysis which is optimal method for bioanalysis of antibody drugs in plasma. The nSMOL is a Fab-selective limited proteolysis, which utilize the difference of protease nanoparticle diameter (200 nm) and antibody resin pore diameter (100 nm). In this report, we have demonstrated that the full validation for chimeric antibody Rituximab bioanalysis in human plasma using nSMOL proteolysis. The immunoglobulin fraction was collected using Protein A resin from plasma, which was then followed by the nSMOL proteolysis using the FG nanoparticle-immobilized trypsin under a nondenaturing condition at 50°C for 6 h. After removal of resin and nanoparticles, Rituximab signature peptides (GLEWIGAIYPGNGDTSYNQK, ASGYTFTSYNMHWVK, and FSGSGSGTSYSLTISR) including complementarity-determining region (CDR) and internal standard P14R were simultaneously quantified by multiple reaction monitoring (MRM). This quantification of Rituximab using nSMOL proteolysis showed lower limit of quantification (LLOQ) of 0.586 µg/mL and linearity of 0.586 to 300 µg/mL. The intra- and inter-assay precision of LLOQ, low quality control (LQC), middle quality control (MQC), and high quality control (HQC) was 5.45-12.9% and 11.8, 5.77-8.84% and 9.22, 2.58-6.39 and 6.48%, and 2.69-7.29 and 4.77%, respectively. These results indicate that nSMOL can be applied to clinical pharmacokinetics study of Rituximab, based on the precise analysis.",
"title": ""
},
{
"docid": "679eb46c45998897b4f8e641530f44a7",
"text": "Workers in hazardous environments such as mining are constantly exposed to the health and safety hazards of dynamic and unpredictable conditions. One approach to enable them to manage these hazards is to provide them with situational awareness: real-time data (environmental, physiological, and physical location data) obtained from wireless, wearable, smart sensor technologies deployed at the work area. The scope of this approach is limited to managing the hazards of the immediate work area for prevention purposes; it does not include technologies needed after a disaster. Three critical technologies emerge and converge to support this technical approach: smart-wearable sensors, wireless sensor networks, and low-power embedded computing. The major focus of this report is on smart sensors and wireless sensor networks. Wireless networks form the infrastructure to support the realization of situational awareness; therefore, there is a significant focus on wireless networks. Lastly, the “Future Research” section pulls together the three critical technologies by proposing applications that are relevant to mining. The applications are injured miner (person-down) detection; a wireless, wearable remote viewer; and an ultrawide band smart environment that enables localization and tracking of humans and resources. The smart environment could provide location data, physiological data, and communications (video, photos, graphical images, audio, and text messages). Electrical engineer, Pittsburgh Research Laboratory, National Institute for Occupational Safety and Health, Pittsburgh, PA. President, The Designer-III Co., Franklin, PA. General engineer, Pittsburgh Research Laboratory (now with the National Personal Protective Technology Laboratory), National Institute for Occupational Safety and Health, Pittsburgh, PA. Supervisory general engineer, Pittsburgh Research Laboratory, National Institute for Occupational Safety and Health, Pittsburgh, PA.",
"title": ""
},
{
"docid": "c9d3def588f5f3dc95955635ebaa0d3d",
"text": "In this paper we propose a novel computer vision method for classifying human facial expression from low resolution images. Our method uses the bag of words representation. It extracts dense SIFT descriptors either from the whole image or from a spatial pyramid that divides the image into increasingly fine sub-regions. Then, it represents images as normalized (spatial) presence vectors of visual words from a codebook obtained through clustering image descriptors. Linear kernels are built for several choices of spatial presence vectors, and combined into weighted sums for multiple kernel learning (MKL). For machine learning, the method makes use of multi-class one-versus-all SVM on the MKL kernel computed using this representation, but with an important twist, the learning is local, as opposed to global – in the sense that, for each face with an unknown label, a set of neighbors is selected to build a local classification model, which is eventually used to classify only that particular face. Empirical results indicate that the use of presence vectors, local learning and spatial information improve recognition performance together by more than 5%. Finally, the proposed model ranked fourth in the Facial Expression Recognition Challenge, with an accuracy of 67.484% on the final test set. ICML 2013 Workshop on Representation Learning, Atlanta, Georgia, USA, 2013. Copyright 2013 by the author(s).",
"title": ""
},
{
"docid": "152182336e620ee94f24e3865b7b377f",
"text": "In Theory III we characterize with a mix of theory and experiments the generalization properties of Stochastic Gradient Descent in overparametrized deep convolutional networks. We show that Stochastic Gradient Descent (SGD) selects with high probability solutions that 1) have zero (or small) empirical error, 2) are degenerate as shown in Theory II and 3) have maximum generalization. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 123 1216. H.M. is supported in part by ARO Grant W911NF-15-10385.",
"title": ""
},
{
"docid": "3dc1598f8653c540e6e61daf2994b8ed",
"text": "Labeled graphs provide a natural way of representing entities, relationships and structures within real datasets such as knowledge graphs and protein interactions. Applications such as question answering, semantic search, and motif discovery entail efficient approaches for subgraph matching involving both label and structural similarities. Given the NP-completeness of subgraph isomorphism and the presence of noise, approximate graph matching techniques are required to handle queries in a robust and real-time manner. This paper presents a novel technique to characterize the subgraph similarity based on statistical significance captured by chi-square statistic. The statistical significance model takes into account the background structure and label distribution in the neighborhood of vertices to obtain the best matching subgraph and, therefore, robustly handles partial label and structural mismatches. Based on the model, we propose two algorithms, VELSET and NAGA, that, given a query graph, return the top-k most similar subgraphs from a (large) database graph. While VELSET is more accurate and robust to noise, NAGA is faster and more applicable for scenarios with low label noise. Experiments on large real-life graph datasets depict significant improvements in terms of accuracy and running time in comparison to the state-of-the-art methods.",
"title": ""
},
{
"docid": "340a506b8968efa5f775c26fd5841599",
"text": "One of the teaching methods available to teachers in the ‘andragogic’ model of teaching is the method of ‘Socratic Seminars’. This is a teacher-directed form of instruction in which questions are used as the sole method of teaching, placing students in the position of having to recognise the limits of their knowledge, and hopefully, motivating them to learn. This paper aims at initiating the discussion on the strengths and drawbacks of this method. Based on empirical research, the paper suggests that the Socratic method seems to be a very effective method for teaching adult learners, but should be used with caution depending on the personality of the learners.",
"title": ""
}
] |
scidocsrr
|
34bf860168f69f686a922b84967f01fa
|
A Collection of Definitions of Intelligence
|
[
{
"docid": "4fa25fd7088d9b624be75239d02cfc4b",
"text": "Intelligence is defined as that which produces successful behavior. Intelligence is assumed to result from natural selection. A model is proposed that integrates knowledge from research in both natural and artificial systems. The model consists of a hierarchical system architecture wherein: 1) control bandwidth decreases about an order of magnitude at each higher level, 2) perceptual resolution of spatial and temporal patterns contracts about an order-of-magnitude at each higher level, 3) goals expand in scope and planning horizons expand in space and time about an order-of-magnitude at each higher level, and 4) models of the world and memories of events expand their range in space and time by about an order-of-magnitude at each higher level. At each level, functional modules perform behavior generation (task decomposition planning and execution), world modeling, sensory processing, and value judgment. Sensory feedback control loops are closed at every level.",
"title": ""
}
] |
[
{
"docid": "3ec70222394018f1d889692ae850b5ca",
"text": "In this paper, we proposed an automatic method to segment text from complex background for recognition task. First, a rule-based sampling method is proposed to get portion of the text pixels. Then, the sampled pixels are used for training Gaussian mixture models of intensity and hue components in HSI color space. Finally, the trained GMMs together with the spatial connectivity information are used for segment all of text pixels form their background. We used the word recognition rate to evaluate the segmentation result. Experiments results show that the proposed algorithm can work fully automatically and performs much better than the traditional methods.",
"title": ""
},
{
"docid": "e65a6952138af2b7013473d9445b62a0",
"text": "Spectral envelope is one of the most important features that characterize the timbre of an instrument sound. However, it is difficult to use spectral information in the framework of conventional spectrogram decomposition methods. We overcome this problem by suggesting a simple way to provide a constraint on the spectral envelope calculated by linear prediction. In the first part of this study, we use a pre-trained spectral envelope of known instruments as the constraint. Then we apply the same idea to a blind scenario in which the instruments are unknown. The experimental results reveal that the proposed method outperforms the conventional methods.",
"title": ""
},
{
"docid": "5d23af3f778a723b97690f8bf54dfa41",
"text": "Software engineering techniques have been employed for many years to create software products. The selections of appropriate software development methodologies for a given project, and tailoring the methodologies to a specific requirement have been a challenge since the establishment of software development as a discipline. In the late 1990’s, the general trend in software development techniques has changed from traditional waterfall approaches to more iterative incremental development approaches with different combination of old concepts, new concepts, and metamorphosed old concepts. Nowadays, the aim of most software companies is to produce software in short time period with minimal costs, and within unstable, changing environments that inspired the birth of Agile. Agile software development practice have caught the attention of software development teams and software engineering researchers worldwide during the last decade but scientific research and published outcomes still remains quite scarce. Every agile approach has its own development cycle that results in technological, managerial and environmental changes in the software companies. This paper explains the values and principles of ten agile practices that are becoming more and more dominant in the software development industry. Agile processes are not always beneficial, they have some limitations as well, and this paper also discusses the advantages and disadvantages of Agile processes.",
"title": ""
},
{
"docid": "673bf6ecf9ae6fb61f7b01ff284c0a5f",
"text": "We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering.",
"title": ""
},
{
"docid": "b9226935d2802a7b9a23ce159190f525",
"text": "Accurate diagnosis is crucial for successful treatment of the brain tumor. Accordingly in this paper, we propose an intelligent content-based image retrieval (CBIR) system which retrieves similar pathology bearing magnetic resonance (MR) images of the brain from a medical database to assist the radiologist in the diagnosis of the brain tumor. A single feature vector will not perform well for finding similar images in the medical domain as images within the same disease class differ by severity, density and other such factors. To handle this problem, the proposed CBIR system uses a two-step approach to retrieve similar MR images. The first step classifies the query image as benign or malignant using the features that discriminate the classes. The second step then retrieves the most similar images within the predicted class using the features that distinguish the subclasses. In order to provide faster image retrieval, we propose an indexing method called clustering with principal component analysis (PCA) and KD-tree which groups subclass features into clusters using modified K-means clustering and separately reduces the dimensionality of each cluster using PCA. The reduced feature set is then indexed using a KD-tree. The proposed CBIR system is also made robust against misalignment that occurs during MR image acquisition. Experiments were carried out on a database consisting of 820 MR images of the brain tumor. The experimental results demonstrate the effectiveness of the proposed system and show the viability of clinical application.",
"title": ""
},
{
"docid": "ca9da9f8113bc50aaa79d654a9eaf95a",
"text": "Given an ensemble of randomized regression trees, it is possible to restructure them as a collection of multilayered neural networks with particular connection weights. Following this principle, we reformulate the random forest method of Breiman (2001) into a neural network setting, and in turn propose two new hybrid procedures that we call neural random forests. Both predictors exploit prior knowledge of regression trees for their architecture, have less parameters to tune than standard networks, and less restrictions on the geometry of the decision boundaries. Consistency results are proved, and substantial numerical evidence is provided on both synthetic and real data sets to assess the excellent performance of our methods in a large variety of prediction problems. Index Terms — Random forests, neural networks, ensemble methods, randomization, sparse networks. 2010 Mathematics Subject Classification: 62G08, 62G20, 68T05.",
"title": ""
},
{
"docid": "4ae81470bd0e5cd161add0ca5dacaf16",
"text": "Preserving the integrity of application data across updates is difficult if power outages and system crashes may occur during updates. Existing approaches such as relational databases and transactional key-value stores restrict programming flexibility by mandating narrow data access interfaces. We have designed, implemented, and evaluated an approach that strengthens the semantics of a standard operating system primitive while maintaining conceptual simplicity and supporting highly flexible programming: Failureatomic msync() commits changes to a memory-mapped file atomically, even in the presence of failures. Our Linux implementation of failure-atomic msync() has preserved application data integrity across hundreds of whole-machine power interruptions and exhibits good microbenchmark performance on both spinning disks and solid-state storage. Failure-atomic msync() supports higher layers of fully general programming abstraction, e.g., a persistent heap that easily slips beneath the C++ Standard Template Library. An STL <map> built atop failure-atomic msync() outperforms several local key-value stores that support transactional updates. We integrated failure-atomic msync() into the Kyoto Tycoon key-value server by modifying exactly one line of code; our modified server reduces response times by 26--43% compared to Tycoon's existing transaction support while providing the same data integrity guarantees. Compared to a Tycoon server setup that makes almost no I/O (and therefore provides no support for data durability and integrity over failures), failure-atomic msync() incurs a three-fold response time increase on a fast Flash-based SSD---an acceptable cost of data reliability for many.",
"title": ""
},
{
"docid": "496bdd85a0aebb64d2f2b36c2050eb3a",
"text": "This research derives, implements, tunes and compares selected path tracking methods for controlling a car-like robot along a predetermined path. The scope includes commonly used m ethods found in practice as well as some theoretical methods found in various literature from other areas of rese arch. This work reviews literature and identifies important path tracking models and control algorithms from the vast back ground and resources. This paper augments the literature with a comprehensive collection of important path tracking idea s, a guide to their implementations and, most importantly, an independent and realistic comparison of the perfor mance of these various approaches. This document does not catalog all of the work in vehicle modeling and control; only a selection that is perceived to be important ideas when considering practical system identification, ease of implementation/tuning and computational efficiency. There are several other methods that meet this criteria, ho wever they are deemed similar to one or more of the approaches presented and are not included. The performance r esults, analysis and comparison of tracking methods ultimately reveal that none of the approaches work well in all applications a nd that they have some complementary characteristics. These complementary characteristics lead to an idea that a combination of methods may be useful for more general applications. Additionally, applications for which the methods in this paper do not provide adequate solutions are identified.",
"title": ""
},
{
"docid": "638e0059bf390b81de2202c22427b937",
"text": "Oral and gastrointestinal mucositis is a toxicity of many forms of radiotherapy and chemotherapy. It has a significant impact on health, quality of life and economic outcomes that are associated with treatment. It also indirectly affects the success of antineoplastic therapy by limiting the ability of patients to tolerate optimal tumoricidal treatment. The complex pathogenesis of mucositis has only recently been appreciated and reflects the dynamic interactions of all of the cell and tissue types that comprise the epithelium and submucosa. The identification of the molecular events that lead to treatment-induced mucosal injury has provided targets for mechanistically based interventions to prevent and treat mucositis.",
"title": ""
},
{
"docid": "f4e3192354e0eaa9811d429ea916927c",
"text": "This paper explores a behavior planning approach to automatically generate realistic motions for animated characters. Motion clips are abstracted as high-level behaviors and associated with a behavior finite-state machine (FSM) that defines the movement capabilities of a virtual character. During runtime, motion is generated automatically by a planning algorithm that performs a global search of the FSM and computes a sequence of behaviors for the character to reach a user-designated goal position. Our technique can generate interesting animations using a relatively small amount of data, making it attractive for resource-limited game platforms. It also scales efficiently to large motion databases, because the search performance is primarily dependent on the complexity of the behavior FSM rather than on the amount of data. Heuristic cost functions that the planner uses to evaluate candidate motions provide a flexible framework from which an animator can control character preferences for certain types of behavior. We show results of synthesized animations involving up to one hundred human and animal characters planning simultaneously in both static and dynamic environments.",
"title": ""
},
{
"docid": "a6099b515198a9c09e5a7b772bcef412",
"text": "The foundation, accomplishments, and proliferation of behavior therapy have been fueled largely by the movement's grounding in behavioral principles and theories. Ivan P. Pavlov's discovery of conditioning principles was essential to the founding of behavior therapy in the 1950s and continues to be central to modern behavior therapy. Pavlov's major legacy to behavior therapy was his discovery of \"experimental neuroses\", shown by his students M.N. Eroféeva and N.R. Shenger-Krestovnikova to be produced and eliminated through the principles of conditioning and counterconditioning. In this article, the Pavlovian origins of behavior therapy are assessed, and the relevance of conditioning principles to modern behavior therapy are analyzed. It is shown that Pavlovian conditioning represents far more than a systematic basic learning paradigm. It is also an essential theoretical foundation for the theory and practice of behavior therapy.",
"title": ""
},
{
"docid": "79041480e35083e619bd804423459f2b",
"text": "Dynamic pricing is the dynamic adjustment of prices to consumers depending upon the value these customers attribute to a product or service. Today’s digital economy is ready for dynamic pricing; however recent research has shown that the prices will have to be adjusted in fairly sophisticated ways, based on sound mathematical models, to derive the benefits of dynamic pricing. This article attempts to survey different models that have been used in dynamic pricing. We first motivate dynamic pricing and present underlying concepts, with several examples, and explain conditions under which dynamic pricing is likely to succeed. We then bring out the role of models in computing dynamic prices. The models surveyed include inventory-based models, data-driven models, auctions, and machine learning. We present a detailed example of an e-business market to show the use of reinforcement learning in dynamic pricing.",
"title": ""
},
{
"docid": "f86a64373a8a4bb510b92f5c38ed403e",
"text": "In recent years, in-memory key-value storage systems have become more and more popular in solving real-time and interactive tasks. Compared with disks, memories have much higher throughput and lower latency which enables them to process data requests with much higher performance. However, since memories have much smaller capacity than disks, how to expand the capacity of in-memory storage system while maintain its high performance become a crucial problem. At the same time, since data in memories are non-persistent, the data may be lost when the system is down. In this paper, we make a case study with Redis, which is one popular in-memory key-value storage system. We find that although the latest release of Redis support clustering so that data can be stored in distributed nodes to support a larger storage capacity, its performance is limited by its decentralized design that clients usually need two connections to get their request served. To make the system more scalable, we propose a Clientside Key-to-Node Caching method that can help direct request to the right service node. Experimental results show that by applying this technique, it can significantly improve the system's performance by near 2 times. We also find that although Redis supports data replication on slave nodes to ensure data safety, it still gets a chance of losing a part of the data due to a weak consistency between master and slave nodes that its defective order of data replication and request reply may lead to losing data without notifying the client. To make it more reliable, we propose a Master-slave Semi Synchronization method which utilizes TCP protocol to ensure the order of data replication and request reply so that when a client receives an \"OK\" message, the corresponding data must have been replicated. With a significant improvement in data reliability, its performance overhead is limited within 5%.",
"title": ""
},
{
"docid": "f99d0e24dece8b2de287b7d86c483f83",
"text": "Recently, the Task Force on Process Mining released the Process Mining Manifesto. The manifesto is supported by 53 organizations and 77 process mining experts contributed to it. The active contributions from end-users, tool vendors, consultants, analysts, and researchers illustrate the growing relevance of process mining as a bridge between data mining and business process modeling. This paper summarizes the manifesto and explains why process mining is a highly relevant, but also very challenging, research area. This way we hope to stimulate the broader ACM SIGKDD community to look at process-centric knowledge discovery.",
"title": ""
},
{
"docid": "b7e5028bc6936e75692fbcebaacebb2c",
"text": "dent processes and domain-specific knowledge. Until recently, information extraction has leaned heavily on domain knowledge, which requires either manual engineering or manual tagging of examples (Miller et al. 1998; Soderland 1999; Culotta, McCallum, and Betz 2006). Semisupervised approaches (Riloff and Jones 1999, Agichtein and Gravano 2000, Rosenfeld and Feldman 2007) require only a small amount of hand-annotated training, but require this for every relation of interest. This still presents a knowledge engineering bottleneck, when one considers the unbounded number of relations in a diverse corpus such as the web. Shinyama and Sekine (2006) explored unsupervised relation discovery using a clustering algorithm with good precision, but limited scalability. The KnowItAll research group is a pioneer of a new paradigm, Open IE (Banko et al. 2007, Banko and Etzioni 2008), that operates in a totally domain-independent manner and at web scale. An Open IE system makes a single pass over its corpus and extracts a diverse set of relational tuples without requiring any relation-specific human input. Open IE is ideally suited to corpora such as the web, where the target relations are not known in advance and their number is massive. Articles",
"title": ""
},
{
"docid": "67ae045b8b9a8e181ed0a33b204528cf",
"text": "We report four experiments examining effects of instance similarity on the application of simple explicit rules. We found effects of similarity to illustrative exemplars in error patterns and reaction times. These effects arose even though participants were given perfectly predictive rules, the similarity manipulation depended entirely on rule-irrelevant features, and attention to exemplar similarity was detrimental to task performance. Comparison of results across studies suggests that the effects are mandatory, non-strategic and not subject to conscious control, and as a result, should be pervasive throughout categorization.",
"title": ""
},
{
"docid": "01a4b2be52e379db6ace7fa8ed501805",
"text": "The goal of our work is to complete the depth channel of an RGB-D image. Commodity-grade depth cameras often fail to sense depth for shiny, bright, transparent, and distant surfaces. To address this problem, we train a deep network that takes an RGB image as input and predicts dense surface normals and occlusion boundaries. Those predictions are then combined with raw depth observations provided by the RGB-D camera to solve for depths for all pixels, including those missing in the original observation. This method was chosen over others (e.g., inpainting depths directly) as the result of extensive experiments with a new depth completion benchmark dataset, where holes are filled in training data through the rendering of surface reconstructions created from multiview RGB-D scans. Experiments with different network inputs, depth representations, loss functions, optimization methods, inpainting methods, and deep depth estimation networks show that our proposed approach provides better depth completions than these alternatives.",
"title": ""
},
{
"docid": "336db7a816be8b331cffe7d5b7d7a365",
"text": "In this correspondence we present a special class of quasi-cyclic low-density parity-check (QC-LDPC) codes, called block-type LDPC (B-LDPC) codes, which have an efficient encoding algorithm due to the simple structure of their parity-check matrices. Since the parity-check matrix of a QC-LDPC code consists of circulant permutation matrices or the zero matrix, the required memory for storing it can be significantly reduced, as compared with randomly constructed LDPC codes. We show that the girth of a QC-LDPC code is upper-bounded by a certain number which is determined by the positions of circulant permutation matrices. The B-LDPC codes are constructed as irregular QC-LDPC codes with parity-check matrices of an almost lower triangular form so that they have an efficient encoding algorithm, good noise threshold, and low error floor. Their encoding complexity is linearly scaled regardless of the size of circulant permutation matrices.",
"title": ""
}
] |
scidocsrr
|
b6a59289dae5f1995adc6173b3928c57
|
Blockchain consensus mechanisms-the case of natural disasters
|
[
{
"docid": "ed41127bf43b4f792f8cbe1ec652f7b2",
"text": "Today, more than 100 blockchain projects created to transform government systems are being conducted in more than 30 countries. What leads countries rapidly initiate blockchain projects? I argue that it is because blockchain is a technology directly related to social organization; Unlike other technologies, a consensus mechanism form the core of blockchain. Traditionally, consensus is not the domain of machines but rather humankind. However, blockchain operates through a consensus algorithm with human intervention; once that consensus is made, it cannot be modified or forged. Through utilization of Lawrence Lessig’s proposition that “Code is law,” I suggest that blockchain creates “absolute law” that cannot be violated. This characteristic of blockchain makes it possible to implement social technology that can replace existing social apparatuses including bureaucracy. In addition, there are three close similarities between blockchain and bureaucracy. First, both of them are defined by the rules and execute predetermined rules. Second, both of them work as information processing machines for society. Third, both of them work as trust machines for society. Therefore, I posit that it is possible and moreover unavoidable to replace bureaucracy with blockchain systems. In conclusion, I suggest five principles that should be adhered to when we replace bureaucracy with the blockchain system: 1) introducing Blockchain Statute law; 2) transparent disclosure of data and source code; 3) implementing autonomous executing administration; 4) building a governance system based on direct democracy and 5) making Distributed Autonomous Government(DAG).",
"title": ""
}
] |
[
{
"docid": "3ef661f930df369767a7da8a192df85f",
"text": "We present MVE, the Multi-View Environment. MVE is an end-to-end multi-view geometry reconstruction software which takes photos of a scene as input and produces a surface triangle mesh as result. The system covers a structure-from-motion algorithm, multi-view stereo reconstruction, generation of extremely dense point clouds, and reconstruction of surfaces from point clouds. In contrast to most image-based geometry reconstruction approaches, our system is focused on reconstruction of multi-scale scenes, an important aspect in many areas such as cultural heritage. It allows to reconstruct large datasets containing some detailed regions with much higher resolution than the rest of the scene. Our system provides a graphical user interface for structure-from-motion reconstruction, visual inspection of images, depth maps, and rendering of scenes and meshes.",
"title": ""
},
{
"docid": "cc3b36d8026396a7a931f07ef9d3bcfb",
"text": "Planning an itinerary before traveling to a city is one of the most important travel preparation activities. In this paper, we propose a novel framework called TripPlanner, leveraging a combination of location-based social network (i.e., LBSN) and taxi GPS digital footprints to achieve personalized, interactive, and traffic-aware trip planning. First, we construct a dynamic point-of-interest network model by extracting relevant information from crowdsourced LBSN and taxi GPS traces. Then, we propose a two-phase approach for personalized trip planning. In the route search phase, TripPlanner works interactively with users to generate candidate routes with specified venues. In the route augmentation phase, TripPlanner applies heuristic algorithms to add user's preferred venues iteratively to the candidate routes, with the objective of maximizing the route score while satisfying both the venue visiting time and total travel time constraints. To validate the efficiency and effectiveness of the proposed approach, extensive empirical studies were performed on two real-world data sets from the city of San Francisco, which contain more than 391 900 passenger delivery trips generated by 536 taxis in a month and 110 214 check-ins left by 15 680 Foursquare users in six months.",
"title": ""
},
{
"docid": "82b628f4ce9e3d4a7ef8db114340e191",
"text": "Cervical cancer (CC) is a leading cause of death in women worldwide. Radiation therapy (RT) for CC is an effective alternative, but its toxicity remains challenging. Blueberry is amongst the most commonly consumed berries in the United States. We previously showed that resveratrol, a compound in red grapes, can be used as a radiosensitizer for prostate cancer. In this study, we found that the percentage of colonies, PCNA expression level and the OD value of cells from the CC cell line SiHa were all decreased in RT/Blueberry Extract (BE) group when compared to those in the RT alone group. Furthermore, TUNEL+ cells and the relative caspase-3 activity in the CC cells were increased in the RT/BE group compared to those in the RT alone group. The anti-proliferative effect of RT/BE on cancer cells correlated with downregulation of pro-proliferative molecules cyclin D and cyclin E. The pro-apoptotic effect of RT/BE correlated with upregulation of the pro-apoptotic molecule TRAIL. Thus, BE sensitizes SiHa cells to RT by inhibition of proliferation and promotion of apoptosis, suggesting that blueberry might be used as a potential radiosensitizer to treat CC.",
"title": ""
},
{
"docid": "6723049ea783b15426dc5335872e4f75",
"text": "A method of using magnetic torque rods to do 3axis spacecraft attitude control has been developed. The goal of this system is to achieve a nadir pointing accuracy on the order of 0.1 to 1.0 deg without the need for thrusters or wheels. The open-loop system is under-actuated because magnetic torque rods cannot torque about the local magnetic field direction. This direction moves in space as the spacecraft moves along an inclined orbit, and the resulting system is roughly periodic. Periodic controllers are designed using an asymptotic linear quadratic regulator technique. The control laws include integral action and saturation logic. This system's performance has been studied via analysis and simulation. The resulting closed-loop systems are robust with respect to parametric modeling uncertainty. They converge from initial attitude errors of 30 deg per axis, and they achieve steady-state pointing errors on the order of 0.5 to 1.0 deg in the presence of drag torques and unmodeled residual dipole moments. Introduction All spacecraft have an attitude stabilization system. They range from passive spin-stabilized 1 or gravitygradient stabilized 2 systems to fully active three-axis controlled systems . Pointing accuracies for such systems may range from 10 deg down to 10 deg or better, depending on the spacecraft design and on the types of sensors and actuators that it carries. The most accurate designs normally include momentum wheels or reaction wheels. This paper develops an active 3-axis attitude stabilization system for a nadir-pointing spacecraft. It uses only magnetic torque rods as actuators. Additional components of the system include appropriate attitude sensors and a magnetometer. The goal of this system is to achieve pointing accuracy that is better than a gravity gradient stabilization system, on the order of 0.1 to 1 deg. Such a system will weigh less than either a gravity-gradient system or a wheelbased system, and it will use less power than a wheel∗ Associate Professor, Sibley School of Mech. & Aero. Engr. Associate Fellow, AIAA. Copyright 2000 by Mark L. Psiaki. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission. based system. Thus, it will be ideal for small satellite applications, where weight and power budgets are severely restricted. There are two classic uses of magnetic torque rods in attitude control. One is for momentum management of wheel-based systems . The other is for angularmomentum and nutation control of spinning , momentum-biased , and dual-spin spacecraft . The present study is one of a growing number that consider active 3-axis magnetic attitude stabilization of a nadir-pointing spacecraft . Reference 5 also should be classified with this group because it uses similar techniques. Reference 7, the earliest such study, presents a 3-axis proportional-derivative control law. It computes a desired torque and projects it perpendicular to the Earth's magnetic field in order to determine the actual torque. Projection is necessary because the magnetic torque, nm, takes the form nm = m × b (1) where m is the magnetic dipole moment vector of the torque rods and b is the Earth's magnetic field. Equation (1) highlights the principal problem of magnetic-torque-based 3-axis attitude control: the system is under-actuated. A rigid spacecraft has 3 rotational degrees of freedom, but the torque rods can only torque about the 2 axes that are perpendicular to the magnetic field vector. 
The system is controllable if the orbit is inclined because the Earth's magnetic field vector rotates in space as the spacecraft moves around its orbit. It is a time-varying system that is approximately periodic. This system's under-actuation and its periodicity combine to create a challenging feedback controller design problem. The present problem is different from the problem of attitude control when thrusters or reaction wheels provide torque only about 2 axes. References 15 and 16 and others have addressed this alternate problem, in which the un-actuated direction is defined in spacecraft coordinates. For magnetic torques, the un-actuated direction does not rotate with the spacecraft. Various control laws have been considered for magnetic attitude control systems. Some of the controllers are similar to the original controller of Martel et al. . Time-varying Linear Quadratic Regulator (LQR) formulations have been tried , as has fuzzy control 9 and sliding-mode control . References 9 and 13 patch together solutions of time-",
"title": ""
},
{
"docid": "8519922a8cbb71f4c9ba8959731ce61d",
"text": "Convolutional neural networks (CNNs) have recently been applied successfully in large scale image classification competitions for photographs found on the Internet. As our brains are able to recognize objects in the images, there must be some regularities in the data that a neural network can utilize. These regularities are difficult to find an explicit set of rules for. However, by using a CNN and the backpropagation algorithm for learning, the neural network can learn to pick up on the features in the images that are characteristic for each class. Also, data regularities that are not visually obvious to us can be learned. CNNs are particularly useful for classifying data containing some spatial structure, like photographs and speech. In this paper, the technique is tested on SAR images of ships in harbour. The tests indicate that CNNs are promising methods for discriminating between targets in SAR images. However, the false alarm rate is quite high when introducing confusers in the tests. A big challenge in the development of target classification algorithms, especially in the case of SAR, is the lack of real data. This paper also describes tests using simulated SAR images of the same target classes as the real data in order to fill this data gap. The simulated images are made with the MOCEM software (developed by DGA), based on CAD models of the targets. The tests performed here indicate that simulated data can indeed be helpful in training a convolutional neural network to classify real SAR images.",
"title": ""
},
{
"docid": "abf47e7d497c83b015ad0ba818e17847",
"text": "The staggering amounts of content readily available to us via digital channels can often appear overwhelming. While much research has focused on aiding people at selecting relevant articles to read, only few approaches have been developed to assist readers in more efficiently reading an individual text. In this paper, we present HiText, a simple yet effective way of dynamically marking parts of a document in accordance with their salience. Rather than skimming a text by focusing on randomly chosen sentences, students and other readers can direct their attention to sentences determined to be important by our system. For this, we rely on a deep learning-based sentence ranking method. Our experiments show that this results in marked increases in user satisfaction and reading efficiency, as assessed using TOEFL-style reading comprehension tests.",
"title": ""
},
{
"docid": "9c41df95c11ec4bed3e0b19b20f912bb",
"text": "Text mining has been defined as “the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources” [6]. Many other industries and areas can also benefit from the text mining tools that are being developed by a number of companies. This paper provides an overview of the text mining tools and technologies that are being developed and is intended to be a guide for organizations who are looking for the most appropriate text mining techniques for their situation. This paper also concentrates to design text and data mining tool to extract the valuable information from curriculum vitae according to concerned requirements. The tool clusters the curriculum vitae into several segments which will help the public and private concerns for their recruitment. Rule based approach is used to develop the algorithm for mining and also it is implemented to extract the valuable information from the curriculum vitae on the web. Analysis of Curriculum vitae is until now, a costly and manual activity. It is subject to all typical variations and limitations in its quality, depending of who is doing it. Automating this analysis using algorithms might deliver much more consistency and preciseness to support the human experts. The experiments involve cooperation with many people having their CV online, as well as several recruiters etc. The algorithms must be developed and improved for processing of existing sets of semi-structured documents information retrieval under uncertainity about quality of the sources.",
"title": ""
},
{
"docid": "e9d987351816570b29d0144a6a7bd2ae",
"text": "One’s state of mind will influence her perception of the world and people within it. In this paper, we explore attitudes and behaviors toward online social media based on whether one is depressed or not. We conducted semistructured face-to-face interviews with 14 active Twitter users, half of whom were depressed and the other half non-depressed. Our results highlight key differences between the two groups in terms of perception towards online social media and behaviors within such systems. Non-depressed individuals perceived Twitter as an information consuming and sharing tool, while depressed individuals perceived it as a tool for social awareness and emotional interaction. We discuss several design implications for future social networks that could better accommodate users with depression and provide insights towards helping depressed users meet their needs through online social media.",
"title": ""
},
{
"docid": "f1ac14dd7efc1ef56d5aa51de465ee50",
"text": "The problem of discovering association rules has received considerable research attention and several fast algorithms for mining association rules have been developed. In practice, users are often interested in a subset of association rules. For example, they may only want rules that contain a specific item or rules that contain children of a specific item in a hierarchy. While such constraints can be applied as a postprocessing step, integrating them into the mining algorithm can dramatically reduce the execution time. We consider the problem of integrating constraints that n..,, l.....l,.... ,....,,....:,,, -1.~.. cl., -..s..a..-m e.. ..l.“,“, CUG Y”“Ac;Qu GnpLz:I)DIVua “YGI “Us: pGYaLcG “I OLJDciliLG of items into the association discovery algorithm. We present three integrated algorithms for mining association rules with item constraints and discuss their tradeoffs.",
"title": ""
},
{
"docid": "fdb23d6b43ef07761d90c3faeaefce5d",
"text": "With the advent of big data phenomenon in the world of data and its related technologies, the developments on the NoSQL databases are highly regarded. It has been claimed that these databases outperform their SQL counterparts. The aim of this study is to investigate the claim by evaluating the document-oriented MongoDB database with SQL in terms of the performance of common aggregated and non-aggregate queries. We designed a set of experiments with a huge number of operations such as read, write, delete, and select from various aspects in the two databases and on the same data for a typical e-commerce schema. The results show that MongoDB performs better for most operations excluding some aggregate functions. The results can be a good source for commercial and non-commercial companies eager to change the structure of the database used to provide their line-of-business services.",
"title": ""
},
{
"docid": "9f87ea8fd766f4b208ac142dcbbed4b2",
"text": "The dynamic marketplace in online advertising calls for ranking systems that are optimized to consistently promote and capitalize better performing ads. The streaming nature of online data inevitably makes an advertising system choose between maximizing its expected revenue according to its current knowledge in short term (exploitation) and trying to learn more about the unknown to improve its knowledge (exploration), since the latter might increase its revenue in the future. The exploitation and exploration (EE) tradeoff has been extensively studied in the reinforcement learning community, however, not been paid much attention in online advertising until recently. In this paper, we develop two novel EE strategies for online advertising. Specifically, our methods can adaptively balance the two aspects of EE by automatically learning the optimal tradeoff and incorporating confidence metrics of historical performance. Within a deliberately designed offline simulation framework we apply our algorithms to an industry leading performance based contextual advertising system and conduct extensive evaluations with real online event log data. The experimental results and detailed analysis reveal several important findings of EE behaviors in online advertising and demonstrate that our algorithms perform superiorly in terms of ad reach and click-through-rate (CTR).",
"title": ""
},
{
"docid": "398effb89faa1ac819ee5ae489908ed1",
"text": "There are many interpretations of quantum mechanics, and new ones continue to appear. The Many-Worlds Interpretation (MWI) introduced by Everett (1957) impresses me as the best candidate for the interpretation of quantum theory. My belief is not based on a philosophical affinity for the idea of plurality of worlds as in Lewis (1986), but on a judgment that the physical difficulties of other interpretations are more serious. However, the scope of this paper does not allow a comparative analysis of all alternatives, and my main purpose here is to present my version of MWI, to explain why I believe it is true, and to answer some common criticisms of MWI. The MWI is not a theory about many objective “worlds”. A mathematical formalism by itself does not define the concept of a “world”. The “world” is a subjective concept of a sentient observer. All (subjective) worlds are incorporated in one objective Universe. I think, however, that the name Many-Worlds Interpretation does represent this theory fairly well. Indeed, according to MWI (and contrary to the standard approach) there are many worlds of the sort we call in everyday life “the world”. And although MWI is not just an interpretation of quantum theory – it differs from the standard quantum theory in certain experimental predictions – interpretation is an essential part of MWI; it explains the tremendous gap between what we experience as our world and what appears in the formalism of the quantum state of the Universe. Schrödinger’s equation (the basic equation of quantum theory) predicts very accurately the results of experiments performed on microscopic systems. I shall argue in what follows that it also implies the existence of many worlds. The purpose of addition of the collapse postulate, which represents the difference between MWI and the standard approach, is to escape the implications of Schrödinger’s equation for the existence of many worlds. Today’s technology does not allow us to test the existence of the “other” worlds. So only God or “superman” (i.e., a superintelligence equipped with supertechnology) can take full",
"title": ""
},
{
"docid": "44f0a3e73ce1da840546600fde7fbabd",
"text": "Suggested Citation: Berens, Johannes; Oster, Simon; Schneider, Kerstin; Burghoff, Julian (2018) : Early Detection of Students at Risk Predicting Student Dropouts Using Administrative Student Data and Machine Learning Methods, Schumpeter Discussion Papers, No. 2018-006, University of Wuppertal, Schumpeter School of Business and Economics, Wuppertal, http://nbn-resolving.de/urn:nbn:de:hbz:468-20180719-085420-5",
"title": ""
},
{
"docid": "3abf10f8539840b1830f14d83a7d3ab0",
"text": "We consider two questions at the heart of machine learning; how can we predict if a minimum will generalize to the test set, and why does stochastic gradient descent find minima that generalize well? Our work responds to Zhang et al. (2016), who showed deep neural networks can easily memorize randomly labeled training data, despite generalizing well on real labels of the same inputs. We show that the same phenomenon occurs in small linear models. These observations are explained by the Bayesian evidence, which penalizes sharp minima but is invariant to model parameterization. We also demonstrate that, when one holds the learning rate fixed, there is an optimum batch size which maximizes the test set accuracy. We propose that the noise introduced by small mini-batches drives the parameters towards minima whose evidence is large. Interpreting stochastic gradient descent as a stochastic differential equation, we identify the “noise scale” g = (NB −1) ≈ N/B, where is the learning rate, N the training set size and B the batch size. Consequently the optimum batch size is proportional to both the learning rate and the size of the training set, Bopt ∝ N . We verify these predictions empirically.",
"title": ""
},
{
"docid": "70f672268ae0b3e0e344a4f515057e6b",
"text": "Murder-suicide, homicide-suicide, and dyadic death all refer to an incident where a homicide is committed followed by the perpetrator's suicide almost immediately or soon after the homicide. Homicide-suicides are relatively uncommon and vary from region to region. In the selected literature that we reviewed, shooting was the common method of killing and suicide, and only 3 cases of homicidal hanging involving child victims were identified. We present a case of dyadic death where the method of killing and suicide was hanging, and the victim was a young woman.",
"title": ""
},
{
"docid": "89d59a76e93339e1d779146d9ffbd41a",
"text": "Serious Games (SGs) are gaining an ever increasing interest for education and training. Exploiting the latest simulation and visualization technologies, SGs are able to contextualize the player’s experience in challenging, realistic environments, supporting situated cognition. However, we still miss methods and tools for effectively and deeply infusing pedagogy and instruction inside digital games. After presenting an overview of the state of the art of the SG taxonomies, the paper introduces the pedagogical theories and models most relevant to SGs and their implications on SG design. We also present a schema for a proper integration of games in education, supporting different goals in different steps of a formal education process. By analyzing a set of well-established SGs and formats, the paper presents the main mechanics and models that are being used in SG designs, with a particular focus on assessment, feedback and learning analytics. An overview of tools and models for SG design is also presented. Finally, based on the performed analysis, indications for future research in the field are provided.",
"title": ""
},
{
"docid": "c9f6de422e349ac1319b1017d2a6547b",
"text": "This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short-term impacts of increased openness appear mostly socially beneficial in expectation. The strategic implications of medium and long-term impacts are complex. The evaluation of long-term impacts, in particular, may depend on whether the objective is to benefit the present generation or to promote a time-neutral aggregate of well-being of future generations. Some forms of openness are plausibly positive on both counts (openness about safety measures, openness about goals). Others (openness about source code, science, and possibly capability) could lead to a tightening of the competitive situation around the time of the introduction of advanced AI, increasing the probability that winning the AI race is incompatible with using any safety method that incurs a delay or limits performance. We identify several key factors that must be taken into account by any well-founded opinion on the matter. Policy Implications • The global desirability of openness in AI development – sharing e.g. source code, algorithms, or scientific insights – depends – on complex tradeoffs. • A central concern is that openness could exacerbate a racing dynamic: competitors trying to be the first to develop advanced (superintelligent) AI may accept higher levels of existential risk in order to accelerate progress. • Openness may reduce the probability of AI benefits being monopolized by a small group, but other potential political consequences are more problematic. • Partial openness that enables outsiders to contribute to an AI project’s safety work and to supervise organizational plans and goals appears desirable. The goal of this paper is to conduct a preliminary analysis of the long-term strategic implications of openness in AI development. What effects would increased openness in AI development have, on the margin, on the long-term impacts of AI? Is the expected value for society of these effects positive or negative? Since it is typically impossible to provide definitive answers to this type of question, our ambition here is more modest: to introduce some relevant considerations and develop some thoughts on their weight and plausibility. Given recent interest in the topic of openness in AI and the absence (to our knowledge) of any academic work directly addressing this issue, even this modest ambition would offer scope for a worthwhile contribution. Openness in AI development can refer to various things. For example, we could use this phrase to refer to open source code, open science, open data, or to openness about safety techniques, capabilities, and organizational goals, or to a non-proprietary development regime generally. We will have something to say about each of those different aspects of openness – they do not all have the same strategic implications. But unless we specify otherwise, we will use the shorthand ‘openness’ to refer to the practice of releasing into the public domain (continuously and as promptly as is practicable) all relevant source code and platforms and publishing freely about algorithms and scientific insights and ideas gained in the course of the research. Currently, most leading AI developers operate with a high but not maximal degree of openness. 
AI researchers at Google, Facebook, Microsoft and Baidu regularly present their latest work at technical conferences and post it on preprint servers. So do researchers in academia. Sometimes, but not always, these publications are accompanied by a release of source code, which makes it easier for outside researchers to replicate the work and build on it. Each of the aforementioned companies have developed and released under open source licences source code for platforms that help researchers (and students and other interested folk) implement machine learning architectures. The movement of staff and interns is another important vector for the spread of ideas. The recently announced OpenAI initiative even has openness explicitly built into its brand identity.",
"title": ""
},
{
"docid": "69093927f11b5028f86322b458889596",
"text": "Although artificial neural network (ANN) usually reaches high classification accuracy, the obtained results sometimes may be incomprehensible. This fact is causing a serious problem in data mining applications. The rules that are derived from ANN are needed to be formed to solve this problem and various methods have been improved to extract these rules. Activation function is critical as the behavior and performance of an ANN model largely depends on it. So far there have been limited studies with emphasis on setting a few free parameters in the neuron activation function. ANN’s with such activation function seem to provide better fitting properties than classical architectures with fixed activation function neurons [Xu, S., & Zhang, M. (2005). Data mining – An adaptive neural network model for financial analysis. In Proceedings of the third international conference on information technology and applications]. In this study a new method that uses artificial immune systems (AIS) algorithm has been presented to extract rules from trained adaptive neural network. Two real time problems data were investigated for determining applicability of the proposed method. The data were obtained from University of California at Irvine (UCI) machine learning repository. The datasets were obtained from Breast Cancer disease and ECG data. The proposed method achieved accuracy values 94.59% and 92.31% for ECG and Breast Cancer dataset, respectively. It has been observed that these results are one of the best results comparing with results obtained from related previous studies and reported in UCI web sites. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "28bb04440e9f5d0bfe465ec9fe685eda",
"text": "Model transformations are at the heart of model driven engineering (MDE) and can be used in many different application scenarios. For instance, model transformations are used to integrate very large models. As a consequence, they are becoming more and more complex. However, these transformations are still developed manually. Several code patterns are implemented repetitively, increasing the probability of programming errors and reducing code reusability. There is not yet a complete solution that automates the development of model transformations. In this paper we propose a novel approach that uses matching transformations and weaving models to semi-automate the development of transformations. Matching transformations are a special kind of transformations that implement heuristics and algorithms to create weaving models. Weaving models are models that capture different kinds of relationships between models. Our solution enables to rapidly implement and to customize these heuristics. We combine different heuristics, and we propose a new metamodel-based heuristic that exploits metamodel data to automatically produce weaving models. The weaving models are derived into model integration transformations.",
"title": ""
},
{
"docid": "1919bad34819f8f1d92b53c04b6a3c85",
"text": "Reviews keep playing an increasingly important role in the decision process of buying products and booking hotels. However, the large amount of available information can be confusing to users. A more succinct interface, gathering only the most helpful reviews, can reduce information processing time and save effort. To create such an interface in real time, we need reliable prediction algorithms to classify and predict new reviews which have not been voted but are potentially helpful. So far such helpfulness prediction algorithms have benefited from structural aspects, such as the length and readability score. Since emotional words are at the heart of our written communication and are powerful to trigger listeners’ attention, we believe that emotional words can serve as important parameters for predicting helpfulness of review text. Using GALC, a general lexicon of emotional words associated with a model representing 20 different categories, we extracted the emotionality from the review text and applied supervised classification method to derive the emotion-based helpful review prediction. As the second contribution, we propose an evaluation framework comparing three different real-world datasets extracted from the most well-known product review websites. This framework shows that emotion-based methods are outperforming the structure-based approach, by up to 9%.",
"title": ""
}
] |
scidocsrr
|
c673eb37924c047f8fbf60ff49c0004b
|
Indoor and Outdoor Depth Imaging of Leaves With Time of Flight and stereo vision Sensors : Analysis and Comparison
|
[
{
"docid": "a1915a869616b9c8c2547f66ec89de13",
"text": "The harvest yield in vineyards can vary significantly from year to year and also spatially within plots due to variations in climate, soil conditions and pests. Fine grained knowledge of crop yields can allow viticulturists to better manage their vineyards. The current industry practice for yield prediction is destructive, expensive and spatially sparse - during the growing season sparse samples are taken and extrapolated to determine overall yield. We present an automated method that uses computer vision to detect and count grape berries. The method could potentially be deployed across large vineyards taking measurements at every vine in a non-destructive manner. Our berry detection uses both shape and visual texture and we can demonstrate detection of green berries against a green leaf background. Berry detections are counted and the eventual harvest yield is predicted. Results are presented for 224 vines (over 450 meters) of two different grape varieties and compared against the actual harvest yield as groundtruth. We calibrate our berry count to yield and find that we can predict yield of individual vineyard rows to within 9.8% of actual crop weight.",
"title": ""
},
{
"docid": "ab152b8a696519abb4406dd8f7c15407",
"text": "While real scenes produce a wide range of brightness variations, vision systems use low dynamic range image detectors that typically provide 8 bits of brightness data at each pixel. The resulting low quality images greatly limit what vision can accomplish today. This paper proposes a very simple method for significantly enhancing the dynamic range of virtually any imaging system. The basic principle is to simultaneously sample the spatial and exposure dimensions of image irradiance. One of several ways to achieve this is by placing an optical mask adjacent to a conventional image detector array. The mask has a pattern with spatially varying transmittance, thereby giving adjacent pixels on the detector different exposures to the scene. The captured image is mapped to a high dynamic range image using an efficient image reconstruction algorithm. The end result is an imaging system that can measure a very wide range of scene radiances and produce a substantially larger number of brightness levels, with a slight reduction in spatial resolution. We conclude with several examples of high dynamic range images computed using spatially varying pixel exposures. 1 High Dynamic Range Imaging Any real-world scene has a significant amount of brightness variation within it. The human eye has a remarkable dynamic range that enables it to detect subtle contrast variations and interpret scenes under a large variety of illumination conditions [Blackwell, 1946]. In contrast, a typical video camera, or a digital still camera, provides only about 8 bits (256 levels) of brightness information at each pixel. As a result, virtually any image captured by a conventional imaging system ends up being too dark in some areas and possibly saturated in others. In computational vision, it is such low quality images that we are left with the task of interpreting. Clearly, the low dynamic range of existing image detectors poses a severe limitation on what computational vision can accomplish. This paper presents a very simple modification that can be made to any conventional imaging system to dramatically increases its dynamic range. The availability of extra bits of data at each image pixel is expected to enhance the robustness of vision algorithms. This work was supported in part by an ONR/DARPA MURI grant under ONR contract No. N00014-97-1-0553 and in part by a David and Lucile Packard Fellowship. Tomoo Mitsunaga is supported by the Sony Corporation. 2 Existing Approaches First, we begin with a brief summary of existing techniques for capturing a high dynamic range image with a low dynamic range image detector. 2.1 Sequential Exposure Change The most obvious approach is to sequentially capture multiple images of the same scene using different exposures. The exposure for each image is controlled by either varying the F-number of the imaging optics or the exposure time of the image detector. Clearly, a high exposure image will be saturated in the bright scene areas but capture the dark regions well. In contrast, a low exposure image will have less saturation in bright regions but end up being too dark and noisy in the dark areas. The complementary nature of these images allows one to combine them into a single high dynamic range image. Such an approach has been employed in [Azuma and Morimura, 1996], [Saito, 1995], [Konishi et al., 1995], [Morimura, 1993], [Ikeda, 1998], [Takahashi et al., 1997], [Burt and Kolczynski, 1993], [Madden, 1993] [Tsai, 1994]. 
In [Mann and Picard, 1995], [Debevec and Malik, 1997] and [Mitsunaga and Nayar, 1999] this approach has been taken one step further by using the acquired images to compute the radiometric response function of the imaging system. The above methods are of course suited only to static scenes; the imaging system, the scene objects and their radiances must remain constant during the sequential capture of images under different exposures. 2.2 Multiple Image Detectors The stationary scene restriction faced by sequential capture is remedied by using multiple imaging systems. This approach has been taken by several investigators [Doi et al., 1986], [Saito, 1995], [Saito, 1996], [Kimura, 1998], [Ikeda, 1998]. Beam splitters are used to generate multiple copies of the optical image of the scene. Each copy is detected by an image detector whose exposure is preset by using an optical attenuator or by changing the exposure time of the detector. This approach has the advantage of producing high dynamic range images in real time. Hence, the scene objects and the imaging system are free to move during the capture process. The disadvantage of course is that this approach is expensive as it requires multiple image detectors, precision optics for the alignment of all the acquired images and additional hardware for the capture and processing of multiple images.",
"title": ""
}
] |
[
{
"docid": "a35aa35c57698d2518e3485ec7649c66",
"text": "The review paper describes the application of various image processing techniques for automatic detection of glaucoma. Glaucoma is a neurodegenerative disorder of the optic nerve, which causes partial loss of vision. Large number of people suffers from eye diseases in rural and semi urban areas all over the world. Current diagnosis of retinal disease relies upon examining retinal fundus image using image processing. The key image processing techniques to detect eye diseases include image registration, image fusion, image segmentation, feature extraction, image enhancement, morphology, pattern matching, image classification, analysis and statistical measurements. KeywordsImage Registration; Fusion; Segmentation; Statistical measures; Morphological operation; Classification Full Text: http://www.ijcsmc.com/docs/papers/November2013/V2I11201336.pdf",
"title": ""
},
{
"docid": "946e5205a93f71e0cfadf58df186ef7e",
"text": "Face recognition has made extraordinary progress owing to the advancement of deep convolutional neural networks (CNNs). The central task of face recognition, including face verification and identification, involves face feature discrimination. However, the traditional softmax loss of deep CNNs usually lacks the power of discrimination. To address this problem, recently several loss functions such as center loss, large margin softmax loss, and angular softmax loss have been proposed. All these improved losses share the same idea: maximizing inter-class variance and minimizing intra-class variance. In this paper, we propose a novel loss function, namely large margin cosine loss (LMCL), to realize this idea from a different perspective. More specifically, we reformulate the softmax loss as a cosine loss by L2 normalizing both features and weight vectors to remove radial variations, based on which a cosine margin term is introduced to further maximize the decision margin in the angular space. As a result, minimum intra-class variance and maximum inter-class variance are achieved by virtue of normalization and cosine decision margin maximization. We refer to our model trained with LMCL as CosFace. Extensive experimental evaluations are conducted on the most popular public-domain face recognition datasets such as MegaFace Challenge, Youtube Faces (YTF) and Labeled Face in the Wild (LFW). We achieve the state-of-the-art performance on these benchmarks, which confirms the effectiveness of our proposed approach.",
"title": ""
},
{
"docid": "76e75c4549cbaf89796355b299bedfdc",
"text": "Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixellevel brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, stateof-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.",
"title": ""
},
{
"docid": "aead9a7a19551a445584064a669b191a",
"text": "The purpose of this paper is to study the impact of tourism marketing mix and how it affects tourism in Jordan, and to determine which element of the marketing mix has the strongest impact on Jordanian tourism and how it will be used to better satisfy tourists. The paper will focus on foreign tourists coming to Jordan; a field survey will be used by using questionnaires to collect data. Three hundred questionnaires will be collected from actual tourists who visited Jordan, the data will be collected from selected tourism sites like (Petra, Jarash,.... etc.) and classified from one to five stars hotels in Jordan. The questionnaire will be designed in different languages (English, French and Arabic) to meet all tourists from different countries. The study established that from all the marketing mix elements, the researcher studied, product & promotion had the strongest effect on foreign tourist's satisfaction, where price and distribution were also effective significant factors. The research recommends suitable marketing strategies for all elements especially product & promotion.",
"title": ""
},
{
"docid": "32ee8dadf5d8983f40f984f64be37211",
"text": "This paper introduces a model of 'theory of mind', namely, how we represent the intentions and goals of others to optimise our mutual interactions. We draw on ideas from optimum control and game theory to provide a 'game theory of mind'. First, we consider the representations of goals in terms of value functions that are prescribed by utility or rewards. Critically, the joint value functions and ensuing behaviour are optimised recursively, under the assumption that I represent your value function, your representation of mine, your representation of my representation of yours, and so on ad infinitum. However, if we assume that the degree of recursion is bounded, then players need to estimate the opponent's degree of recursion (i.e., sophistication) to respond optimally. This induces a problem of inferring the opponent's sophistication, given behavioural exchanges. We show it is possible to deduce whether players make inferences about each other and quantify their sophistication on the basis of choices in sequential games. This rests on comparing generative models of choices with, and without, inference. Model comparison is demonstrated using simulated and real data from a 'stag-hunt'. Finally, we note that exactly the same sophisticated behaviour can be achieved by optimising the utility function itself (through prosocial utility), producing unsophisticated but apparently altruistic agents. This may be relevant ethologically in hierarchal game theory and coevolution.",
"title": ""
},
{
"docid": "350137bf3c493b23aa6d355df946440f",
"text": "Given the increasing popularity of wearable devices, this paper explores the potential to use wearables for steering and driver tracking. Such capability would enable novel classes of mobile safety applications without relying on information or sensors in the vehicle. In particular, we study how wrist-mounted inertial sensors, such as those in smart watches and fitness trackers, can track steering wheel usage and angle. In particular, tracking steering wheel usage and turning angle provide fundamental techniques to improve driving detection, enhance vehicle motion tracking by mobile devices and help identify unsafe driving. The approach relies on motion features that allow distinguishing steering from other confounding hand movements. Once steering wheel usage is detected, it further uses wrist rotation measurements to infer steering wheel turning angles. Our on-road experiments show that the technique is 99% accurate in detecting steering wheel usage and can estimate turning angles with an average error within 3.4 degrees.",
"title": ""
},
{
"docid": "01e4741bc502dfc3ec6baf227494dc5d",
"text": "In this letter, we present a novel circularly polarized (CP) origami antenna. We fold paper in the form of an origami tetrahedron to serve as the substrate of the antenna. The antenna comprises two triangular monopole elements that are perpendicular to each other. Circular polarization characteristics are achieved by exciting both elements with equal magnitudes and with a phase difference of 90°. In this letter, we explain the origami folding steps in detail. We also verify the proposed concept of the CP origami antenna by performing simulations and measurements using a fabricated prototype. The antenna exhibits a 10-dB impedance bandwidth of 70.2% (2.4–5 GHz), and a 3-dB axial-ratio bandwidth of 8% (3.415–3.7 GHz). The measured left-hand circular polarization gain of the antenna is in the range of 5.2–5.7 dBi for the 3-dB axial-ratio bandwidth.",
"title": ""
},
{
"docid": "4b10fb997b4c38745b030e5f525a99a6",
"text": "Regular machine learning and data mining techniques study the training data for future inferences under a major assumption that the future data are within the same feature space or have the same distribution as the training data. However, due to the limited availability of human labeled training data, training data that stay in the same feature space or have the same distribution as the future data cannot be guaranteed to be sufficient enough to avoid the over-fitting problem. In real-world applications, apart from data in the target domain, related data in a different domain can also be included to expand the availability of our prior knowledge about the target future data. Transfer learning addresses such cross-domain learning problems by extracting useful information from data in a related domain and transferring them for being used in target tasks. In recent years, with transfer learning being applied to visual categorization, some typical problems, e.g., view divergence in action recognition tasks and concept drifting in image classification tasks, can be efficiently solved. In this paper, we survey state-of-the-art transfer learning algorithms in visual categorization applications, such as object recognition, image classification, and human action recognition.",
"title": ""
},
{
"docid": "00103d1ffbd006664de91dd51be13c09",
"text": "Opportunistic Mobile Social Network (MSN) is a kind of Delay Tolerant Network (DTN) in which nodes are mobile with social characteristics. Users in such network carry data, move and forward it to others for information dissemination. To enable efficient data routing in opportunistic MSNs, the social metrics of users, such as mobility pattern, social centrality, community and etc. are leveraged in context of MSNs. In this paper, we investigate the data routing strategies in opportunistic MSNs in the following aspects: (1) the architecture of MSNs and its routing challenges and (2) routing strategies investigation on the basis of different social metrics. We study opportunistic MSN architecture and investigate the social metrics from encounter, social features and social properties, respectively. We show that encounter information is important exemplification of social metrics in opportunistic MSNs. We present other social metrics such as social features and social properties, including social graph properties and community structure. We then elaborate the routing strategies from different perspectives accordingly: encounter-based routing strategies, routing schemes according to social features and routing strategies based on social properties. We discuss the open issues for data routing in opportunistic MSNs, including limitations of routing metrics, collection of social information, social privacy and security, future applications of opportunistic MSNs, and etc. © 2015 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "a2317f8c3852bc79a03480297ac80bc0",
"text": "This panel discussion was a follow-up to the first edition of the ICFHR series, which was promoted as an international conference after 10 intensive and productive editions of the International Workshop on Frontiers in Handwriting Recognition initiated in 1990 in Montreal. It is essential, however, to recall that research on handwritten character recognition began some 40 years ago, and was expanded about 20 years ago to the more general notion of handwriting recognition (no longer constrained to isolated characters) that we work with today. The organizers of ICFHR 2008 wanted to take the opportunity to organize the panel discussion, chaired by M. Cheriet, to highlight progress in the field over the long course of its development. Panelists M. El Yacoubi, H. Fujisawa, D. Lopresti, and G. Lorette talked about the achievements of two decades of handwriting recognition and about its future. Following these outstanding presentations, conference attendees had the opportunity to air their questions and concerns during open and fruitful exchanges. Each section of this summary is drawn from the original presentation of a panelist. In Section 1, Dr. Hiromichi Fujisawa of Hitachi (Japan) discusses the impact of handwriting recognition technologies on the industrial sector. In Section 2, Dr. Mounim A. El Yacoubi of Institut Telecom, T&M SudParis (France) discusses the need to restructure the efforts of the handwriting recognition community. In Section 3, Dr. Guy Lorette of IRISA—Université de Rennes (France) highlights the recent achievements in handwriting recognition and others on the horizon. In Section 4, Dr. Daniel Lopresti of Lehigh University (USA) discusses the future of handwriting recognition research as a “Grand Challenge” problem. Section 5 concludes this summary by highlighting the salient points that will drive research on handwriting recognition in the future.",
"title": ""
},
{
"docid": "41240dccf91b1a3ea3ec9b12f5e451ce",
"text": "This study applied the concept of online consumer social experiences (OCSEs) to reduce online shopping post-payment dissonance (i.e., dissonance occurring between online payment and product receipt). Two types of OCSEs were developed: indirect social experiences (IDSEs) and virtual social experiences (VSEs). Two studies were conducted, in which 447 college students were enrolled. Study 1 compared the effects of OCSEs and non-OCSEs when online shopping post-payment dissonance occurred. The results indicate that providing consumers affected by online shopping post-payment dissonance with OCSEs reduces dissonance and produces higher satisfaction, higher repurchase intention, and lower complaint intention than when no OCSEs are provided. In addition, consumers’ interpersonal trust (IPT) and susceptibility to interpersonal informational influence (SIII) moderated the positive effects of OCSEs. Study 2 compared the effects of IDSEs and VSEs when online shopping post-payment dissonance occurred. The results sugomputing need for control omputer-mediated communication pprehension gest that the effects of IDSEs and VSEs on satisfaction, repurchase intention, and complaint intention are moderated by consumers’ computing need for control (CNC) and computer-mediated communication apprehension (CMCA). The consumers with high CNC and low CMCA preferred VSEs, whereas the consumers with low CNC and high CMCA preferred IDSEs. The effects of VSEs and IDSEs on consumers with high CNC and CMCA and those with low CNC and CMCA were not significantly different. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6bb318e50887e972cbfe52936c82c26f",
"text": "We model the photo cropping problem as a cascade of attention box regression and aesthetic quality classification, based on deep learning. A neural network is designed that has two branches for predicting attention bounding box and analyzing aesthetics, respectively. The predicted attention box is treated as an initial crop window where a set of cropping candidates are generated around it, without missing important information. Then, aesthetics assessment is employed to select the final crop as the one with the best aesthetic quality. With our network, cropping candidates share features within full-image convolutional feature maps, thus avoiding repeated feature computation and leading to higher computation efficiency. Via leveraging rich data for attention prediction and aesthetics assessment, the proposed method produces high-quality cropping results, even with the limited availability of training data for photo cropping. The experimental results demonstrate the competitive results and fast processing speed (5 fps with all steps).",
"title": ""
},
{
"docid": "2e9f2a2e9b74c4634087a664a85fef9f",
"text": "Parkinson’s disease (PD) is the second most common neurodegenerative disease, which is characterized by loss of dopaminergic (DA) neurons in the substantia nigra pars compacta and the formation of Lewy bodies and Lewy neurites in surviving DA neurons in most cases. Although the cause of PD is still unclear, the remarkable advances have been made in understanding the possible causative mechanisms of PD pathogenesis. Numerous studies showed that dysfunction of mitochondria may play key roles in DA neuronal loss. Both genetic and environmental factors that are associated with PD contribute to mitochondrial dysfunction and PD pathogenesis. The induction of PD by neurotoxins that inhibit mitochondrial complex I provides direct evidence linking mitochondrial dysfunction to PD. Decrease of mitochondrial complex I activity is present in PD brain and in neurotoxin- or genetic factor-induced PD cellular and animal models. Moreover, PINK1 and parkin, two autosomal recessive PD gene products, have important roles in mitophagy, a cellular process to clear damaged mitochondria. PINK1 activates parkin to ubiquitinate outer mitochondrial membrane proteins to induce a selective degradation of damaged mitochondria by autophagy. In this review, we summarize the factors associated with PD and recent advances in understanding mitochondrial dysfunction in PD.",
"title": ""
},
{
"docid": "0b17e1cbfa3452ba2ff7c00f4e137aef",
"text": "Brain-computer interfaces (BCIs) promise to provide a novel access channel for assistive technologies, including augmentative and alternative communication (AAC) systems, to people with severe speech and physical impairments (SSPI). Research on the subject has been accelerating significantly in the last decade and the research community took great strides toward making BCI-AAC a practical reality to individuals with SSPI. Nevertheless, the end goal has still not been reached and there is much work to be done to produce real-world-worthy systems that can be comfortably, conveniently, and reliably used by individuals with SSPI with help from their families and care givers who will need to maintain, setup, and debug the systems at home. This paper reviews reports in the BCI field that aim at AAC as the application domain with a consideration on both technical and clinical aspects.",
"title": ""
},
{
"docid": "6bfcf02bea2e2c2ebb387c215487bb78",
"text": "Healthcare is a sector where decisions usually have very high-risk and high-cost associated with them. One bad choice can cost a person's life. With diseases like Swine Flu on the rise, which have symptoms quite similar to common cold, it's very difficult for people to differentiate between medical conditions. We propose a novel method for recognition of diseases and prediction of their cure time based on the symptoms. We do this by assigning different coefficients to each symptom of a disease, and filtering the dataset with the severity score assigned to each symptom by the user. The diseases are identified based on a numerical value calculated in the fashion mentioned above. For predicting the cure time of a disease, we use reinforcement learning. Our algorithm takes into account the similarity between the condition of the current user and other users who have suffered from the same disease, and uses the similarity scores as weights in prediction of cure time. We also predict the current medical condition of user relative to people who have suffered from same disease.",
"title": ""
},
{
"docid": "fd32f2117ae01049314a0c1cfb565724",
"text": "Smart phones, tablets, and the rise of the Internet of Things are driving an insatiable demand for wireless capacity. This demand requires networking and Internet infrastructures to evolve to meet the needs of current and future multimedia applications. Wireless HetNets will play an important role toward the goal of using a diverse spectrum to provide high quality-of-service, especially in indoor environments where most data are consumed. An additional tier in the wireless HetNets concept is envisioned using indoor gigabit small-cells to offer additional wireless capacity where it is needed the most. The use of light as a new mobile access medium is considered promising. In this article, we describe the general characteristics of WiFi and VLC (or LiFi) and demonstrate a practical framework for both technologies to coexist. We explore the existing research activity in this area and articulate current and future research challenges based on our experience in building a proof-of-concept prototype VLC HetNet.",
"title": ""
},
{
"docid": "9d2f569d1105bdac64071541eb01c591",
"text": "1. Outline the principles of the diagnostic tests used to confirm brain death. . 2. The patient has been certified brain dead and her relatives agree with her previously stated wishes to donate her organs for transplantation. Outline the supportive measures which should be instituted to maintain this patient’s organs in an optimal state for subsequent transplantation of the heart, lungs, liver and kidneys.",
"title": ""
},
{
"docid": "61d8761f3c6a8974d0384faf9a084b53",
"text": "With many thyroid nodules being incidentally detected, it is important to identify as many malignant nodules as possible while excluding those that are highly likely to be benign from fine needle aspiration (FNA) biopsies or surgeries. This paper presents a computer-aided diagnosis (CAD) system for classifying thyroid nodules in ultrasound images. We use deep learning approach to extract features from thyroid ultrasound images. Ultrasound images are pre-processed to calibrate their scale and remove the artifacts. A pre-trained GoogLeNet model is then fine-tuned using the pre-processed image samples which leads to superior feature extraction. The extracted features of the thyroid ultrasound images are sent to a Cost-sensitive Random Forest classifier to classify the images into “malignant” and “benign” cases. The experimental results show the proposed fine-tuned GoogLeNet model achieves excellent classification performance, attaining 98.29% classification accuracy, 99.10% sensitivity and 93.90% specificity for the images in an open access database (Pedraza et al. 16), while 96.34% classification accuracy, 86% sensitivity and 99% specificity for the images in our local health region database.",
"title": ""
},
{
"docid": "176c9231f27d22658be5107a74ab2f32",
"text": "The emerging ambient persuasive technology looks very promising for many areas of personal and ubiquitous computing. Persuasive applications aim at changing human attitudes or behavior through the power of software designs. This theory-creating article suggests the concept of a behavior change support system (BCSS), whether web-based, mobile, ubiquitous, or more traditional information system to be treated as the core of research into persuasion, influence, nudge, and coercion. This article provides a foundation for studying BCSSs, in which the key constructs are the O/C matrix and the PSD model. It will (1) introduce the archetypes of behavior change via BCSSs, (2) describe the design process for building persuasive BCSSs, and (3) exemplify research into BCSSs through the domain of health interventions. Recognizing the themes put forward in this article will help leverage the full potential of computing for producing behavioral changes.",
"title": ""
},
{
"docid": "f2014c61ab20bcb3dc586b660116b8d8",
"text": "Detection of stationary foreground objects (i.e., moving objects that remain static throughout several frames) has attracted the attention of many researchers over the last decades and, consequently, many new ideas have been recently proposed, trying to achieve high-quality detections in complex scenarios with the lowest misdetections, while keeping real-time constraints. Most of these strategies are focused on detecting abandoned objects. However, there are some approaches that also allow detecting partially-static foreground objects (e.g. people remaining temporarily static) or stolen objects (i.e., objects removed from the background of the scene). This paper provides a complete survey of the most relevant approaches for detecting all kind of stationary foreground objects. The aim of this survey is not to compare the existing methods, but to provide the information needed to get an idea of the state of the art in this field: kinds of stationary foreground objects, main challenges in the field, main datasets for testing the detection of stationary foreground, main stages in the existing approaches and algorithms typically used in such stages.",
"title": ""
}
] |
scidocsrr
|
a6a1ce689aa4fb33a142b1defb92072a
|
Recurrent Memory Networks for Language Modeling
|
[
{
"docid": "6dfc558d273ec99ffa7dc638912d272c",
"text": "Recurrent neural networks (RNNs) with Long Short-Term memory cells currently hold the best known results in unconstrained handwriting recognition. We show that their performance can be greatly improved using dropout - a recently proposed regularization method for deep architectures. While previous works showed that dropout gave superior performance in the context of convolutional networks, it had never been applied to RNNs. In our approach, dropout is carefully used in the network so that it does not affect the recurrent connections, hence the power of RNNs in modeling sequences is preserved. Extensive experiments on a broad range of handwritten databases confirm the effectiveness of dropout on deep architectures even when the network mainly consists of recurrent and shared connections.",
"title": ""
},
{
"docid": "66b2f59c4f46b917ff6755e2b2fbb39c",
"text": "Overview • Learning flexible word representations is the first step towards learning semantics. •The best current approach to learning word embeddings involves training a neural language model to predict each word in a sentence from its neighbours. – Need to use a lot of data and high-dimensional embeddings to achieve competitive performance. – More scalable methods translate to better results. •We propose a simple and scalable approach to learning word embeddings based on training lightweight models with noise-contrastive estimation. – It is simpler, faster, and produces better results than the current state-of-the art method.",
"title": ""
}
] |
[
{
"docid": "30a617e3f7e492ba840dfbead690ae39",
"text": "Information systems professionals must pay attention to online customer retention. Drawing on the relationship marketing literature, we formulated and tested a model to explain B2C user repurchase intention from the perspective of relationship quality. The model was empirically tested through a survey conducted in Northern Ireland. Results showed that online relationship quality and perceived website usability positively impacted customer repurchase intention. Moreover, online relationship quality was positively influenced by perceived vendor expertise in order fulfillment, perceived vendor reputation, and perceived website usability, whereas distrust in vendor behavior negatively influenced online relationship quality. Implications of these findings are discussed. 2011 Elsevier B.V. All rights reserved. § This work was partially supported by Strategic Research Grant at City University of Hong Kong, China (No. CityU 7002521), and the National Nature Science Foundation of China (No. 70773008). * Corresponding author at: P7722, City University of Hong Kong, Hong Kong, China. Tel.: +852 27887492; fax: +852 34420370. E-mail address: [email protected] (Y. Fang).",
"title": ""
},
{
"docid": "795e9da03d2b2d6e66cf887977fb24e9",
"text": "Researchers working on the planning, scheduling, and execution of scientific workflows need access to a wide variety of scientific workflows to evaluate the performance of their implementations. This paper provides a characterization of workflows from six diverse scientific applications, including astronomy, bioinformatics, earthquake science, and gravitational-wave physics. The characterization is based on novel workflow profiling tools that provide detailed information about the various computational tasks that are present in the workflow. This information includes I/O, memory and computational characteristics. Although the workflows are diverse, there is evidence that each workflow has a job type that consumes the most amount of runtime. The study also uncovered inefficiency in a workflow component implementation, where the component was re-reading the same data multiple times. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f53be608e9a27d5de0a87c03b953ca28",
"text": "In this work, we present and analyze an image denoising method, the NL-means algorithm, based on a non local averaging of all pixels in the image. We also introduce the concept of method noise, that is, the difference between the original (always slightly noisy) digital image and its denoised version. Finally, we present some experiences comparing the NL-means results with some classical denoising methods.",
"title": ""
},
{
"docid": "3f3a017d93588f19eb59a93ccd587902",
"text": "n this work we propose a novel Hough voting approach for the detection of free-form shapes in a 3D space, to be used for object recognition tasks in 3D scenes with a significant degree of occlusion and clutter. The proposed method relies on matching 3D features to accumulate evidence of the presence of the objects being sought in a 3D Hough space. We validate our proposal by presenting a quantitative experimental comparison with state-of-the-art methods as well as by showing how our method enables 3D object recognition from real-time stereo data.",
"title": ""
},
{
"docid": "56c0f28b05274d723f5170d3faf9ceb6",
"text": "We propose a flexible framework for profit-seeking market making by combining cost function based automated market makers with bandit learning algorithms. The key idea is to consider each parametrisation of the cost function as a bandit arm, and the minimum expected profits from trades executed during a period as the rewards. This allows for the creation of market makers that can adjust liquidity and bid-asks spreads dynamically to maximise profits.",
"title": ""
},
{
"docid": "9c98e4d100c6bc77d18f26234a5a4d59",
"text": "The analysis of human motion as a clinical tool can bring many benefits such as the early detection of disease and the monitoring of recovery, so in turn helping people to lead independent lives. However, it is currently under used. Developments in depth cameras, such as Kinect, have opened up the use of motion analysis in settings such as GP surgeries, care homes and private homes. To provide an insight into the use of Kinect in the healthcare domain, we present a review of the current state of the art. We then propose a method that can represent human motions from time-series data of arbitrary length, as a single vector. Finally, we demonstrate the utility of this method by extracting a set of clinically significant features and using them to detect the age related changes in the motions of a set of 54 individuals, with a high degree of certainty (F1-score between 0.9–1.0). Indicating its potential application in the detection of a range of age-related motion impairments.",
"title": ""
},
{
"docid": "4ef6a80f243305b4c26d12684118cc2d",
"text": "A wide variety of neural-network architectures have been proposed for the task of Chinese word segmentation. Surprisingly, we find that a bidirectional LSTM model, when combined with standard deep learning techniques and best practices, can achieve better accuracy on many of the popular datasets as compared to models based on more complex neuralnetwork architectures. Furthermore, our error analysis shows that out-of-vocabulary words remain challenging for neural-network models, and many of the remaining errors are unlikely to be fixed through architecture changes. Instead, more effort should be made on exploring resources for further improvement.",
"title": ""
},
{
"docid": "c7d23af5ad79d9863e83617cf8bbd1eb",
"text": "Insulin resistance has long been associated with obesity. More than 40 years ago, Randle and colleagues postulated that lipids impaired insulin-stimulated glucose use by muscles through inhibition of glycolysis at key points. However, work over the past two decades has shown that lipid-induced insulin resistance in skeletal muscle stems from defects in insulin-stimulated glucose transport activity. The steatotic liver is also resistant to insulin in terms of inhibition of hepatic glucose production and stimulation of glycogen synthesis. In muscle and liver, the intracellular accumulation of lipids-namely, diacylglycerol-triggers activation of novel protein kinases C with subsequent impairments in insulin signalling. This unifying hypothesis accounts for the mechanism of insulin resistance in obesity, type 2 diabetes, lipodystrophy, and ageing; and the insulin-sensitising effects of thiazolidinediones.",
"title": ""
},
{
"docid": "75ed4cabbb53d4c75fda3a291ea0ab67",
"text": "Optimization of energy consumption in future intelligent energy networks (or Smart Grids) will be based on grid-integrated near-real-time communications between various grid elements in generation, transmission, distribution and loads. This paper discusses some of the challenges and opportunities of communications research in the areas of smart grid and smart metering. In particular, we focus on some of the key communications challenges for realizing interoperable and future-proof smart grid/metering networks, smart grid security and privacy, and how some of the existing networking technologies can be applied to energy management. Finally, we also discuss the coordinated standardization efforts in Europe to harmonize communications standards and protocols.",
"title": ""
},
{
"docid": "e8d6dcd531a9a7552aeabb9ffa862f3a",
"text": "The use of drones in infrastructure monitoring aims at decreasing the human effort and in achieving consistency. Accurate aerial image analysis is the key block to achieve the same. Reliable detection and integrity checking of power line conductors in a diverse background are the most challenging in drone based automatic infrastructure monitoring. Most techniques in literature use first principle approach that tries to represent the image as features of interest. This paper proposes a machine learning approach for power line detection. A new deep learning architecture is proposed with very good results and is compared with GoogleNet pre-trained model. The proposed architecture uses Histogram of Gradient features as the input instead of the image itself to ensure capture of accurate line features. The system is tested on aerial image collected using drone. A healthy F-score of 84.6% is obtained using the proposed architecture as against 81% using GoogleNet model.",
"title": ""
},
{
"docid": "8bae7b67b58d8875838add5cc83b1bcb",
"text": "In Valve's Source graphics engine, bump mapping is combined with precomputed radiosity lighting to provide realistic surface illumination. When bump map data is derived from geometric descriptions of surface detail (such as height maps), only the lighting effects caused by the surface orientation are preserved. The significant lighting cues due to lighting occlusion by surface details are lost. While it is common to use another texture channel to hold an \"ambient occlusion\" field, this only provides a darkening effect which is independent of the direction from which the surface is being lit and requires an auxiliary channel of data.\n In this chapter, we present a modification to the Radiosity Normal Mapping system that we have described in this course in the past. This modification provides a directional occlusion function to the bump maps, which requires no additional texture memory and is faster than our previous non-shadowing solution.",
"title": ""
},
{
"docid": "23832f031f7c700f741843e54ff81b4e",
"text": "Data Mining in medicine is an emerging field of great importance to provide a prognosis and deeper understanding of disease classification, specifically in Mental Health areas. The main objective of this paper is to present a review of the existing research works in the literature, referring to the techniques and algorithms of Data Mining in Mental Health, specifically in the most prevalent diseases such as: Dementia, Alzheimer, Schizophrenia and Depression. Academic databases that were used to perform the searches are Google Scholar, IEEE Xplore, PubMed, Science Direct, Scopus and Web of Science, taking into account as date of publication the last 10 years, from 2008 to the present. Several search criteria were established such as ‘techniques’ AND ‘Data Mining’ AND ‘Mental Health’, ‘algorithms’ AND ‘Data Mining’ AND ‘dementia’ AND ‘schizophrenia’ AND ‘depression’, etc. selecting the papers of greatest interest. A total of 211 articles were found related to techniques and algorithms of Data Mining applied to the main Mental Health diseases. 72 articles have been identified as relevant works of which 32% are Alzheimer’s, 22% dementia, 24% depression, 14% schizophrenia and 8% bipolar disorders. Many of the papers show the prediction of risk factors in these diseases. From the review of the research articles analyzed, it can be said that use of Data Mining techniques applied to diseases such as dementia, schizophrenia, depression, etc. can be of great help to the clinical decision, diagnosis prediction and improve the patient’s quality of life.",
"title": ""
},
{
"docid": "0bef30a278f6acff7ee4a6d9f16dce66",
"text": "Graph classification is an important data mining task, and various graph kernel methods have been proposed recently for this task. These methods have proven to be effective, but they tend to have high computational overhead. In this paper, we propose an alternative approach to graph classification that is based on feature-vectors constructed from different global topological attributes, as well as global label features. The main idea here is that the graphs from the same class should have similar topological and label attributes. Our method is simple and easy to implement, and via a detailed comparison on real benchmark datasets, we show that our topological and label feature-based approach delivers better or competitive classification accuracy, and is also substantially faster than other graph kernels. It is the most effective method for large unlabeled graphs.",
"title": ""
},
{
"docid": "30f12cbec518ef3b58f8d19d94780169",
"text": "AMNESIA is a tool that detects and prevents SQL injection attacks by combining static analysis and runtime monitoring. Empirical evaluation has shown that AMNESIA is both effective and efficient against SQL injection.",
"title": ""
},
{
"docid": "f0ea768c020a99ac3ed144b76893dbd9",
"text": "This paper focuses on tracking dynamic targets using a low cost, commercially available drone. The approach presented utilizes a computationally simple potential field controller expanded to operate not only on relative positions, but also relative velocities. A brief background on potential field methods is given, and the design and implementation of the proposed controller is presented. Experimental results using an external motion capture system for localization demonstrate the ability of the drone to track a dynamic target in real time as well as avoid obstacles in its way.",
"title": ""
},
{
"docid": "46c8277006b82854386af4833d545dd5",
"text": "A valuable step towards news veracity assessment is to understand stance from different information sources, and the process is known as the stance detection. Specifically, the stance detection is to detect four kinds of stances (“agree”, “disagree”, “discuss” and “unrelated”) of the news towards a claim. Existing methods tried to tackle the stance detection problem by classification-based algorithms. However, classification-based algorithms make a strong assumption that there is clear distinction between any two stances, which may not be held in the context of stance detection. Accordingly, we frame the detection problem as a ranking problem and propose a rankingbased method to improve detection performance. Compared with the classification-based methods, the ranking-based method compare the true stance and false stances and maximize the difference between them. Experimental results demonstrate the effectiveness of our proposed method.",
"title": ""
},
{
"docid": "d2ed4a8558c9ec9f794abd3cc22678e3",
"text": "Intelligent selection of training data has proven a successful technique to simultaneously increase training efficiency and translation performance for phrase-based machine translation (PBMT). With the recent increase in popularity of neural machine translation (NMT), we explore in this paper to what extent and how NMT can also benefit from data selection. While state-of-the-art data selection (Axelrod et al., 2011) consistently performs well for PBMT, we show that gains are substantially lower for NMT. Next, we introduce dynamic data selection for NMT, a method in which we vary the selected subset of training data between different training epochs. Our experiments show that the best results are achieved when applying a technique we call gradual fine-tuning, with improvements up to +2.6 BLEU over the original data selection approach and up to +3.1 BLEU over a general baseline.",
"title": ""
},
{
"docid": "2aa324628b082f1fd6d1e1e0221d1bad",
"text": "Recent behavioral investigations have revealed that autistics perform more proficiently on Raven's Standard Progressive Matrices (RSPM) than would be predicted by their Wechsler intelligence scores. A widely-used test of fluid reasoning and intelligence, the RSPM assays abilities to flexibly infer rules, manage goal hierarchies, and perform high-level abstractions. The neural substrates for these abilities are known to encompass a large frontoparietal network, with different processing models placing variable emphasis on the specific roles of the prefrontal or posterior regions. We used functional magnetic resonance imaging to explore the neural bases of autistics' RSPM problem solving. Fifteen autistic and eighteen non-autistic participants, matched on age, sex, manual preference and Wechsler IQ, completed 60 self-paced randomly-ordered RSPM items along with a visually similar 60-item pattern matching comparison task. Accuracy and response times did not differ between groups in the pattern matching task. In the RSPM task, autistics performed with similar accuracy, but with shorter response times, compared to their non-autistic controls. In both the entire sample and a subsample of participants additionally matched on RSPM performance to control for potential response time confounds, neural activity was similar in both groups for the pattern matching task. However, for the RSPM task, autistics displayed relatively increased task-related activity in extrastriate areas (BA18), and decreased activity in the lateral prefrontal cortex (BA9) and the medial posterior parietal cortex (BA7). Visual processing mechanisms may therefore play a more prominent role in reasoning in autistics.",
"title": ""
},
{
"docid": "fe38b44457f89bcb63aabe65babccd03",
"text": "Single sample face recognition have become an important problem because of the limitations on the availability of gallery images. In many real-world applications such as passport or driver license identification, there is only a single facial image per subject available. The variations between the single gallery face image and the probe face images, captured in unconstrained environments, make the single sample face recognition even more difficult. In this paper, we present a fully automatic face recognition system robust to most common face variations in unconstrained environments. Our proposed system is capable of recognizing faces from non-frontal views and under different illumination conditions using only a single gallery sample for each subject. It normalizes the face images for both in-plane and out-of-plane pose variations using an enhanced technique based on active appearance models (AAMs). We improve the performance of AAM fitting, not only by training it with in-the-wild images and using a powerful optimization technique, but also by initializing the AAM with estimates of the locations of the facial landmarks obtained by a method based on flexible mixture of parts. The proposed initialization technique results in significant improvement of AAM fitting to non-frontal poses and makes the normalization process robust, fast and reliable. Owing to the proper alignment of the face images, made possible by this approach, we can use local feature descriptors, such as Histograms of Oriented Gradients (HOG), for matching. The use of HOG features makes the system robust against illumination variations. In order to improve the discriminating information content of the feature vectors, we also extract Gabor features from the normalized face images and fuse them with HOG features using Canonical Correlation Analysis (CCA). Experimental results performed on various databases outperform the state-of-the-art methods and show the effectiveness of our proposed method in normalization and recognition of face images obtained in unconstrained environments.",
"title": ""
},
{
"docid": "3a69d6ef79482d26aee487a964ff797f",
"text": "The FPGA compilation process (synthesis, map, placement, routing) is a time-consuming process that limits designer productivity. Compilation time can be reduced by using pre-compiled circuit blocks (hard macros). Hard macros consist of previously synthesized, mapped, placed and routed circuitry that can be relatively placed with short tool runtimes and that make it possible to reuse previous computational effort. Two experiments were performed to demonstrate feasibility that hard macros can reduce compilation time. These experiments demonstrated that an augmented Xilinx flow designed specifically to support hard macros can reduce overall compilation time by 3x. Though the process of incorporating hard macros in designs is currently manual and error-prone, it can be automated to create compilation flows with much lower compilation time.",
"title": ""
}
] |
scidocsrr
|
17c3a59dccb132b10e8ed771f93c7661
|
Concept-based Short Text Classification and Ranking
|
[
{
"docid": "57457909ea5fbee78eccc36c02464942",
"text": "Knowledge is indispensable to understanding. The ongoing information explosion highlights the need to enable machines to better understand electronic text in human language. Much work has been devoted to creating universal ontologies or taxonomies for this purpose. However, none of the existing ontologies has the needed depth and breadth for universal understanding. In this paper, we present a universal, probabilistic taxonomy that is more comprehensive than any existing ones. It contains 2.7 million concepts harnessed automatically from a corpus of 1.68 billion web pages. Unlike traditional taxonomies that treat knowledge as black and white, it uses probabilities to model inconsistent, ambiguous and uncertain information it contains. We present details of how the taxonomy is constructed, its probabilistic modeling, and its potential applications in text understanding.",
"title": ""
},
{
"docid": "cbac071c932c73813630fd7384e4f98c",
"text": "In this paper we propose a method that, given a query submitte d to a search engine, suggests a list of related queries. The rela t d queries are based in previously issued queries, and can be issued by the user to the search engine to tune or redirect the search process. The method proposed i s based on a query clustering process in which groups of semantically similar queries are identified. The clustering process uses the content of historical prefe renc s of users registered in the query log of the search engine. The method not onl y discovers the related queries, but also ranks them according to a relevanc criterion. Finally, we show with experiments over the query log of a search engine the ffectiveness of the method.",
"title": ""
},
{
"docid": "1ec9b98f0f7509088e7af987af2f51a2",
"text": "In this paper, we describe an automated learning approach to text categorization based on perception learning and a new feature selection metric, called correlation coefficient. Our approach has been teated on the standard Reuters text categorization collection. Empirical results indicate that our approach outperforms the best published results on this % uters collection. In particular, our new feature selection method yields comiderable improvement. We also investigate the usability of our automated hxu-n~ approach by actually developing a system that categorizes texts into a treeof categories. We compare tbe accuracy of our learning approach to a rrddmsed, expert system ap preach that uses a text categorization shell built by Cams gie Group. Although our automated learning approach still gives a lower accuracy, by appropriately inmrporating a set of manually chosen worda to use as f~ures, the combined, semi-automated approach yields accuracy close to the * baaed approach.",
"title": ""
},
{
"docid": "b1a08b10ea79a250a62030a2987b67a6",
"text": "Most text mining tasks, including clustering and topic detection, are based on statistical methods that treat text as bags of words. Semantics in the text is largely ignored in the mining process, and mining results often have low interpretability. One particular challenge faced by such approaches lies in short text understanding, as short texts lack enough content from which statistical conclusions can be drawn easily. In this paper, we improve text understanding by using a probabilistic knowledgebase that is as rich as our mental world in terms of the concepts (of worldly facts) it contains. We then develop a Bayesian inference mechanism to conceptualize words and short text. We conducted comprehensive experiments on conceptualizing textual terms, and clustering short pieces of text such as Twitter messages. Compared to purely statistical methods such as latent semantic topic modeling or methods that use existing knowledgebases (e.g., WordNet, Freebase and Wikipedia), our approach brings significant improvements in short text understanding as reflected by the clustering accuracy.",
"title": ""
},
{
"docid": "ef08ef786fd759b33a7d323c69be19db",
"text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.",
"title": ""
}
] |
[
{
"docid": "c848f8194856335a19bc195a79942d48",
"text": "Managerial myopia in identifying competitive threats is a well-recognized phenomenon (Levitt, 1960; Zajac and Bazerman, 1991). Identifying such threats is particularly problematic, since they may arise from substitutability on the supply side as well as on the demand side. Managers who focus only on the product market arena in scanning their competitive environment may fail to notice threats that are developing due to the resources and latent capabilities of indirect or potential competitors. This paper brings together insights from the fields of strategic management and marketing to develop a simple but powerful set of tools for helping managers overcome this common problem. We present a two-stage framework for competitor identification and analysis that brings into consideration a broad range of competitors, including potential competitors, substitutors, and indirect competitors. Specifically we draw from Peteraf and Bergen’s (2001) framework for competitor identification to develop a hierarchy of competitor awareness. That is used, in combination with resource equivalence, to generate hypotheses on competitive analysis. This framework not only extends the ken of managers, but also facilitates an assessment of the strategic opportunities and threats that various competitors represent and allows managers to assess their significance in relative terms. Copyright # 2002 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "299deaffdd1a494fc754b9e940ad7f81",
"text": "In this work, we study an important problem: learning programs from input-output examples. We propose a novel method to learn a neural program operating a domain-specific non-differentiable machine, and demonstrate that this method can be applied to learn programs that are significantly more complex than the ones synthesized before: programming language parsers from input-output pairs without knowing the underlying grammar. The main challenge is to train the neural program without supervision on execution traces. To tackle it, we propose: (1) LL machines and neural programs operating them to effectively regularize the space of the learned programs; and (2) a two-phase reinforcement learning-based search technique to train the model. Our evaluation demonstrates that our approach can successfully learn to parse programs in both an imperative language and a functional language, and achieve 100% test accuracy, while existing approaches’ accuracies are almost 0%. This is the first successful demonstration of applying reinforcement learning to train a neural program operating a non-differentiable machine that can fully generalize to test sets on a non-trivial task.",
"title": ""
},
{
"docid": "2d0b0511f8f2ce41b7d2d60d57bc7236",
"text": "There is broad consensus that good outcome measures are needed to distinguish interventions that are effective from those that are not. This task requires standardized, patient-centered measures that can be administered at a low cost. We developed a questionnaire to assess short- and long-term patient-relevant outcomes following knee injury, based on the WOMAC Osteoarthritis Index, a literature review, an expert panel, and a pilot study. The Knee injury and Osteoarthritis Outcome Score (KOOS) is self-administered and assesses five outcomes: pain, symptoms, activities of daily living, sport and recreation function, and knee-related quality of life. In this clinical study, the KOOS proved reliable, responsive to surgery and physical therapy, and valid for patients undergoing anterior cruciate ligament reconstruction. The KOOS meets basic criteria of outcome measures and can be used to evaluate the course of knee injury and treatment outcome.",
"title": ""
},
{
"docid": "28ccab4b6b7c9c70bc07e4b3219d99d4",
"text": "The Wireless Networking After Next (WNaN) radio is a handheld-sized radio that delivers unicast, multicast, and disruption-tolerant traffic in networks of hundreds of radios. This paper addresses scalability of the network from the routing control traffic point of view. Starting from a basic version of an existing mobile ad-hoc network (MANET) proactive link-state routing protocol, we describe the enhancements that were necessary to provide good performance in these conditions. We focus on techniques to reduce control traffic while maintaining route integrity. We present simulation results from 250-node mobile networks demonstrating the effectiveness of the routing mechanisms. Any MANET with design parameters and constraints similar to the WNaN radio will benefit from these improvements.",
"title": ""
},
{
"docid": "9775092feda3a71c1563475bae464541",
"text": "Open Shortest Path First (OSPF) is the most commonly used intra-domain internet routing protocol. Traffic flow is routed along shortest paths, sptitting flow at nodes where several outgoing tinks are on shortest paths to the destination. The weights of the tinks, and thereby the shortest path routes, can be changed by the network operator. The weights could be set proportional to their physical distances, but often the main goal is to avoid congestion, i.e. overloading of links, and the standard heuristic rec. ommended by Cisco is to make the weight of a link inversely proportional to its capacity. Our starting point was a proposed AT&T WorldNet backbone with demands projected from previous measurements. The desire was to optimize the weight setting based on the projected demands. We showed that optimiz@ the weight settings for a given set of demands is NP-hard, so we resorted to a local search heuristic. Surprisingly it turned out that for the proposed AT&T WorldNet backbone, we found weight settiis that performed within a few percent from that of the optimal general routing where the flow for each demand is optimalty distributed over all paths between source and destination. This contrasts the common belief that OSPF routing leads to congestion and it shows that for the network and demand matrix studied we cannot get a substantially better load balancing by switching to the proposed more flexible Multi-protocol Label Switching (MPLS) technologies. Our techniques were atso tested on synthetic internetworks, based on a model of Zegura et al. (INFOCOM’96), for which we dld not always get quite as close to the optimal general routing. However, we compared witIs standard heuristics, such as weights inversely proportional to the capac.. ity or proportioml to the physical distances, and found that, for the same network and capacities, we could support a 50 Yo-1 10% increase in the demands. Our assumed demand matrix can also be seen as modeling service level agreements (SLAS) with customers, with demands representing guarantees of throughput for virtnal leased lines. Keywords— OSPF, MPLS, traffic engineering, local search, hashing ta. bles, dynamic shortest paths, mntti-cosnmodity network flows.",
"title": ""
},
{
"docid": "6eb7bb6f623475f7ca92025fd00dbc27",
"text": "Support vector machines (SVMs) have been recognized as one o f th most successful classification methods for many applications including text classific ation. Even though the learning ability and computational complexity of training in support vector machines may be independent of the dimension of the feature space, reducing computational com plexity is an essential issue to efficiently handle a large number of terms in practical applicat ions of text classification. In this paper, we adopt novel dimension reduction methods to reduce the dim nsion of the document vectors dramatically. We also introduce decision functions for the centroid-based classification algorithm and support vector classifiers to handle the classification p r blem where a document may belong to multiple classes. Our substantial experimental results sh ow t at with several dimension reduction methods that are designed particularly for clustered data, higher efficiency for both training and testing can be achieved without sacrificing prediction accu ra y of text classification even when the dimension of the input space is significantly reduced.",
"title": ""
},
{
"docid": "69d296d1302d9e0acd7fb576f551118d",
"text": "Event detection is a research area that attracted attention during the last years due to the widespread availability of social media data. The problem of event detection has been examined in multiple social media sources like Twitter, Flickr, YouTube and Facebook. The task comprises many challenges including the processing of large volumes of data and high levels of noise. In this article, we present a wide range of event detection algorithms, architectures and evaluation methodologies. In addition, we extensively discuss on available datasets, potential applications and open research issues. The main objective is to provide a compact representation of the recent developments in the field and aid the reader in understanding the main challenges tackled so far as well as identifying interesting future research directions.",
"title": ""
},
{
"docid": "d253029f47fe3afb6465a71e966fdbd5",
"text": "With the development of the social economy, more and more appliances have been presented in a house. It comes out a problem that how to manage and control these increasing various appliances efficiently and conveniently so as to achieve more comfortable, security and healthy space at home. In this paper, a smart control system base on the technologies of internet of things has been proposed to solve the above problem. The smart home control system uses a smart central controller to set up a radio frequency 433 MHz wireless sensor and actuator network (WSAN). A series of control modules, such as switch modules, radio frequency control modules, have been developed in the WSAN to control directly all kinds of home appliances. Application servers, client computers, tablets or smart phones can communicate with the smart central controller through a wireless router via a Wi-Fi interface. Since it has WSAN as the lower control layer, a appliance can be added into or withdrawn from the control system very easily. The smart control system embraces the functions of appliance monitor, control and management, home security, energy statistics and analysis.",
"title": ""
},
{
"docid": "58a9ef3dea7788c66942d7cb11dcd8fd",
"text": "Frontalis suspension is a commonly used surgery that is indicated in patients with blepharoptosis and poor levator muscle function. The surgery is based on connecting the tarsal plate to the eyebrow with various sling materials. Although fascia lata is most commonly used due to its long-lasting effect and low rate of complications, it has several limitations such as difficulty of harvesting, insufficient amounts in small children, and postoperative donor-site complications. Other sling materials have overcome these limitations, but on the other hand, have been reported to be associated with other complications. In this review we focus on the different techniques and materials which are used in frontalis suspension surgeries, as well as the advantage and disadvantage of these techniques.",
"title": ""
},
{
"docid": "cce477dd5efd3ecbabc57dfb237b72c9",
"text": "In this paper we present BabelDomains, a unified resource which provides lexical items with information about domains of knowledge. We propose an automatic method that uses knowledge from various lexical resources, exploiting both distributional and graph-based clues, to accurately propagate domain information. We evaluate our methodology intrinsically on two lexical resources (WordNet and BabelNet), achieving a precision over 80% in both cases. Finally, we show the potential of BabelDomains in a supervised learning setting, clustering training data by domain for hypernym discovery.",
"title": ""
},
{
"docid": "427b3cae516025381086021bc66f834e",
"text": "PhishGuru is an embedded training system that teaches users to avoid falling for phishing attacks by delivering a training message when the user clicks on the URL in a simulated phishing email. In previous lab and real-world experiments, we validated the effectiveness of this approach. Here, we extend our previous work with a 515-participant, real-world study in which we focus on long-term retention and the effect of two training messages. We also investigate demographic factors that influence training and general phishing susceptibility. Results of this study show that (1) users trained with PhishGuru retain knowledge even after 28 days; (2) adding a second training message to reinforce the original training decreases the likelihood of people giving information to phishing websites; and (3) training does not decrease users' willingness to click on links in legitimate messages. We found no significant difference between males and females in the tendency to fall for phishing emails both before and after the training. We found that participants in the 18--25 age group were consistently more vulnerable to phishing attacks on all days of the study than older participants. Finally, our exit survey results indicate that most participants enjoyed receiving training during their normal use of email.",
"title": ""
},
{
"docid": "5bf172cfc7d7de0c82707889cf722ab2",
"text": "The concept of a decentralized ledger usually implies that each node of a blockchain network stores the entire blockchain. However, in the case of popular blockchains, which each weigh several hundreds of GB, the large amount of data to be stored can incite new or low-capacity nodes to run lightweight clients. Such nodes do not participate to the global storage effort and can result in a centralization of the blockchain by very few nodes, which is contrary to the basic concepts of a blockchain. To avoid this problem, we propose new low storage nodes that store a reduced amount of data generated from the blockchain by using erasure codes. The properties of this technique ensure that any block of the chain can be easily rebuilt from a small number of such nodes. This system should encourage low storage nodes to contribute to the storage of the blockchain and to maintain decentralization despite of a globally increasing size of the blockchain. This system paves the way to new types of blockchains which would only be managed by low capacity nodes.",
"title": ""
},
{
"docid": "4f5f128195592fe881269f54fd3424e7",
"text": "In this research, a new method is proposed for the optimization of warship spare parts stock with genetic algorithm. Warships should fulfill her duties in all circumstances. Considering the warships have more than a hundred thousand unique parts, it is a very hard problem to decide which spare parts should be stocked at warehouse aiming to use in case of failure. In this study, genetic algorithm that is a heuristic optimization method is used to solve this problem. The demand quantity, the criticality and the cost of parts is used for optimization. A genetic algorithm with very long chromosome is used, i.e. over 1000 genes in one chromosome. The outputs of the method is analyzed and compared with the Price Sensitive 0.5 FLSIP+ model, which is widely used over navies, and came to a conclusion that the proposed method is better.",
"title": ""
},
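The genetic algorithm described in the preceding abstract can be sketched roughly as follows: a binary chromosome marks which parts to stock, and a fitness function trades off demand and criticality against cost under a budget. The part data, penalty weight and budget below are hypothetical illustrations, not values from the study.

```python
import random

random.seed(0)

# Hypothetical part data: (expected demand, criticality score, unit cost)
PARTS = [(random.randint(0, 20), random.random(), random.uniform(10, 5000))
         for _ in range(1000)]
BUDGET = 200_000  # assumed stock budget

def fitness(chromosome):
    """Reward stocking high-demand, critical parts; penalize exceeding the budget."""
    value, cost = 0.0, 0.0
    for gene, (demand, crit, price) in zip(chromosome, PARTS):
        if gene:
            value += demand * crit
            cost += price
    return value - max(0.0, cost - BUDGET) * 0.01  # soft budget penalty

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(c, rate=0.002):
    return [1 - g if random.random() < rate else g for g in c]

def run_ga(pop_size=50, generations=200):
    pop = [[random.randint(0, 1) for _ in PARTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]  # keep the best 20%
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = run_ga()
print("parts stocked:", sum(best), "fitness:", round(fitness(best), 2))
```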
{
"docid": "00f106ff157e515ed8fde53fdaf1491e",
"text": "In this paper we present a technique to train neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules and combine pre-trained modules with untrained modules, to learn the shift in distributions between data sets. The central impact of using a modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able surpass results using standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.",
"title": ""
},
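A minimal PyTorch sketch of the modular idea in the preceding abstract: a pre-trained module is frozen and a new untrained module is added alongside it, so new representations are appended rather than overwritten by fine-tuning. The layer sizes and the concatenation used to combine the two branches are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ModularNet(nn.Module):
    """Combine a frozen pretrained module with a new trainable module."""
    def __init__(self, pretrained: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.pretrained = pretrained
        for p in self.pretrained.parameters():  # keep old representations intact
            p.requires_grad = False
        self.new_module = nn.Sequential(        # untrained module learns the shift
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, x):
        old_feat = self.pretrained(x)
        new_feat = self.new_module(old_feat)
        # Concatenate old and new representations instead of replacing them.
        return self.head(torch.cat([old_feat, new_feat], dim=-1))

# Hypothetical usage with a toy "pretrained" feature extractor.
pretrained = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
model = ModularNet(pretrained, feat_dim=16, num_classes=3)
logits = model(torch.randn(8, 32))
print(logits.shape)  # torch.Size([8, 3])
```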
{
"docid": "b519ac8572520bfcc38b8974119d9eec",
"text": "Nastaliq is a calligraphic, beautiful and more aesthetic style of writing Urdu, the national language of Pakistan, also used to read and write in India and other countries of the region.\n OCRs developed for many world languages are already under efficient use but none exist for Nastaliq -- a calligraphic adaptation of the Arabic scrip which is inherently cursive in nature.\n In Nastaliq, word and character overlapping makes optical recognition more complex.\n This paper presents the ongoing research on Nastaliq Optical Character Recognition (NOCR). In this research, we have proposed a novel segmentation-free technique for the design and implementation of a Nastaliq OCR based on cross-correlation.",
"title": ""
},
{
"docid": "20ecae219ecf21429fb7c2697339fe50",
"text": "Massively multiplayer game holds a huge market in the digital entertainment industry. Companies invest heavily in the game and graphics development since a successful online game can attract million of users, and this translates to a huge investment payoff. However, multiplayer online game is also subjected to various forms of hacks and cheats. Hackers can alter the graphic rendering to reveal information otherwise be hidden in a normal game, or cheaters can use software robot to play the game automatically and gain an unfair advantage. Currently, some popular online games release software patches or incorporate anti-cheating software to detect known cheats. This not only creates deployment difficulty but new cheats will still be able to breach the normal game logic until software patches are available. Moreover, the anti-cheating software themselves are also vulnerable to hacks. In this paper, we propose a scalable and efficient method to detect whether a player is cheating or not. The methodology is based on the dynamic Bayesian network approach. The detection framework relies solely on the game states and runs in the game server only. Therefore it is invulnerable to hacks and it is a much more deployable solution. To demonstrate the effectiveness of the propose method, we implement a prototype multiplayer game system and to detect whether a player is using the “aiming robot” for cheating or not. Experiments show that not only we can effectively detect cheaters, but the false positive rate is extremely low. We believe the proposed methodology and the prototype system provide a first step toward a systematic study of cheating detection and security research in the area of online multiplayer games.",
"title": ""
},
{
"docid": "5b15a833cb6b4d9dd56dea59edb02cf8",
"text": "BACKGROUND\nQuantification of the biomechanical properties of each individual medial patellar ligament will facilitate an understanding of injury patterns and enhance anatomic reconstruction techniques by improving the selection of grafts possessing appropriate biomechanical properties for each ligament.\n\n\nPURPOSE\nTo determine the ultimate failure load, stiffness, and mechanism of failure of the medial patellofemoral ligament (MPFL), medial patellotibial ligament (MPTL), and medial patellomeniscal ligament (MPML) to assist with selection of graft tissue for anatomic reconstructions.\n\n\nSTUDY DESIGN\nDescriptive laboratory study.\n\n\nMETHODS\nTwenty-two nonpaired, fresh-frozen cadaveric knees were dissected free of all soft tissue structures except for the MPFL, MPTL, and MPML. Two specimens were ultimately excluded because their medial structure fibers were lacerated during dissection. The patella was obliquely cut to test the MPFL and the MPTL-MPML complex separately. To ensure that the common patellar insertion of the MPTL and MPML was not compromised during testing, only one each of the MPML and MPTL were tested per specimen (n = 10 each). Specimens were secured in a dynamic tensile testing machine, and the ultimate load, stiffness, and mechanism of failure of each ligament (MPFL = 20, MPML = 10, and MPTL = 10) were recorded.\n\n\nRESULTS\nThe mean ± SD ultimate load of the MPFL (178 ± 46 N) was not significantly greater than that of the MPTL (147 ± 80 N; P = .706) but was significantly greater than that of the MPML (105 ± 62 N; P = .001). The mean ultimate load of the MPTL was not significantly different from that of the MPML ( P = .210). Of the 20 MPFLs tested, 16 failed by midsubstance rupture and 4 by bony avulsion on the femur. Of the 10 MPTLs tested, 9 failed by midsubstance rupture and 1 by bony avulsion on the patella. Finally, of the 10 MPMLs tested, all 10 failed by midsubstance rupture. No significant difference was found in mean stiffness between the MPFL (23 ± 6 N/mm2) and the MPTL (31 ± 21 N/mm2; P = .169), but a significant difference was found between the MPFL and the MPML (14 ± 8 N/mm2; P = .003) and between the MPTL and MPML ( P = .028).\n\n\nCONCLUSION\nThe MPFL and MPTL had comparable ultimate loads and stiffness, while the MPML had lower failure loads and stiffness. Midsubstance failure was the most common type of failure; therefore, reconstruction grafts should meet or exceed the values reported herein.\n\n\nCLINICAL RELEVANCE\nFor an anatomic medial-sided knee reconstruction, the individual biomechanical contributions of the medial patellar ligamentous structures (MPFL, MPTL, and MPML) need to be characterized to facilitate an optimal reconstruction design.",
"title": ""
},
{
"docid": "ca599d7b637d25835d881c6803a9e064",
"text": "Accumulating research shows that prenatal exposure to maternal stress increases the risk for behavioral and mental health problems later in life. This review systematically analyzes the available human studies to identify harmful stressors, vulnerable periods during pregnancy, specificities in the outcome and biological correlates of the relation between maternal stress and offspring outcome. Effects of maternal stress on offspring neurodevelopment, cognitive development, negative affectivity, difficult temperament and psychiatric disorders are shown in numerous epidemiological and case-control studies. Offspring of both sexes are susceptible to prenatal stress but effects differ. There is not any specific vulnerable period of gestation; prenatal stress effects vary for different gestational ages possibly depending on the developmental stage of specific brain areas and circuits, stress system and immune system. Biological correlates in the prenatally stressed offspring are: aberrations in neurodevelopment, neurocognitive function, cerebral processing, functional and structural brain connectivity involving amygdalae and (pre)frontal cortex, changes in hypothalamo-pituitary-adrenal (HPA)-axis and autonomous nervous system.",
"title": ""
},
{
"docid": "c7d11801e1c3a6bd7e32b3ab7ea9767a",
"text": "With the increasing threat of sophisticated attacks on critical infrastructures, it is vital that forensic investigations take place immediately following a security incident. This paper presents an existing SCADA forensic process model and proposes a structured SCADA forensic process model to carry out a forensic investigations. A discussion on the limitations of using traditional forensic investigative processes and the challenges facing forensic investigators. Furthermore, flaws of existing research into providing forensic capability for SCADA systems are examined in detail. The study concludes with an experimentation of a proposed SCADA forensic capability architecture on the Siemens S7 PLC. Modifications to the memory addresses are monitored and recorded for forensic evidence. The collected forensic evidence will be used to aid the reconstruction of a timeline of events, in addition to other collected forensic evidence such as network packet captures.",
"title": ""
},
{
"docid": "16bfea9d5a3f736fe39fdd1f6725b642",
"text": "Tilting and motion are widely used as interaction modalities in smart objects such as wearables and smart phones (e.g., to detect posture or shaking). They are often sensed with accelerometers. In this paper, we propose to embed liquids into 3D printed objects while printing to sense various tilting and motion interactions via capacitive sensing. This method reduces the assembly effort after printing and is a low-cost and easy-to-apply way of extending the input capabilities of 3D printed objects. We contribute two liquid sensing patterns and a practical printing process using a standard dual-extrusion 3D printer and commercially available materials. We validate the method by a series of evaluations and provide a set of interactive example applications.",
"title": ""
}
] |
scidocsrr
|
84346fc7e5952e73411819430795a45b
|
Dynamics of Platform Competition: Exploring the Role of Installed Base, Platform Quality and Consumer Expectations
|
[
{
"docid": "4bfb389e1ae2433f797458ff3fe89807",
"text": "Many if not most markets with network externalities are two-sided. To succeed, platforms in industries such as software, portals and media, payment systems and the Internet, must “get both sides of the market on board ”. Accordingly, platforms devote much attention to their business model, that is to how they court each side while making money overall. The paper builds a model of platform competition with two-sided markets. It unveils the determinants of price allocation and enduser surplus for different governance structures (profit-maximizing platforms and not-for-profit joint undertakings), and compares the outcomes with those under an integrated monopolist and a Ramsey planner.",
"title": ""
}
] |
[
{
"docid": "45a24b15455b98277e0ee49b31b234d0",
"text": "Breakthroughs in genetics and molecular biology in the 1970s and 1980s were heralded as a major technological revolution in medicine that would yield a wave of new drug discoveries. However, some forty years later the expected benefits have not materialized. I question the narrative of biotechnology as a Schumpeterian revolution by comparing it to the academic research paradigm that preceded it, clinical research in hospitals. I analyze these as distinct research paradigms that involve different epistemologies, practices, and institutional loci. I develop the claim that the complexity of biological systems means that clinical research was well adapted to medical innovation, and that the genetics/molecular biology paradigm imposed a predictive logic to search that was less effective at finding new drugs. The paper describes how drug discovery unfolds in each paradigm: in clinical research, discovery originates with observations of human subjects and proceeds through feedback-based learning, whereas in the genetics model, discovery originates with a precisely-defined molecular target; feedback from patients enters late in the process. The paper reviews the post-War institutional history that witnessed the relative decline of clinical research and the rise of genetics and molecular science in the United States bio-medical research landscape. The history provides a contextual narrative to illustrate that, in contrast to the framing of biotechnology as a Schumpeterian revolution, the adoption of biotechnology as a core drug discovery platform was propelled by institutional changes that were largely disconnected from processes of scientific or technological selection. Implications for current medical policy initiatives and translational science are discussed. © 2016 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "eb0eec2fe000511a37e6487ff51ddb68",
"text": "We report on a laboratory study that compares reading from paper to reading on-line. Critical differences have to do with the major advantages paper offers in supporting annotation while reading, quick navigation, and flexibility of spatial layout. These, in turn, allow readers to deepen their understanding of the text, extract a sense of its structure, create a plan for writing, cross-refer to other documents, and interleave reading and writing. We discuss the design implications of these findings for the development of better reading technologies.",
"title": ""
},
{
"docid": "c943fcc6664681d832133dc8739e6317",
"text": "The explosion in online advertisement urges to better estimate the click prediction of ads. For click prediction on single ad impression, we have access to pairwise relevance among elements in an impression, but not to global interaction among key features of elements. Moreover, the existing method on sequential click prediction treats propagation unchangeable for different time intervals. In this work, we propose a novel model, Convolutional Click Prediction Model (CCPM), based on convolution neural network. CCPM can extract local-global key features from an input instance with varied elements, which can be implemented for not only single ad impression but also sequential ad impression. Experiment results on two public large-scale datasets indicate that CCPM is effective on click prediction.",
"title": ""
},
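A rough PyTorch sketch of a convolutional click-prediction model in the spirit of the CCPM abstract above: each categorical field of an impression is embedded, 1-D convolutions run over the sequence of field embeddings, and a pooled representation feeds a click-probability output. The field sizes, embedding dimension and the plain max pooling (standing in for the paper's flexible p-max pooling) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SimpleCCPM(nn.Module):
    def __init__(self, field_sizes, embed_dim=8, channels=16):
        super().__init__()
        self.embeds = nn.ModuleList(nn.Embedding(n, embed_dim) for n in field_sizes)
        self.conv = nn.Sequential(
            nn.Conv1d(embed_dim, channels, kernel_size=3, padding=1), nn.Tanh(),
            nn.AdaptiveMaxPool1d(1))  # stand-in for flexible p-max pooling
        self.fc = nn.Linear(channels, 1)

    def forward(self, x):  # x: (batch, num_fields) of category ids
        e = torch.stack([emb(x[:, i]) for i, emb in enumerate(self.embeds)], dim=2)
        h = self.conv(e).squeeze(-1)  # (batch, channels)
        return torch.sigmoid(self.fc(h)).squeeze(-1)

# Hypothetical impression with 5 categorical fields.
model = SimpleCCPM(field_sizes=[100, 50, 20, 10, 300])
clicks = model(torch.randint(0, 10, (4, 5)))
print(clicks)  # click probabilities for a batch of 4 impressions
```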
{
"docid": "bf7eb592ad9ad5e51e61749174b60d04",
"text": "Solving inverse problems continues to be a challenge in a wide array of applications ranging from deblurring, image inpainting, source separation etc. Most existing techniques solve such inverse problems by either explicitly or implicitly finding the inverse of the model. The former class of techniques require explicit knowledge of the measurement process which can be unrealistic, and rely on strong analytical regularizers to constrain the solution space, which often do not generalize well. The latter approaches have had remarkable success in part due to deep learning, but require a large collection of source-observation pairs, which can be prohibitively expensive. In this paper, we propose an unsupervised technique to solve inverse problems with generative adversarial networks (GANs). Using a pre-trained GAN in the space of source signals, we show that one can reliably recover solutions to under determined problems in a ‘blind’ fashion, i.e., without knowledge of the measurement process. We solve this by making successive estimates on the model and the solution in an iterative fashion. We show promising results in three challenging applications – blind source separation, image deblurring, and recovering an image from its edge map, and perform better than several baselines.",
"title": ""
},
{
"docid": "37a5089b7e9e427d330d4720cdcf00d9",
"text": "3D shape models are naturally parameterized using vertices and faces, i.e., composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D to 3D results in high computational overhead with little additional benefit as most of the geometry information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent ‘geometry images’ representing the shape surface of a category of 3D objects. We then use this consistent representation for category-specific shape surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces allowing it to interpolate between shape orientations and poses, invent new shape surfaces and reconstruct 3D shape surfaces from previously unseen images. Our code is available at https://github.com/sinhayan/surfnet.",
"title": ""
},
{
"docid": "7082e7b9828c316b24f3113cb516a50d",
"text": "The analog voltage-controlled filter used in historical music synthesizers by Moog is modeled using a digital system, which is then compared in terms of audio measurements with the original analog filter. The analog model is mainly borrowed from D'Angelo's previous work. The digital implementation of the filter incorporates a recently proposed antialiasing method. This method enhances the clarity of output signals in the case of large-level input signals, which cause harmonic distortion. The combination of these two ideas leads to a novel digital model, which represents the state of the art in virtual analog musical filters. It is shown that without the antialiasing, the output signals in the nonlinear regime may be contaminated by undesirable spectral components, which are the consequence of aliasing, but that the antialiasing technique suppresses these components sufficiently. Comparison of measurements of the analog and digital filters show that the digital model is accurate within a few dB in the linear regime and has very similar behavior in the nonlinear regime in terms of distortion. The proposed digital filter model can be used as a building block in virtual analog music synthesizers.",
"title": ""
},
{
"docid": "179ea205964d4f6a13ffbfbf501a189c",
"text": "Mangroves are among the most well described and widely studied wetland communities in the world. The greatest threats to mangrove persistence are deforestation and other anthropogenic disturbances that can compromise habitat stability and resilience to sea-level rise. To persist, mangrove ecosystems must adjust to rising sea level by building vertically or become submerged. Mangroves may directly or indirectly influence soil accretion processes through the production and accumulation of organic matter, as well as the trapping and retention of mineral sediment. In this review, we provide a general overview of research on mangrove elevation dynamics, emphasizing the role of the vegetation in maintaining soil surface elevations (i.e. position of the soil surface in the vertical plane). We summarize the primary ways in which mangroves may influence sediment accretion and vertical land development, for example, through root contributions to soil volume and upward expansion of the soil surface. We also examine how hydrological, geomorphological and climatic processes may interact with plant processes to influence mangrove capacity to keep pace with rising sea level. We draw on a variety of studies to describe the important, and often under-appreciated, role that plants play in shaping the trajectory of an ecosystem undergoing change.",
"title": ""
},
{
"docid": "30b74cdc0d4825957b4125c9ecd5cffe",
"text": "Popular Internet sites are under attack all the time from phishers, fraudsters, and spammers. They aim to steal user information and expose users to unwanted spam. The attackers have vast resources at their disposal. They are well-funded, with full-time skilled labor, control over compromised and infected accounts, and access to global botnets. Protecting our users is a challenging adversarial learning problem with extreme scale and load requirements. Over the past several years we have built and deployed a coherent, scalable, and extensible realtime system to protect our users and the social graph. This Immune System performs realtime checks and classifications on every read and write action. As of March 2011, this is 25B checks per day, reaching 650K per second at peak. The system also generates signals for use as feedback in classifiers and other components. We believe this system has contributed to making Facebook the safest place on the Internet for people and their information. This paper outlines the design of the Facebook Immune System, the challenges we have faced and overcome, and the challenges we continue to face.",
"title": ""
},
{
"docid": "34d7f848427052a1fc5f565a24f628ec",
"text": "This is the solutions manual (web-edition) for the book Pattern Recognition and Machine Learning (PRML; published by Springer in 2006). It contains solutions to the www exercises. This release was created September 8, 2009. Future releases with corrections to errors will be published on the PRML web-site (see below). The authors would like to express their gratitude to the various people who have provided feedback on earlier releases of this document. In particular, the \" Bishop Reading Group \" , held in the Visual Geometry Group at the University of Oxford provided valuable comments and suggestions. The authors welcome all comments, questions and suggestions about the solutions as well as reports on (potential) errors in text or formulae in this document; please send any such feedback to",
"title": ""
},
{
"docid": "fa8c3873cf03af8d4950a0e53f877b08",
"text": "The problem of formal likelihood-based (either classical or Bayesian) inference for discretely observed multi-dimensional diffusions is particularly challenging. In principle this involves data-augmentation of the observation data to give representations of the entire diffusion trajectory. Most currently proposed methodology splits broadly into two classes: either through the discretisation of idealised approaches for the continuous-time diffusion setup; or through the use of standard finite-dimensional methodologies discretisation of the diffusion model. The connections between these approaches have not been well-studied. This paper will provide a unified framework bringing together these approaches, demonstrating connections, and in some cases surprising differences. As a result, we provide, for the first time, theoretical justification for the various methods of imputing missing data. The inference problems are particularly challenging for reducible diffusions, and our framework is correspondingly more complex in that case. Therefore we treat the reducible and irreducible cases differently within the paper. Supplementary materials for the article are avilable on line. 1 Overview of likelihood-based inference for diffusions Diffusion processes have gained much popularity as statistical models for observed and latent processes. Among others, their appeal lies in their flexibility to deal with nonlinearity, time-inhomogeneity and heteroscedasticity by specifying two interpretable functionals, their amenability to efficient computations due to their Markov property, and the rich existing mathematical theory about their properties. As a result, they are used as models throughout Science; some book references related with this approach to modeling include Section 5.3 of [1] for physical systems, Section 8.3.3 (in conjunction with Section 6.3) of [12] for systems biology and mass action stochastic kinetics, and Chapter 10 of [27] for interest rates. A mathematically precise specification of a d-dimensional diffusion process V is as the solution of a stochastic differential equation (SDE) of the type: dVs = b(s, Vs; θ1) ds+ σ(s, Vs; θ2) dBs, s ∈ [0, T ] ; (1) where B is an m-dimensional standard Brownian motion, b(·, · ; · ) : R+ ×Rd ×Θ1 → R is the drift and σ(·, · ; · ) : R+ × R × Θ2 → R is the diffusion coefficient. These ICREA and Department of Economics, Universitat Pompeu Fabra, [email protected] Department of Statistics, University of Warwick Department of Statistics and Actuarial Science, University of Iowa, Iowa City, Iowa",
"title": ""
},
{
"docid": "44017678b3da8c8f4271a9832280201e",
"text": "Data warehouses are users driven; that is, they allow end-users to be in control of the data. As user satisfaction is commonly acknowledged as the most useful measurement of system success, we identify the underlying factors of end-user satisfaction with data warehouses and develop an instrument to measure these factors. The study demonstrates that most of the items in classic end-user satisfaction measure are still valid in the data warehouse environment, and that end-user satisfaction with data warehouses depends heavily on the roles and performance of organizational information centers. # 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "c2ade16afaf22ac6cc546134a1227d68",
"text": "In this work we present a novel method for the challenging problem of depth image up sampling. Modern depth cameras such as Kinect or Time-of-Flight cameras deliver dense, high quality depth measurements but are limited in their lateral resolution. To overcome this limitation we formulate a convex optimization problem using higher order regularization for depth image up sampling. In this optimization an an isotropic diffusion tensor, calculated from a high resolution intensity image, is used to guide the up sampling. We derive a numerical algorithm based on a primal-dual formulation that is efficiently parallelized and runs at multiple frames per second. We show that this novel up sampling clearly outperforms state of the art approaches in terms of speed and accuracy on the widely used Middlebury 2007 datasets. Furthermore, we introduce novel datasets with highly accurate ground truth, which, for the first time, enable to benchmark depth up sampling methods using real sensor data.",
"title": ""
},
{
"docid": "0bb26233aa8776c6a0db8f2e65bb207a",
"text": "This paper presents methods for suppressing the slugging phenomenon occurring in multiphase flow. The considered systems include industrial oil production facilities such as gas-lifted wells and flowline risers with low-points. Given the difficulty to maintain sensors in deep locations, a particular emphasis is put on observer-based control design. It appears that, without any upstream pressure sensor, such vailable online 23 March 2012 eywords: lugging tabilization bserver a strategy can stabilize the flow. Besides, given a measurement or estimate of the upstream pressure, we propose a control strategy alternative to the classical techniques. The efficiency of these methods is assessed through experiments on a mid-scaled multiphase flow loop. © 2012 Elsevier Ltd. All rights reserved. However, such a simple controller is not always well suited ultiphase flow",
"title": ""
},
{
"docid": "d927e3b1e9bda244bc7b2ccd56b56ff4",
"text": "The formation of healthy gametes requires pairing of homologous chromosomes (homologs) as a prerequisite for their correct segregation during meiosis. Initially, homolog alignment is promoted by meiotic chromosome movements feeding into intimate homolog pairing by homologous recombination and/or synaptonemal complex formation. Meiotic chromosome movements in the fission yeast, Schizosaccharomyces pombe, depend on astral microtubule dynamics that drag the nucleus through the zygote; known as horsetail movement. The response of microtubule-led meiotic chromosome movements to environmental stresses such as ionizing irradiation (IR) and associated reactive oxygen species (ROS) is not known. Here, we show that, in contrast to budding yeast, the horsetail movement is largely radiation-resistant, which is likely mediated by a potent antioxidant defense. IR exposure of sporulating S. pombe cells induced misrepair and irreparable DNA double strand breaks causing chromosome fragmentation, missegregation and gamete death. Comparing radiation outcome in fission and budding yeast, and studying meiosis with poisoned microtubules indicates that the increased gamete death after IR is innate to fission yeast. Inhibition of meiotic chromosome mobility in the face of IR failed to influence the course of DSB repair, indicating that paralysis of meiotic chromosome mobility in a genotoxic environment is not a universal response among species.",
"title": ""
},
{
"docid": "9ac00559a52851ffd2e33e376dd58b62",
"text": "ARM servers are becoming increasingly common, making server technologies such as virtualization for ARM of growing importance. We present the first study of ARM virtualization performance on server hardware, including multicore measurements of two popular ARM and x86 hypervisors, KVM and Xen. We show how ARM hardware support for virtualization can enable much faster transitions between VMs and the hypervisor, a key hypervisor operation. However, current hypervisor designs, including both Type 1 hypervisors such as Xen and Type 2 hypervisors such as KVM, are not able to leverage this performance benefit for real application workloads. We discuss the reasons why and show that other factors related to hypervisor software design and implementation have a larger role in overall performance. Based on our measurements, we discuss changes to ARM's hardware virtualization support that can potentially bridge the gap to bring its faster VM-to-hypervisor transition mechanism to modern Type 2 hypervisors running real applications. These changes have been incorporated into the latest ARM architecture.",
"title": ""
},
{
"docid": "8d9a55b7d730d9acbff50aef4f55808b",
"text": "Interactions between light and matter can be dramatically modified by concentrating light into a small volume for a long period of time. Gaining control over such interaction is critical for realizing many schemes for classical and quantum information processing, including optical and quantum computing, quantum cryptography, and metrology and sensing. Plasmonic structures are capable of confining light to nanometer scales far below the diffraction limit, thereby providing a promising route for strong coupling between light and matter, as well as miniaturization of photonic circuits. At the same time, however, the performance of plasmonic circuits is limited by losses and poor collection efficiency, presenting unique challenges that need to be overcome for quantum plasmonic circuits to become a reality. In this paper, we survey recent progress in controlling emission from quantum emitters using plasmonic structures, as well as efforts to engineer surface plasmon propagation and design plasmonic circuits using these elements.",
"title": ""
},
{
"docid": "1045117f9e6e204ff51ef67a1aff031f",
"text": "Application of models to data is fraught. Data-generating collaborators often only have a very basic understanding of the complications of collating, processing and curating data. Challenges include: poor data collection practices, missing values, inconvenient storage mechanisms, intellectual property, security and privacy. All these aspects obstruct the sharing and interconnection of data, and the eventual interpretation of data through machine learning or other approaches. In project reporting, a major challenge is in encapsulating these problems and enabling goals to be built around the processing of data. Project overruns can occur due to failure to account for the amount of time required to curate and collate. But to understand these failures we need to have a common language for assessing the readiness of a particular data set. This position paper proposes the use of data readiness levels: it gives a rough outline of three stages of data preparedness and speculates on how formalisation of these levels into a common language for data readiness could facilitate project management.",
"title": ""
},
{
"docid": "79be4c64b46eca3c64bdcfbec12720a9",
"text": "We present several new variations on the theme of nonnegative matrix factorization (NMF). Considering factorizations of the form X = FGT, we focus on algorithms in which G is restricted to containing nonnegative entries, but allowing the data matrix X to have mixed signs, thus extending the applicable range of NMF methods. We also consider algorithms in which the basis vectors of F are constrained to be convex combinations of the data points. This is used for a kernel extension of NMF. We provide algorithms for computing these new factorizations and we provide supporting theoretical analysis. We also analyze the relationships between our algorithms and clustering algorithms, and consider the implications for sparseness of solutions. Finally, we present experimental results that explore the properties of these new methods.",
"title": ""
},
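For orientation alongside the abstract above, here is a small NumPy sketch of baseline multiplicative-update NMF for X ≈ FG^T with nonnegative factors; the semi-NMF, convex-NMF and kernel variants the paper proposes relax or change these constraints and use different update rules, so this is only the starting point they extend.

```python
import numpy as np

def nmf(X, k, iters=200, eps=1e-9):
    """Multiplicative-update NMF: X ~ F @ G.T with F, G >= 0 (Lee-Seung updates)."""
    n, m = X.shape
    rng = np.random.default_rng(0)
    F = rng.random((n, k)) + eps
    G = rng.random((m, k)) + eps
    for _ in range(iters):
        G *= (X.T @ F) / (G @ (F.T @ F) + eps)
        F *= (X @ G) / (F @ (G.T @ G) + eps)
    return F, G

X = np.abs(np.random.default_rng(1).random((20, 12)))
F, G = nmf(X, k=3)
print("reconstruction error:", np.linalg.norm(X - F @ G.T))
```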
{
"docid": "1dc615b299a8a63caa36cd8e36459323",
"text": "Domain adaptation manages to build an effective target classifier or regression model for unlabeled target data by utilizing the well-labeled source data but lying different distributions. Intuitively, to address domain shift problem, it is crucial to learn domain invariant features across domains, and most existing approaches have concentrated on it. However, they often do not directly constrain the learned features to be class discriminative for both source and target data, which is of vital importance for the final classification. Therefore, in this paper, we put forward a novel feature learning method for domain adaptation to construct both domain invariant and class discriminative representations, referred to as DICD. Specifically, DICD is to learn a latent feature space with important data properties preserved, which reduces the domain difference by jointly matching the marginal and class-conditional distributions of both domains, and simultaneously maximizes the inter-class dispersion and minimizes the intra-class scatter as much as possible. Experiments in this paper have demonstrated that the class discriminative properties will dramatically alleviate the cross-domain distribution inconsistency, which further boosts the classification performance. Moreover, we show that exploring both domain invariance and class discriminativeness of the learned representations can be integrated into one optimization framework, and the optimal solution can be derived effectively by solving a generalized eigen-decomposition problem. Comprehensive experiments on several visual cross-domain classification tasks verify that DICD can outperform the competitors significantly.",
"title": ""
},
{
"docid": "58156df07590448d89c2b8d4a46696ad",
"text": "Gene PmAF7DS confers resistance to wheat powdery mildew (isolate Bgt#211 ); it was mapped to a 14.6-cM interval ( Xgwm350 a– Xbarc184 ) on chromosome 7DS. The flanking markers could be applied in MAS breeding. Wheat powdery mildew (Pm) is caused by the biotrophic pathogen Blumeria graminis tritici (DC.) (Bgt). An ongoing threat of breakdown of race-specific resistance to Pm requires a continuous effort to discover new alleles in the wheat gene pool. Developing new cultivars with improved disease resistance is an economically and environmentally safe approach to reduce yield losses. To identify and characterize genes for resistance against Pm in bread wheat we used the (Arina × Forno) RILs population. Initially, the two parental lines were screened with a collection of 61 isolates of Bgt from Israel. Three Pm isolates Bgt#210 , Bgt#211 and Bgt#213 showed differential reactions in the parents: Arina was resistant (IT = 0), whereas Forno was moderately susceptible (IT = −3). Isolate Bgt#211 was then used to inoculate the RIL population. The segregation pattern of plant reactions among the RILs indicates that a single dominant gene controls the conferred resistance. A genetic map of the region containing this gene was assembled with DNA markers and assigned to the 7D physical bin map. The gene, temporarily designated PmAF7DS, was located in the distal region of chromosome arm 7DS. The RILs were also inoculated with Bgt#210 and Bgt#213. The plant reactions to these isolates showed high identity with the reaction to Bgt#211, indicating the involvement of the same gene or closely linked, but distinct single genes. The genomic location of PmAF7DS, in light of other Pm genes on 7DS is discussed.",
"title": ""
}
] |
scidocsrr
|
993e46995cd68116e6a198cfda636f35
|
Certified Defenses for Data Poisoning Attacks
|
[
{
"docid": "f226dccc4a7d83f2869fb3bd37b522e2",
"text": "Poisoning attack is identified as a severe security threat to machine learning algorithms. In many applications, for example, deep neural network (DNN) models collect public data as the inputs to perform re-training, where the input data can be poisoned. Although poisoning attack against support vector machines (SVM) has been extensively studied before, there is still very limited knowledge about how such attack can be implemented on neural networks (NN), especially DNNs. In this work, we first examine the possibility of applying traditional gradient-based method (named as the direct gradient method) to generate poisoned data against NNs by leveraging the gradient of the target model w.r.t. the normal data. We then propose a generative method to accelerate the generation rate of the poisoned data: an auto-encoder (generator) used to generate poisoned data is updated by a reward function of the loss, and the target NN model (discriminator) receives the poisoned data to calculate the loss w.r.t. the normal data. Our experiment results show that the generative method can speed up the poisoned data generation rate by up to 239.38× compared with the direct gradient method, with slightly lower model accuracy degradation. A countermeasure is also designed to detect such poisoning attack methods by checking the loss of the target model.",
"title": ""
},
{
"docid": "53a55e8aa8b3108cdc8d015eabb3476d",
"text": "We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM’s test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM’s decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier’s test error.",
"title": ""
},
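The attack loop in the preceding abstract can be illustrated with a deliberately simplified sketch: the paper derives an analytic gradient of the validation loss through the SVM solution, whereas the toy version below retrains scikit-learn's SVC and uses finite differences on the 0-1 validation error (which is piecewise constant, hence the jitter fallback). The data, step sizes and iteration counts are made up for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy 2-D training/validation data for two classes.
X_tr = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
y_tr = np.array([0] * 40 + [1] * 40)
X_val, y_val = X_tr.copy(), y_tr.copy()

def val_error(x_attack, y_attack=0):
    """Validation error of an SVM retrained with one injected point."""
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(np.vstack([X_tr, x_attack]), np.append(y_tr, y_attack))
    return 1.0 - clf.score(X_val, y_val)

x = rng.normal(0, 0.5, 2)   # initial attack point
step, fd = 0.1, 1e-2
for _ in range(50):         # gradient ascent on the validation error
    grad = np.array([(val_error(x + fd * e) - val_error(x - fd * e)) / (2 * fd)
                     for e in np.eye(2)])
    if np.allclose(grad, 0):  # finite differences can be flat; jitter instead
        x += rng.normal(0, 0.05, 2)
    else:
        x += step * grad / (np.linalg.norm(grad) + 1e-12)
print("crafted attack point:", x, "induced validation error:", val_error(x))
```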
{
"docid": "6042abbb698a8d8be6ea87690db9fbd2",
"text": "Machine learning is used in a number of security related applications such as biometric user authentication, speaker identification etc. A type of causative integrity attack against machine le arning called Poisoning attack works by injecting specially crafted data points in the training data so as to increase the false positive rate of the classifier. In the context of the biometric authentication, this means that more intruders will be classified as valid user, and in case of speaker identification system, user A will be classified user B. In this paper, we examine poisoning attack against SVM and introduce Curie a method to protect the SVM classifier from the poisoning attack. The basic idea of our method is to identify the poisoned data points injected by the adversary and filter them out. Our method is light weight and can be easily integrated into existing systems. Experimental results show that it works very well in filtering out the poisoned data.",
"title": ""
}
] |
[
{
"docid": "d5e573802d6519a8da402f2e66064372",
"text": "Targeted cyberattacks play an increasingly significant role in disrupting the online social and economic model, not to mention the threat they pose to nation-states. A variety of components and techniques come together to bring about such attacks.",
"title": ""
},
{
"docid": "1aa01ca2f1b7f5ea8ed783219fe83091",
"text": "This paper presents NetKit, a modular toolkit for classifica tion in networked data, and a case-study of its application to a collection of networked data sets use d in prior machine learning research. Networked data are relational data where entities are inter connected, and this paper considers the common case where entities whose labels are to be estimated a re linked to entities for which the label is known. NetKit is based on a three-component framewo rk, comprising a local classifier, a relational classifier, and a collective inference procedur . Various existing relational learning algorithms can be instantiated with appropriate choices for the se three components and new relational learning algorithms can be composed by new combinations of c omponents. The case study demonstrates how the toolkit facilitates comparison of differen t learning methods (which so far has been lacking in machine learning research). It also shows how the modular framework allows analysis of subcomponents, to assess which, whether, and when partic ul components contribute to superior performance. The case study focuses on the simple but im portant special case of univariate network classification, for which the only information avai lable is the structure of class linkage in the network (i.e., only links and some class labels are avail ble). To our knowledge, no work previously has evaluated systematically the power of class-li nkage alone for classification in machine learning benchmark data sets. The results demonstrate clea rly th t simple network-classification models perform remarkably well—well enough that they shoul d be used regularly as baseline classifiers for studies of relational learning for networked dat a. The results also show that there are a small number of component combinations that excel, and that different components are preferable in different situations, for example when few versus many la be s are known.",
"title": ""
},
{
"docid": "6d31096c16817f13641b23ae808b0dce",
"text": "In the competitive environment of the internet, retaining and growing one's user base is of major concern to most web services. Furthermore, the economic model of many web services is allowing free access to most content, and generating revenue through advertising. This unique model requires securing user time on a site rather than the purchase of good which makes it crucially important to create new kinds of metrics and solutions for growth and retention efforts for web services. In this work, we address this problem by proposing a new retention metric for web services by concentrating on the rate of user return. We further apply predictive analysis to the proposed retention metric on a service, as a means for characterizing lost customers. Finally, we set up a simple yet effective framework to evaluate a multitude of factors that contribute to user return. Specifically, we define the problem of return time prediction for free web services. Our solution is based on the Cox's proportional hazard model from survival analysis. The hazard based approach offers several benefits including the ability to work with censored data, to model the dynamics in user return rates, and to easily incorporate different types of covariates in the model. We compare the performance of our hazard based model in predicting the user return time and in categorizing users into buckets based on their predicted return time, against several baseline regression and classification methods and find the hazard based approach to be superior.",
"title": ""
},
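A hazard-based return-time model of the kind the abstract describes can be prototyped with the lifelines implementation of Cox's proportional hazards model; the covariates, return times and censoring indicator below are synthetic stand-ins, not the paper's features.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-user features and (possibly censored) days-until-return.
df = pd.DataFrame({
    "sessions_last_week": rng.poisson(3, n),
    "friend_count": rng.integers(0, 200, n),
    "used_mobile": rng.integers(0, 2, n),
})
risk = 0.2 * df["sessions_last_week"] + 0.005 * df["friend_count"]
df["days_to_return"] = rng.exponential(10 / (1 + risk))
df["returned"] = (rng.random(n) < 0.8).astype(int)  # 1 = return observed, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_return", event_col="returned")
cph.print_summary()

# Rank users by predicted hazard; a higher hazard means an earlier expected return.
print(cph.predict_partial_hazard(df.head()))
```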
{
"docid": "8cc3af1b9bb2ed98130871c7d5bae23a",
"text": "BACKGROUND\nAnimal experiments have convincingly demonstrated that prenatal maternal stress affects pregnancy outcome and results in early programming of brain functions with permanent changes in neuroendocrine regulation and behaviour in offspring.\n\n\nAIM\nTo evaluate the existing evidence of comparable effects of prenatal stress on human pregnancy and child development.\n\n\nSTUDY DESIGN\nData sources used included a computerized literature search of PUBMED (1966-2001); Psychlit (1987-2001); and manual search of bibliographies of pertinent articles.\n\n\nRESULTS\nRecent well-controlled human studies indicate that pregnant women with high stress and anxiety levels are at increased risk for spontaneous abortion and preterm labour and for having a malformed or growth-retarded baby (reduced head circumference in particular). Evidence of long-term functional disorders after prenatal exposure to stress is limited, but retrospective studies and two prospective studies support the possibility of such effects. A comprehensive model of putative interrelationships between maternal, placental, and fetal factors is presented.\n\n\nCONCLUSIONS\nApart from the well-known negative effects of biomedical risks, maternal psychological factors may significantly contribute to pregnancy complications and unfavourable development of the (unborn) child. These problems might be reduced by specific stress reduction in high anxious pregnant women, although much more research is needed.",
"title": ""
},
{
"docid": "4b78f107ee628cefaeb80296e4f9ae27",
"text": "On shared-memory systems, Cilk-style work-stealing has been used to effectively parallelize irregular task-graph based applications such as Unbalanced Tree Search (UTS). There are two main difficulties in extending this approach to distributed memory. In the shared memory approach, thieves (nodes without work) constantly attempt to asynchronously steal work from randomly chosen victims until they find work. In distributed memory, thieves cannot autonomously steal work from a victim without disrupting its execution. When work is sparse, this results in performance degradation. In essence, a direct extension of traditional work-stealing to distributed memory violates the work-first principle underlying work-stealing. Further, thieves spend useless CPU cycles attacking victims that have no work, resulting in system inefficiencies in multi-programmed contexts. Second, it is non-trivial to detect active distributed termination (detect that programs at all nodes are looking for work, hence there is no work). This problem is well-studied and requires careful design for good performance. Unfortunately, in most existing languages/frameworks, application developers are forced to implement their own distributed termination detection.\n In this paper, we develop a simple set of ideas that allow work-stealing to be efficiently extended to distributed memory. First, we introduce lifeline graphs: low-degree, low-diameter, fully connected directed graphs. Such graphs can be constructed from k-dimensional hypercubes. When a node is unable to find work after w unsuccessful steals, it quiesces after informing the outgoing edges in its lifeline graph. Quiescent nodes do not disturb other nodes. A quiesced node is reactivated when work arrives from a lifeline and itself shares this work with those of its incoming lifelines that are activated. Termination occurs precisely when computation at all nodes has quiesced. In a language such as X10, such passive distributed termination can be detected automatically using the finish construct -- no application code is necessary.\n Our design is implemented in a few hundred lines of X10. On the binomial tree described in olivier:08}, the program achieve 87% efficiency on an Infiniband cluster of 1024 Power7 cores, with a peak throughput of 2.37 GNodes/sec. It achieves 87% efficiency on a Blue Gene/P with 2048 processors, and a peak throughput of 0.966 GNodes/s. All numbers are relative to single core sequential performance. This implementation has been refactored into a reusable global load balancing framework. Applications can use this framework to obtain global load balance with minimal code changes.\n In summary, we claim: (a) the first formulation of UTS that does not involve application level global termination detection, (b) the introduction of lifeline graphs to reduce failed steals (c) the demonstration of simple lifeline graphs based on k-hypercubes, (d) performance with superior efficiency (or the same efficiency but over a wider range) than published results on UTS. In particular, our framework can deliver the same or better performance as an unrestricted random work-stealing implementation, while reducing the number of attempted steals.",
"title": ""
},
{
"docid": "821cefef9933d6a02ec4b9098f157062",
"text": "Scientists debate whether people grow closer to their friends through social networking sites like Facebook, whether those sites displace more meaningful interaction, or whether they simply reflect existing ties. Combining server log analysis and longitudinal surveys of 3,649 Facebook users reporting on relationships with 26,134 friends, we find that communication on the site is associated with changes in reported relationship closeness, over and above effects attributable to their face-to-face, phone, and email contact. Tie strength increases with both one-on-one communication, such as posts, comments, and messages, and through reading friends' broadcasted content, such as status updates and photos. The effect is greater for composed pieces, such as comments, posts, and messages than for 'one-click' actions such as 'likes.' Facebook has a greater impact on non-family relationships and ties who do not frequently communicate via other channels.",
"title": ""
},
{
"docid": "b73526f1fb0abb4373421994dbd07822",
"text": "in our country around 2.78% of peoples are not able to speak (dumb). Their communications with others are only using the motion of their hands and expressions. We proposed a new technique called artificial speaking mouth for dumb people. It will be very helpful to them for conveying their thoughts to others. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity the artificial mouth is introduced for the dumb peoples. This system is based on the motion sensor. According to dumb people, for every motion they have a meaning. That message is kept in a database. Likewise all templates are kept in the database. In the real time the template database is fed into a microcontroller and the motion sensor is fixed in their hand. For every action the motion sensors get accelerated and give the signal to the microcontroller. The microcontroller matches the motion with the database and produces the speech signal. The output of the system is using the speaker. By properly updating the database the dumb will speak like a normal person using the artificial mouth. The system also includes a text to speech conversion (TTS) block that interprets the matched gestures.",
"title": ""
},
{
"docid": "2abd75766d4875921edd4d6d63d5d617",
"text": "Wireless sensor networks typically consist of a large number of sensor nodes embedded in a physical space. Such sensors are low-power devices that are primarily used for monitoring several physical phenomena, potentially in remote harsh environments. Spatial and temporal dependencies between the readings at these nodes highly exist in such scenarios. Statistical contextual information encodes these spatio-temporal dependencies. It enables the sensors to locally predict their current readings based on their own past readings and the current readings of their neighbors. In this paper, we introduce context-aware sensors. Specifically, we propose a technique for modeling and learning statistical contextual information in sensor networks. Our approach is based on Bayesian classifiers; we map the problem of learning and utilizing contextual information to the problem of learning the parameters of a Bayes classifier, and then making inferences, respectively. We propose a scalable and energy-efficient procedure for online learning of these parameters in-network, in a distributed fashion. We discuss applications of our approach in discovering outliers and detection of faulty sensors, approximation of missing values, and in-network sampling. We experimentally analyze our approach in two applications, tracking and monitoring.",
"title": ""
},
{
"docid": "fdc16a2774921124576c8399de2701d4",
"text": "This paper discusses a method of frequency-shift keying (FSK) demodulation and Manchester-bit decoding using a digital signal processing (DSP) approach. The demodulator is implemented on a single-channel high-speed digital radio board. The board architecture contains a high-speed A/D converter, a digital receiver chip, a host DSP processing chip, and a back-end D/A converter [2]. The demodulator software is booted off an on-board EPROM and run on the DSP chip [3]. The algorithm accepts complex digital baseband data available from the front-end digital receiver chip [2]. The target FSK modulation is assumed to be in the RF range (VHF or UHF signals). A block diagram of the single-channel digital radio is shown in Figure 1 [2].",
"title": ""
},
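A NumPy sketch of the signal chain the abstract outlines: complex-baseband FSK is demodulated with a phase-difference (frequency discriminator) step and the recovered half-bit stream is Manchester-decoded. The sample rate, deviation, bit rate and the IEEE 802.3 polarity convention are assumptions for illustration; a real receiver would add filtering, timing recovery and the board-specific front end.

```python
import numpy as np

FS = 48_000           # sample rate (Hz), assumed
BIT_RATE = 1_200      # Manchester half-bit rate, assumed
DEV = 2_000           # FSK deviation (Hz), assumed
SPB = FS // BIT_RATE  # samples per Manchester half-bit

def fsk_modulate(bits):
    """Complex-baseband FSK: +DEV for 1, -DEV for 0 (used here to test the demod)."""
    freqs = np.repeat(np.where(np.array(bits) == 1, DEV, -DEV), SPB)
    phase = 2 * np.pi * np.cumsum(freqs) / FS
    return np.exp(1j * phase)

def fsk_demodulate(iq):
    """Frequency discriminator: angle of x[n] * conj(x[n-1]) tracks instantaneous frequency."""
    disc = np.angle(iq[1:] * np.conj(iq[:-1]))
    disc = np.concatenate(([disc[0]], disc))  # keep length aligned to SPB
    chips = disc.reshape(-1, SPB)[:, SPB // 4: -SPB // 4].mean(axis=1) > 0
    return chips.astype(int)

def manchester_decode(chips):
    """IEEE 802.3 convention: a 0->1 half-bit pair encodes 1, 1->0 encodes 0."""
    pairs = chips.reshape(-1, 2)
    return [1 if a == 0 and b == 1 else 0 for a, b in pairs]

data = [1, 0, 1, 1, 0, 0, 1, 0]
manchester = [half for bit in data for half in ((0, 1) if bit else (1, 0))]
recovered = manchester_decode(fsk_demodulate(fsk_modulate(manchester)))
print(recovered == data)  # True if the chain round-trips
```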
{
"docid": "99c088268633c19a8c4789c58c4c9aca",
"text": "Executing agile quadrotor maneuvers with cablesuspended payloads is a challenging problem and complications induced by the dynamics typically require trajectory optimization. State-of-the-art approaches often need significant computation time and complex parameter tuning. We present a novel dynamical model and a fast trajectory optimization algorithm for quadrotors with a cable-suspended payload. Our first contribution is a new formulation of the suspended payload behavior, modeled as a link attached to the quadrotor with a combination of two revolute joints and a prismatic joint, all being passive. Differently from state of the art, we do not require the use of hybrid modes depending on the cable tension. Our second contribution is a fast trajectory optimization technique for the aforementioned system. Our model enables us to pose the trajectory optimization problem as a Mathematical Program with Complementarity Constraints (MPCC). Desired behaviors of the system (e.g., obstacle avoidance) can easily be formulated within this framework. We show that our approach outperforms the state of the art in terms of computation speed and guarantees feasibility of the trajectory with respect to both the system dynamics and control input saturation, while utilizing far fewer tuning parameters. We experimentally validate our approach on a real quadrotor showing that our method generalizes to a variety of tasks, such as flying through desired waypoints while avoiding obstacles, or throwing the payload toward a desired target. To the best of our knowledge, this is the first time that three-dimensional, agile maneuvers exploiting the system dynamics have been achieved on quadrotors with a cable-suspended payload. SUPPLEMENTARY MATERIAL This paper is accompanied by a video showcasing the experiments: https://youtu.be/s9zb5MRXiHA",
"title": ""
},
{
"docid": "2ca0c604b449e1495bd57d96381e0e1f",
"text": "The data ̄ow program graph execution model, or data ̄ow for short, is an alternative to the stored-program (von Neumann) execution model. Because it relies on a graph representation of programs, the strengths of the data ̄ow model are very much the complements of those of the stored-program one. In the last thirty or so years since it was proposed, the data ̄ow model of computation has been used and developed in very many areas of computing research: from programming languages to processor design, and from signal processing to recon®gurable computing. This paper is a review of the current state-of-the-art in the applications of the data ̄ow model of computation. It focuses on three areas: multithreaded computing, signal processing and recon®gurable computing. Ó 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "24041042e1216a3bbf6aab89fa6f0b93",
"text": "With the increasing demand for renewable energy, distributed power included in fuel cells have been studied and developed as a future energy source. For this system, a power conversion circuit is necessary to interface the generated power to the utility. In many cases, a high step-up DC/DC converter is needed to boost low input voltage to high voltage output. Conventional methods using cascade DC/DC converters cause extra complexity and higher cost. The conventional topologies to get high output voltage use flyback DC/DC converters. They have the leakage components that cause stress and loss of energy that results in low efficiency. This paper presents a high boost converter with a voltage multiplier and a coupled inductor. The secondary voltage of the coupled inductor is rectified using a voltage multiplier. High boost voltage is obtained with low duty cycle. Theoretical analysis and experimental results verify the proposed solutions using a 300 W prototype.",
"title": ""
},
{
"docid": "751644f811112a4ac7f1ead5f456056b",
"text": "Camera-based text processing has attracted considerable attention and numerous methods have been proposed. However, most of these methods have focused on the scene text detection problem and relatively little work has been performed on camera-captured document images. In this paper, we present a text-line detection algorithm for camera-captured document images, which is an essential step toward document understanding. In particular, our method is developed by incorporating state estimation (an extension of scale selection) into a connected component (CC)-based framework. To be precise, we extract CCs with the maximally stable extremal region algorithm and estimate the scales and orientations of CCs from their projection profiles. Since this state estimation facilitates a merging process (bottom-up clustering) and provides a stopping criterion, our method is able to handle arbitrarily oriented text-lines and works robustly for a range of scales. Finally, a text-line/non-text-line classifier is trained and non-text candidates (e.g., background clutters) are filtered out with the classifier. Experimental results show that the proposed method outperforms conventional methods on a standard dataset and works well for a new challenging dataset.",
"title": ""
},
{
"docid": "efba71635ca38b4588d3e4200d655fee",
"text": "BACKGROUND\nCircumcisions and cesarian sections are common procedures. Although complications to the newborn child fortunately are rare, it is important to emphasize the potential significance of this problem and its frequent iatrogenic etiology. The authors present 7 cases of genitourinary trauma in newborns, including surgical management and follow-up.\n\n\nMETHODS\nThe authors relate 7 recent cases of genitourinary trauma in newborns from a children's hospital in a major metropolitan area.\n\n\nRESULTS\nCase 1 and 2: Two infants suffered degloving injuries to both the prepuce and penile shaft from a Gomco clamp. Successful full-thickness skin grafting using the previously excised foreskin was used in 1 child. Case 3, 4, and 5: A Mogen clamp caused glans injuries in 3 infants. In 2, hemorrhage from the severed glans was controlled with topical epinephrine; the glans healed with a flattened appearance. Another infant sustained a laceration ventrally, requiring a delayed modified meatal advancement glanoplasty to correct the injury. Case 6: A male infant suffered a ventral slit and division of the ventral urethra before placement of a Gomco clamp. Formal hypospadias repair was required. Case 7: An emergent cesarean section resulted in a grade 4-perineal laceration in a female infant. The vaginal tear caused by the surgeon's finger, extended up to the posterior insertion of the cervix and into the rectum. The infant successfully underwent an emergent multilayered repair.\n\n\nCONCLUSIONS\nGenitourinary trauma in the newborn is rare but often necessitates significant surgical intervention. Circumcision often is the causative event. There has been only 1 prior report of a perineal injury similar to case 7, with a fatal outcome.",
"title": ""
},
{
"docid": "d0f14357e0d675c99d4eaa1150b9c55e",
"text": "Purpose – The purpose of this research is to investigate if, and in that case, how and what the egovernment field can learn from user participation concepts and theories in general IS research. We aim to contribute with further understanding of the importance of citizen participation and involvement within the e-government research body of knowledge and when developing public eservices in practice. Design/Methodology/Approach – The analysis in the article is made from a comparative, qualitative case study of two e-government projects. Three analysis themes are induced from the literature review; practice of participation, incentives for participation, and organization of participation. These themes are guiding the comparative analysis of our data with a concurrent openness to interpretations from the field. Findings – The main results in this article are that the e-government field can get inspiration and learn from methods and approaches in traditional IS projects concerning user participation, but in egovernment we also need methods to handle the challenges that arise when designing public e-services for large, heterogeneous user groups. Citizen engagement cannot be seen as a separate challenge in egovernment, but rather as an integrated part of the process of organizing, managing, and performing egovernment projects. Our analysis themes of participation generated from literature; practice, incentives and organization can be used in order to highlight, analyze, and discuss main issues regarding the challenges of citizen participation within e-government. This is an important implication based on our study that contributes both to theory on and practice of e-government. Practical implications – Lessons to learn from this study concern that many e-government projects have a public e-service as one outcome and an internal e-administration system as another outcome. A dominating internal, agency perspective in such projects might imply that citizens as the user group of the e-service are only seen as passive receivers of the outcome – not as active participants in the development. By applying the analysis themes, proposed in this article, citizens as active participants can be thoroughly discussed when initiating (or evaluating) an e-government project. Originality/value – This article addresses challenges regarding citizen participation in e-government development projects. User participation is well-researched within the IS discipline, but the egovernment setting implies new challenges, that are not explored enough.",
"title": ""
},
{
"docid": "56a4a9b20391f13e7ced38586af9743b",
"text": "The most common type of nasopharyngeal tumor is nasopharyngeal carcinoma. The etiology is multifactorial with race, genetics, environment and Epstein-Barr virus (EBV) all playing a role. While rare in Caucasian populations, it is one of the most frequent nasopharyngeal cancers in Chinese, and has endemic clusters in Alaskan Eskimos, Indians, and Aleuts. Interestingly, as native-born Chinese migrate, the incidence diminishes in successive generations, although still higher than the native population. EBV is nearly always present in NPC, indicating an oncogenic role. There are raised antibodies, higher titers of IgA in patients with bulky (large) tumors, EBERs (EBV encoded early RNAs) in nearly all tumor cells, and episomal clonal expansion (meaning the virus entered the tumor cell before clonal expansion). Consequently, the viral titer can be used to monitor therapy or possibly as a diagnostic tool in the evaluation of patients who present with a metastasis from an unknown primary. The effect of environmental carcinogens, especially those which contain a high levels of volatile nitrosamines are also important in the etiology of NPC. Chinese eat salted fish, specifically Cantonese-style salted fish, and especially during early life. Perhaps early life (weaning period) exposure is important in the ‘‘two-hit’’ hypothesis of cancer development. Smoking, cooking, and working under poor ventilation, the use of nasal oils and balms for nose and throat problems, and the use of herbal medicines have also been implicated but are in need of further verification. Likewise, chemical fumes, dusts, formaldehyde exposure, and radiation have all been implicated in this complicated disorder. Various human leukocyte antigens (HLA) are also important etiologic or prognostic indicators in NPC. While histocompatibility profiles of HLA-A2, HLA-B17 and HLA-Bw46 show increased risk for developing NPC, there is variable expression depending on whether they occur alone or jointly, further conferring a variable prognosis (B17 is associated with a poor and A2B13 with a good prognosis, respectively).",
"title": ""
},
{
"docid": "320925a50d9fe1e4f76180b7d141dd27",
"text": "extraction from documents J. Fan A. Kalyanpur D. C. Gondek D. A. Ferrucci Access to a large amount of knowledge is critical for success at answering open-domain questions for DeepQA systems such as IBM Watsoni. Formal representation of knowledge has the advantage of being easy to reason with, but acquisition of structured knowledge in open domains from unstructured data is often difficult and expensive. Our central hypothesis is that shallow syntactic knowledge and its implied semantics can be easily acquired and can be used in many areas of a question-answering system. We take a two-stage approach to extract the syntactic knowledge and implied semantics. First, shallow knowledge from large collections of documents is automatically extracted. Second, additional semantics are inferred from aggregate statistics of the automatically extracted shallow knowledge. In this paper, we describe in detail what kind of shallow knowledge is extracted, how it is automatically done from a large corpus, and how additional semantics are inferred from aggregate statistics. We also briefly discuss the various ways extracted knowledge is used throughout the IBM DeepQA system.",
"title": ""
},
{
"docid": "8bb5a38908446ca4e6acb4d65c4c817c",
"text": "Column-oriented database systems have been a real game changer for the industry in recent years. Highly tuned and performant systems have evolved that provide users with the possibility of answering ad hoc queries over large datasets in an interactive manner. In this paper we present the column-oriented datastore developed as one of the central components of PowerDrill. It combines the advantages of columnar data layout with other known techniques (such as using composite range partitions) and extensive algorithmic engineering on key data structures. The main goal of the latter being to reduce the main memory footprint and to increase the efficiency in processing typical user queries. In this combination we achieve large speed-ups. These enable a highly interactive Web UI where it is common that a single mouse click leads to processing a trillion values in the underlying dataset.",
"title": ""
},
{
"docid": "0793d82c1246c777dce673d8f3146534",
"text": "CONTEXT\nMedical schools are known to be stressful environments for students and hence medical students have been believed to experience greater incidences of depression than others. We evaluated the global prevalence of depression amongst medical students, as well as epidemiological, psychological, educational and social factors in order to identify high-risk groups that may require targeted interventions.\n\n\nMETHODS\nA systematic search was conducted in online databases for cross-sectional studies examining prevalences of depression among medical students. Studies were included only if they had used standardised and validated questionnaires to evaluate the prevalence of depression in a group of medical students. Random-effects models were used to calculate the aggregate prevalence and pooled odds ratios (ORs). Meta-regression was carried out when heterogeneity was high.\n\n\nRESULTS\nFindings for a total of 62 728 medical students and 1845 non-medical students were pooled across 77 studies and examined. Our analyses demonstrated a global prevalence of depression amongst medical students of 28.0% (95% confidence interval [CI] 24.2-32.1%). Female, Year 1, postgraduate and Middle Eastern medical students were more likely to be depressed, but the differences were not statistically significant. By year of study, Year 1 students had the highest rates of depression at 33.5% (95% CI 25.2-43.1%); rates of depression then gradually decreased to reach 20.5% (95% CI 13.2-30.5%) at Year 5. This trend represented a significant decline (B = - 0.324, p = 0.005). There was no significant difference in prevalences of depression between medical and non-medical students. The overall mean frequency of suicide ideation was 5.8% (95% CI 4.0-8.3%), but the mean proportion of depressed medical students who sought treatment was only 12.9% (95% CI 8.1-19.8%).\n\n\nCONCLUSIONS\nDepression affects almost one-third of medical students globally but treatment rates are relatively low. The current findings suggest that medical schools and health authorities should offer early detection and prevention programmes, and interventions for depression amongst medical students before graduation.",
"title": ""
}
] |
scidocsrr
|
301bcd4222a0c21bb75b8e4714797acf
|
Visual data mining in software archives
|
[
{
"docid": "d45e3bace2d24dd2b33b13328eacc499",
"text": "A sequential pattern in data mining is a finite series of elements such as A → B → C → D where A, B, C, and D are elements of the same domain. The mining of sequential patterns is designed to find patterns of discrete events that frequently happen in the same arrangement along a timeline. Like association and clustering, the mining of sequential patterns is among the most popular knowledge discovery techniques that apply statistical measures to extract useful information from large datasets. As our computers become more powerful, we are able to mine bigger datasets and obtain hundreds of thousands of sequential patterns in full detail. With this vast amount of data, we argue that neither data mining nor visualization by itself can manage the information and reflect the knowledge effectively. Subsequently, we apply visualization to augment data mining in a study of sequential patterns in large text corpora. The result shows that we can learn more and more quickly in an integrated visual datamining environment.",
"title": ""
}
] |
[
{
"docid": "074fd9d0c7bd9e5f31beb77c140f61d0",
"text": "In this chapter, we examine the self and identity by considering the different conditions under which these are affected by the groups to which people belong. From a social identity perspective we argue that group commitment, on the one hand, and features of the social context, on the other hand, are crucial determinants of central identity concerns. We develop a taxonomy of situations to reflect the different concerns and motives that come into play as a result of threats to personal and group identity and degree of commitment to the group. We specify for each cell in this taxonomy how these issues of self and social identity impinge upon a broad variety of responses at the perceptual, affective, and behavioral level.",
"title": ""
},
{
"docid": "20b6d457acf80a2171880ca312def57f",
"text": "Recent evidence points to a possible overlap in the neural systems underlying the distressing experience that accompanies physical pain and social rejection (Eisenberger et al., 2003). The present study tested two hypotheses that stem from this suggested overlap, namely: (1) that baseline sensitivity to physical pain will predict sensitivity to social rejection and (2) that experiences that heighten social distress will heighten sensitivity to physical pain as well. In the current study, participants' baseline cutaneous heat pain unpleasantness thresholds were assessed prior to the completion of a task that manipulated feelings of social distress. During this task, participants played a virtual ball-tossing game, allegedly with two other individuals, in which they were either continuously included (social inclusion condition) or they were left out of the game by either never being included or by being overtly excluded (social rejection conditions). At the end of the game, three pain stimuli were delivered and participants rated the unpleasantness of each. Results indicated that greater baseline sensitivity to pain (lower pain unpleasantness thresholds) was associated with greater self-reported social distress in response to the social rejection conditions. Additionally, for those in the social rejection conditions, greater reports of social distress were associated with greater reports of pain unpleasantness to the thermal stimuli delivered at the end of the game. These results provide additional support for the hypothesis that pain distress and social distress share neurocognitive substrates. Implications for clinical populations are discussed.",
"title": ""
},
{
"docid": "45252c6ffe946bf0f9f1984f60ffada6",
"text": "Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates. In this work we reparameterize discrete variational auto-encoders using the Gumbel-Max perturbation model that represents the Gibbs distribution using the arg max of randomly perturbed encoder. We subsequently apply the direct loss minimization technique to propagate gradients through the reparameterized arg max. The resulting gradient is estimated by the difference of the encoder gradients that are evaluated in two arg max predictions.",
"title": ""
},
{
"docid": "076adb210e56d34225d302baa0183c1c",
"text": "It has long been recognised that sleep is more than a passive recovery state or mere inconvenience. Sleep plays an active role in maintaining a healthy body, brain and mind, and numerous studies suggest that sleep is integral to learning and memory. The importance of sleep for cognition is clear in studies of those experiencing sleep deprivation, who show consistent deficits across cognitive domains in relation to nonsleep-deprived controls, particularly in tasks that tax attention or executive functions (1). Poor sleep has been associated with poor grades, and academic performance suffers when sleep is sacrificed for extra study (2). Thus, it is perhaps unsurprising that children with developmental disorders of learning and cognition often suffer from sleep disturbances. These have been well documented in children with autism and attention-deficit/hyperactivity disorder (ADHD), where sleep problems can be particularly severe. However, a growing body of evidence suggests that sleep can be atypical across a spectrum of learning disorders. Understanding the ways in which sleep is affected in different developmental disorders can not only support the design and implementation of effective assessment and remediation programs, but can also inform theories of how sleep supports cognition in typical development. The study by Carotenuto et al. (3) in this issue makes a valuable contribution to this literature by looking at sleep disturbances in children with developmental dyslexia. Dyslexia is the most common specific learning disorder, affecting around one in 10 children in our classrooms. It is characterised by difficulties with reading and spelling and is primarily caused by a deficit in phonological processing. However, dyslexia often co-occurs with other developmental disorders, such as ADHD and specific language impairment, and there can be striking heterogeneity between children. This has led to the suggestion that dyslexia can result from complex combinations of multiple risk factors and impairments (4). Consequently, research attention is turning towards the wider constellation of subclinical difficulties often experienced by children with dyslexia, including potential sleep problems. Two preliminary studies have found differences in the sleep architecture of children with dyslexia in comparison with typical peers, using overnight sleep EEG recordings (polysomnography) (5,6). Notably, children with dyslexia showed unusually long periods of slow wave sleep and an increased number of sleep spindles. Slow wave sleep and spindles are related to language learning, most notably through promoting the consolidation of new vocabulary (7). Children with dyslexia have pronounced deficits in learning new oral vocabulary, providing a plausible theoretical link between sleep disturbances and language difficulties. If sleep problems do in fact exacerbate the learning difficulties associated with dyslexia, as well as impacting on daily cognitive function, this could have important implications for intervention and support programs. However, an important first step is to establish the nature and extent of sleep disturbances in dyslexia. Previous studies (5,6) have used small samples (N = <30) and examined a large array of sleep parameters on a small number of unusual nights (where children were wearing sleep recording equipment), as opposed to looking at global patterns over time. 
As such, how representative these findings are is questionable, and consequently these studies should be viewed as hypothesis-generating rather than hypothesis-testing. Carotenuto et al. (3) address some of these concerns, administering questionnaire measures of sleep habits to the parents of 147 children with dyslexia and 766 children without dyslexia, aged 8–12 years. A sample of this size allows for a robust analysis of sleep characteristics. Therefore, their findings that children with dyslexia showed higher rates of several markers of sleep disorders lend significant weight to suggestions that dyslexia might be associated with an increased risk for sleep problems. Importantly, the sleep questionnaire used by Carotenuto et al. (3) allows for a breakdown of sleep disturbances. It is interesting to note that they found the greatest difficulties in initiating and maintaining sleep, sleep breathing disorders and disorders of arousal. This closely mirrors the types of sleep problem documented in children with ADHD (8). While Carotenuto et al. (3) took care to exclude children with comorbid diagnoses, many children with dyslexia show subtle features of attention disorders that do not reach clinical thresholds. Future studies that can establish whether sleep disturbances are associated with subclinical attention problems or dyslexia per se will be particularly informative for understanding which cognitive skills most critically relate to sleep. This is also vital information for",
"title": ""
},
{
"docid": "bd21815804115f2c413265660a78c203",
"text": "Outsourcing, internationalization, and complexity characterize today's aerospace supply chains, making aircraft manufacturers structurally dependent on each other. Despite several complexity-related supply chain issues reported in the literature, aerospace supply chain structure has not been studied due to a lack of empirical data and suitable analytical toolsets for studying system structure. In this paper, we assemble a large-scale empirical data set on the supply network of Airbus and apply the new science of networks to analyze how the industry is structured. Our results show that the system under study is a network, formed by communities connected by hub firms. Hub firms also tend to connect to each other, providing cohesiveness, yet making the network vulnerable to disruptions in them. We also show how network science can be used to identify firms that are operationally critical and that are key to disseminating information.",
"title": ""
},
{
"docid": "3e1023f2ff554d7cb3e5e02ba4181237",
"text": "Convolutional neural network (CNN) offers significant accuracy in image detection. To implement image detection using CNN in the Internet of Things (IoT) devices, a streaming hardware accelerator is proposed. The proposed accelerator optimizes the energy efficiency by avoiding unnecessary data movement. With unique filter decomposition technique, the accelerator can support arbitrary convolution window size. In addition, max-pooling function can be computed in parallel with convolution by using separate pooling unit, thus achieving throughput improvement. A prototype accelerator was implemented in TSMC 65-nm technology with a core size of 5 mm2. The accelerator can support major CNNs and achieve 152GOPS peak throughput and 434GOPS/W energy efficiency at 350 mW, making it a promising hardware accelerator for intelligent IoT devices.",
"title": ""
},
{
"docid": "f765a0c29c6d553ae1c7937b48416e9c",
"text": "Although the topic of psychological well-being has generated considerable research, few studies have investigated how adults themselves define positive functioning. To probe their conceptions of well-being, interviews were conducted with a community sample of 171 middle-aged (M = 52.5 years, SD = 8.7) and older (M = 73.5 years, SD = 6.1) men and women. Questions pertained to general life evaluations, past life experiences, conceptions of well-being, and views of the aging process. Responses indicated that both age groups and sexes emphasized an \"others orientation\" (being a caring, compassionate person, and having good relationships) in defining well-being. Middle-aged respondents stressed self-confidence, self-acceptance, and self-knowledge, whereas older persons cited accepting change as an important quality of positive functioning. In addition to attention to positive relations with others as an index of well-being, lay views pointed to a sense of humor, enjoying life, and accepting change as criteria of successful aging.",
"title": ""
},
{
"docid": "52f4b881941ba82bd8505aca6326821c",
"text": "Labview and National Instruments hardware is used to measure, analyze and solve multiple Industry problems, mostly in small mechatronics systems or fixed manipulators. myRIO have been used worldwide in the last few years to provide a reliable data acquisition. While in Industry and in Universities myRIO is vastly used, Arduino is still the most common tool for hobby or student based projects, therefore Mobile Robotics platforms integrate Arduino more often than myRIO. In this study, an overall hardware description will be presented, together with the software designed for autonomous and remote navigation in unknown scenarios. The designed robot was used in EuroSkills 2016 competition in Sweden.",
"title": ""
},
{
"docid": "e0db3c5605ea2ea577dda7d549e837ae",
"text": "This paper presents a system based on new operators for handling sets of propositional clauses represented by means of ZBDDs. The high compression power of such data structures allows efficient encodings of structured instances. A specialized operator for the distribution of sets of clauses is introduced and used for performing multi-resolution on clause sets. Cut eliminations between sets of clauses of exponential size may then be performed using polynomial size data structures. The ZRES system, a new implementation of the Davis-Putnam procedure of 1960, solves two hard problems for resolution, that are currently out of the scope of the best SAT provers.",
"title": ""
},
{
"docid": "7709c76755f61920182653774721a47b",
"text": "Game-based learning (GBL) combines pedagogy and interactive entertainment to create a virtual learning environment in an effort to motivate and regain the interest of a new generation of ‘digital native’ learners. However, this approach is impeded by the limited availability of suitable ‘serious’ games and high-level design tools to enable domain experts to develop or customise serious games. Model Driven Engineering (MDE) goes some way to provide the techniques required to generate a wide variety of interoperable serious games software solutions whilst encapsulating and shielding the technicality of the full software development process. In this paper, we present our Game Technology Model (GTM) which models serious game software in a manner independent of any hardware or operating platform specifications for use in our Model Driven Serious Game Development Framework.",
"title": ""
},
{
"docid": "e49515145975eadccc20b251d56f0140",
"text": "High mortality of nestling cockatiels (Nymphicus hollandicus) was observed in one breeding flock in Slovakia. The nestling mortality affected 50% of all breeding pairs. In general, all the nestlings in affected nests died. Death occurred suddenly in 4to 6-day-old birds, most of which had full crops. No feather disorders were diagnosed in this flock. Two dead nestlings were tested by nested PCR for the presence of avian polyomavirus (APV) and Chlamydophila psittaci and by single-round PCR for the presence of beak and feather disease virus (BFDV). After the breeding season ended, a breeding pair of cockatiels together with their young one and a fledgling budgerigar (Melopsittacus undulatus) were examined. No clinical alterations were observed in these birds. Haemorrhages in the proventriculus and irregular foci of yellow liver discoloration were found during necropsy in the young cockatiel and the fledgling budgerigar. Microscopy revealed liver necroses and acute haemolysis in the young cockatiel and confluent liver necroses and heart and kidney haemorrhages in the budgerigar. Two dead cockatiel nestlings, the young cockatiel and the fledgling budgerigar were tested positive for APV, while the cockatiel adults were negative. The presence of BFDV or Chlamydophila psittaci DNA was detected in none of the birds. The specificity of PCR was confirmed by the sequencing of PCR products amplified from the samples from the young cockatiel and the fledgling budgerigar. The sequences showed 99.6–100% homology with the previously reported sequences. To our knowledge, this is the first report of APV infection which caused a fatal disease in parent-raised cockatiel nestlings and merely subclinical infection in budgerigar nestlings.",
"title": ""
},
{
"docid": "7f84e215df3d908249bde3be7f2b3cab",
"text": "With the emergence of ever-growing advanced vehicular applications, the challenges to meet the demands from both communication and computation are increasingly prominent. Without powerful communication and computational support, various vehicular applications and services will still stay in the concept phase and cannot be put into practice in the daily life. Thus, solving this problem is of great importance. The existing solutions, such as cellular networks, roadside units (RSUs), and mobile cloud computing, are far from perfect because they highly depend on and bear the cost of additional infrastructure deployment. Given tremendous number of vehicles in urban areas, putting these underutilized vehicular resources into use offers great opportunity and value. Therefore, we conceive the idea of utilizing vehicles as the infrastructures for communication and computation, named vehicular fog computing (VFC), which is an architecture that utilizes a collaborative multitude of end-user clients or near-user edge devices to carry out communication and computation, based on better utilization of individual communication and computational resources of each vehicle. By aggregating abundant resources of individual vehicles, the quality of services and applications can be enhanced greatly. In particular, by discussing four types of scenarios of moving and parked vehicles as the communication and computational infrastructures, we carry on a quantitative analysis of the capacities of VFC. We unveil an interesting relationship among the communication capability, connectivity, and mobility of vehicles, and we also find out the characteristics about the pattern of parking behavior, which benefits from the understanding of utilizing the vehicular resources. Finally, we discuss the challenges and open problems in implementing the proposed VFC system as the infrastructures. Our study provides insights for this novel promising paradigm, as well as research topics about vehicular information infrastructures.",
"title": ""
},
{
"docid": "77f5216ede8babf4fb3b2bcbfc9a3152",
"text": "Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks is described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling.",
"title": ""
},
{
"docid": "515e2b726f0e5e7ceb5938fa5d917694",
"text": "Text preprocessing and segmentation are critical tasks in search and text mining applications. Due to the huge amount of documents that are exclusively presented in PDF format, most of the Data Mining (DM) and Information Retrieval (IR) systems must extract content from the PDF files. In some occasions this is a difficult task: the result of the extraction process from a PDF file is plain text, and it should be returned in the same order as a human would read the original PDF file. However, current tools for PDF text extraction fail in this objective when working with complex documents with multiple columns. For instance, this is the case of official government bulletins with legal information. In this task, it is mandatory to get correct and ordered text as a result of the application of the PDF extractor. It is very usual that a legal article in a document refers to a previous article and they should be offered in the right sequential order. To overcome these difficulties we have designed a new method for extraction of text in PDFs that simulates the human reading order. We evaluated our method and compared it against other PDF extraction tools and algorithms. Evaluation of our approach shows that it significantly outperforms the results of the existing tools and algorithms.",
"title": ""
},
{
"docid": "70a7aa831b2036a50de1751ed1ace6d9",
"text": "Short stature and later maturation of youth artistic gymnasts are often attributed to the effects of intensive training from a young age. Given limitations of available data, inadequate specification of training, failure to consider other factors affecting growth and maturation, and failure to address epidemiological criteria for causality, it has not been possible thus far to establish cause-effect relationships between training and the growth and maturation of young artistic gymnasts. In response to this ongoing debate, the Scientific Commission of the International Gymnastics Federation (FIG) convened a committee to review the current literature and address four questions: (1) Is there a negative effect of training on attained adult stature? (2) Is there a negative effect of training on growth of body segments? (3) Does training attenuate pubertal growth and maturation, specifically, the rate of growth and/or the timing and tempo of maturation? (4) Does training negatively influence the endocrine system, specifically hormones related to growth and pubertal maturation? The basic information for the review was derived from the active involvement of committee members in research on normal variation and clinical aspects of growth and maturation, and on the growth and maturation of artistic gymnasts and other youth athletes. The committee was thus thoroughly familiar with the literature on growth and maturation in general and of gymnasts and young athletes. Relevant data were more available for females than males. Youth who persisted in the sport were a highly select sample, who tended to be shorter for chronological age but who had appropriate weight-for-height. Data for secondary sex characteristics, skeletal age and age at peak height velocity indicated later maturation, but the maturity status of gymnasts overlapped the normal range of variability observed in the general population. Gymnasts as a group demonstrated a pattern of growth and maturation similar to that observed among short-, normal-, late-maturing individuals who were not athletes. Evidence for endocrine changes in gymnasts was inadequate for inferences relative to potential training effects. Allowing for noted limitations, the following conclusions were deemed acceptable: (1) Adult height or near adult height of female and male artistic gymnasts is not compromised by intensive gymnastics training. (2) Gymnastics training does not appear to attenuate growth of upper (sitting height) or lower (legs) body segment lengths. (3) Gymnastics training does not appear to attenuate pubertal growth and maturation, neither rate of growth nor the timing and tempo of the growth spurt. (4) Available data are inadequate to address the issue of intensive gymnastics training and alterations within the endocrine system.",
"title": ""
},
{
"docid": "04065494023ed79211af3ba0b5bc4c7e",
"text": "The glucagon-like peptides include glucagon, GLP-1, and GLP-2, and exert diverse actions on nutrient intake, gastrointestinal motility, islet hormone secretion, cell proliferation and apoptosis, nutrient absorption, and nutrient assimilation. GIP, a related member of the glucagon peptide superfamily, also regulates nutrient disposal via stimulation of insulin secretion. The actions of these peptides are mediated by distinct members of the glucagon receptor superfamily of G protein-coupled receptors. These receptors exhibit unique patterns of tissue-specific expression, exhibit considerable amino acid sequence identity, and share similar structural and functional properties with respect to ligand binding and signal transduction. This article provides an overview of the biology of these receptors with an emphasis on understanding the unique actions of glucagon-related peptides through studies of the biology of their cognate receptors.",
"title": ""
},
{
"docid": "81667ba5e59bd04d979b2206b54b5b32",
"text": "Parallelism is an important rhetorical device. We propose a machine learning approach for automated sentence parallelism identification in student essays. We b uild an essay dataset with sentence level parallelism annotated. We derive features by combining gen eralized word alignment strategies and the alignment measures between word sequences. The experiment al r sults show that sentence parallelism can be effectively identified with a F1 score of 82% at pair-wise level and 72% at parallelism chunk l evel. Based on this approach, we automatically identify sentence parallelism in more than 2000 student essays and study the correlation between the use of sentence parall elism and the types and quality of essays.",
"title": ""
}
] |
scidocsrr
|
8a919d443345e198dd9c43fcac05a358
|
A lightweight anomaly detection framework for medical wireless sensor networks
|
[
{
"docid": "4b54527aa8554eae373e4b19e6774467",
"text": "In this paper, we proposed an integrated biometric-based security framework for wireless body area networks, which takes advantage of biometric features shared by body sensors deployed at different positions of a person's body. The data communications among these sensors are secured via the proposed authentication and selective encryption schemes that only require low computational power and less resources (e.g., battery and bandwidth). Specifically, a wavelet-domain Hidden Markov Model (HMM) classification is utilized by considering the non-Gaussian statistics of ECG signals for accurate authentication. In addition, the biometric information such as ECG parameters is selected as the biometric key for the encryption in the framework. Our experimental results demonstrated that the proposed approach can achieve more accurate authentication performance without extra requirements of key distribution and strict time synchronization.",
"title": ""
}
] |
[
{
"docid": "5090070d6d928b83bd22d380f162b0a6",
"text": "The Federal Aviation Administration (FAA) has been increasing the National Airspace System (NAS) capacity to accommodate the predicted rapid growth of air traffic. One method to increase the capacity is reducing air traffic controller workload so that they can handle more air traffic. It is crucial to measure the impact of the increasing future air traffic on controller workload. Our experimental data show a linear relationship between the number of aircraft in the en route center sector and controllers’ perceived workload. Based on the extensive range of aircraft count from 14 to 38 in the experiment, we can predict en route center controllers working as a team of Radar and Data controllers with the automation tools available in the our experiment could handle up to about 28 aircraft. This is 33% more than the 21 aircraft that en route center controllers typically handle in a busy sector.",
"title": ""
},
{
"docid": "481931c78a24020a02245075418a26c3",
"text": "Bayesian optimization has been successful at global optimization of expensiveto-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to find good solutions with fewer objective function evaluations. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledgegradient (d-KG), which is one-step Bayes-optimal, asymptotically consistent, and provides greater one-step value of information than in the derivative-free setting. d-KG accommodates noisy and incomplete derivative information, comes in both sequential and batch forms, and can optionally reduce the computational cost of inference through automatically selected retention of a single directional derivative. We also compute the d-KG acquisition function and its gradient using a novel fast discretization-free technique. We show d-KG provides state-of-the-art performance compared to a wide range of optimization procedures with and without gradients, on benchmarks including logistic regression, deep learning, kernel learning, and k-nearest neighbors.",
"title": ""
},
{
"docid": "59084b05271efe4b22dd490958622c1e",
"text": "Millimeter-wave (mmWave) massive multiple-input multiple-output (MIMO) seamlessly integrates two wireless technologies, mmWave communications and massive MIMO, which provides spectrums with tens of GHz of total bandwidth and supports aggressive space division multiple access using large-scale arrays. Though it is a promising solution for next-generation systems, the realization of mmWave massive MIMO faces several practical challenges. In particular, implementing massive MIMO in the digital domain requires hundreds to thousands of radio frequency chains and analog-to-digital converters matching the number of antennas. Furthermore, designing these components to operate at the mmWave frequencies is challenging and costly. These motivated the recent development of the hybrid-beamforming architecture, where MIMO signal processing is divided for separate implementation in the analog and digital domains, called the analog and digital beamforming, respectively. Analog beamforming using a phase array introduces uni-modulus constraints on the beamforming coefficients. They render the conventional MIMO techniques unsuitable and call for new designs. In this paper, we present a systematic design framework for hybrid beamforming for multi-cell multiuser massive MIMO systems over mmWave channels characterized by sparse propagation paths. The framework relies on the decomposition of analog beamforming vectors and path observation vectors into Kronecker products of factors being uni-modulus vectors. Exploiting properties of Kronecker mixed products, different factors of the analog beamformer are designed for either nulling interference paths or coherently combining data paths. Furthermore, a channel estimation scheme is designed for enabling the proposed hybrid beamforming. The scheme estimates the angles-of-arrival (AoA) of data and interference paths by analog beam scanning and data-path gains by analog beam steering. The performance of the channel estimation scheme is analyzed. In particular, the AoA spectrum resulting from beam scanning, which displays the magnitude distribution of paths over the AoA range, is derived in closed form. It is shown that the inter-cell interference level diminishes inversely with the array size, the square root of pilot sequence length, and the spatial separation between paths, suggesting different ways of tackling pilot contamination.",
"title": ""
},
{
"docid": "4ce8934f295235acc2bbf03c7530842b",
"text": "— Speech recognition has found its application on various aspects of our daily lives from automatic phone answering service to dictating text and issuing voice commands to computers. In this paper, we present the historical background and technological advances in speech recognition technology over the past few decades. More importantly, we present the steps involved in the design of a speaker-independent speech recognition system. We focus mainly on the pre-processing stage that extracts salient features of a speech signal and a technique called Dynamic Time Warping commonly used to compare the feature vectors of speech signals. These techniques are applied for recognition of isolated as well as connected words spoken. We conduct experiments on MATLAB to verify these techniques. Finally, we design a simple 'Voice-to-Text' converter application using MATLAB.",
"title": ""
},
{
"docid": "c7c40106a804061b96b6243cff85d317",
"text": "In this paper, we describe a system for detecting duplicate images and videos in a large collection of multimedia data. Our system consists of three major elements: Local-Difference-Pattern (LDP) as the unified feature to describe both images and videos, Locality-Sensitive-Hashing (LSH) as the core indexing structure to assure the most frequent data access occurred in the main memory, and multi-steps verification for queries to best exclude false positives and to increase the precision. The experimental results, validated on two public datasets, demonstrate that the proposed method is robust against the common image-processing tricks used to produce duplicates. In addition, the memory requirement has been addressed in our system to handle large-scale database.",
"title": ""
},
{
"docid": "a54f912c14b44fc458ed8de9e19a5e82",
"text": "Musical training has recently gained additional interest in education as increasing neuroscientific research demonstrates its positive effects on brain development. Neuroimaging revealed plastic changes in the brains of adult musicians but it is still unclear to what extent they are the product of intensive music training rather than of other factors, such as preexisting biological markers of musicality. In this review, we synthesize a large body of studies demonstrating that benefits of musical training extend beyond the skills it directly aims to train and last well into adulthood. For example, children who undergo musical training have better verbal memory, second language pronunciation accuracy, reading ability and executive functions. Learning to play an instrument as a child may even predict academic performance and IQ in young adulthood. The degree of observed structural and functional adaptation in the brain correlates with intensity and duration of practice. Importantly, the effects on cognitive development depend on the timing of musical initiation due to sensitive periods during development, as well as on several other modulating variables. Notably, we point to motivation, reward and social context of musical education, which are important yet neglected factors affecting the long-term benefits of musical training. Further, we introduce the notion of rhythmic entrainment and suggest that it may represent a mechanism supporting learning and development of executive functions. It also hones temporal processing and orienting of attention in time that may underlie enhancements observed in reading and verbal memory. We conclude that musical training uniquely engenders near and far transfer effects, preparing a foundation for a range of skills, and thus fostering cognitive development.",
"title": ""
},
{
"docid": "db87ed8ad4e1ffa4f049945de80f957d",
"text": "The anterior cerebral artery (ACA) varies considerably and this complicates the description of the normal anatomy. The segmentation of the ACA is mostly agreed on by different authors, although the relationship of the pericallosal and callosomarginal arteries (CmA) is not agreed upon. The two basic configurations of the ACA are determined by the presence or absence of the CmA. The diameter, length and origin of the cortical branches have been measured and described by various authors and display great variability. Common anomalies of the ACA include the azygos, bihemispheric, and median anterior cerebral arteries. A pilot study was done on 19 hemispheres to assess the variation of the branches of the ACA. The most common variations included absence and duplication. The inferior internal parietal artery and the CmA were most commonly absent and the paracentral lobule artery was the most frequently duplicated (36.8%). The inferior internal parietal artery originated from the posterior cerebral artery in 40.0% and this was the most unusual origin observed. It is important to be aware of the possibility of variations since these variations can have serious clinical implications. The knowledge of these variations can be helpful to clinicians and neurosurgeons. The aim of this article is to review the anatomy and variations of the anterior cerebral artery, as described in the literature. This was also compared to the results from a pilot study.",
"title": ""
},
{
"docid": "26b77bf67e242ff3e88a6f6bf7137d3e",
"text": "In the recent years there has been growing interest in exploiting multibaseline (MB) SAR interferometry in a tomographic framework, to produce full 3D imaging e.g. of forest layers. However, Fourier-based MB SAR tomography is generally affected by unsatisfactory imaging quality due to a typically low number of baselines and their irregular distribution. In this work, we apply the more modern adaptive Capon spectral estimator to the vertical image reconstruction problem, using real airborne MB data. A first demonstration of possible imaging enhancement in real-world conditions is given. Keywordssynthetic aperture radar interferometry, electromagnetic tomography, forestry, spectral analysis.",
"title": ""
},
{
"docid": "6973231128048ac2ca5bce0121bf6d95",
"text": "PURPOSE\nThe aim of this study is to analyse the grip force distribution for different prosthetic hand designs and the human hand fulfilling a functional task.\n\n\nMETHOD\nA cylindrical object is held with a power grasp and the contact forces are measured at 20 defined positions. The distributions of contact forces in standard electric prostheses, in a experimental prosthesis with an adaptive grasp, and in human hands as a reference are analysed and compared. Additionally, the joint torques are calculated and compared.\n\n\nRESULTS\nContact forces of up to 24.7 N are applied by the middle and distal phalanges of the index finger, middle finger, and thumb of standard prosthetic hands, whereas forces of up to 3.8 N are measured for human hands. The maximum contact forces measured in a prosthetic hand with an adaptive grasp are 4.7 N. The joint torques of human hands and the adaptive prosthesis are comparable.\n\n\nCONCLUSIONS\nThe analysis of grip force distribution is proposed as an additional parameter to rate the performance of different prosthetic hand designs.",
"title": ""
},
{
"docid": "f03c4718a0d85917ea870a90c9bb05c5",
"text": "Conventional time-delay estimators exhibit dramatic performance degradations in the presence of multipath signals. This limits their application in reverberant enclosures, particularly when the signal of interest is speech and it may not possible to estimate and compensate for channel effects prior to time-delay estimation. This paper details an alternative approach which reformulates the problem as a linear regression of phase data and then estimates the time-delay through minimization of a robust statistical error measure. The technique is shown to be less susceptible to room reverberation effects. Simulations are performed across a range of source placements and room conditions to illustrate the utility of the proposed time-delay estimation method relative to conventional methods.",
"title": ""
},
{
"docid": "07457116fbecf8e5182459961b8a87d0",
"text": "Modeling temporal sequences plays a fundamental role in various modern applications and has drawn more and more attentions in the machine learning community. Among those efforts on improving the capability to represent temporal data, the Long Short-Term Memory (LSTM) has achieved great success in many areas. Although the LSTM can capture long-range dependency in the time domain, it does not explicitly model the pattern occurrences in the frequency domain that plays an important role in tracking and predicting data points over various time cycles. We propose the State-Frequency Memory (SFM), a novel recurrent architecture that allows to separate dynamic patterns across different frequency components and their impacts on modeling the temporal contexts of input sequences. By jointly decomposing memorized dynamics into statefrequency components, the SFM is able to offer a fine-grained analysis of temporal sequences by capturing the dependency of uncovered patterns in both time and frequency domains. Evaluations on several temporal modeling tasks demonstrate the SFM can yield competitive performances, in particular as compared with the state-of-the-art LSTM models.",
"title": ""
},
{
"docid": "f11ff738aaf7a528302e6ec5ed99c43c",
"text": "Vehicles equipped with GPS localizers are an important sensory device for examining people’s movements and activities. Taxis equipped with GPS localizers serve the transportation needs of a large number of people driven by diverse needs; their traces can tell us where passengers were picked up and dropped off, which route was taken, and what steps the driver took to find a new passenger. In this article, we provide an exhaustive survey of the work on mining these traces. We first provide a formalization of the data sets, along with an overview of different mechanisms for preprocessing the data. We then classify the existing work into three main categories: social dynamics, traffic dynamics and operational dynamics. Social dynamics refers to the study of the collective behaviour of a city’s population, based on their observed movements; Traffic dynamics studies the resulting flow of the movement through the road network; Operational dynamics refers to the study and analysis of taxi driver’s modus operandi. We discuss the different problems currently being researched, the various approaches proposed, and suggest new avenues of research. Finally, we present a historical overview of the research work in this field and discuss which areas hold most promise for future research.",
"title": ""
},
{
"docid": "0d13be9f5e2082af96c370d3c316204f",
"text": "We present a combined hardware and software solution for markerless reconstruction of non-rigidly deforming physical objects with arbitrary shape in real-time. Our system uses a single self-contained stereo camera unit built from off-the-shelf components and consumer graphics hardware to generate spatio-temporally coherent 3D models at 30 Hz. A new stereo matching algorithm estimates real-time RGB-D data. We start by scanning a smooth template model of the subject as they move rigidly. This geometric surface prior avoids strong scene assumptions, such as a kinematic human skeleton or a parametric shape model. Next, a novel GPU pipeline performs non-rigid registration of live RGB-D data to the smooth template using an extended non-linear as-rigid-as-possible (ARAP) framework. High-frequency details are fused onto the final mesh using a linear deformation model. The system is an order of magnitude faster than state-of-the-art methods, while matching the quality and robustness of many offline algorithms. We show precise real-time reconstructions of diverse scenes, including: large deformations of users' heads, hands, and upper bodies; fine-scale wrinkles and folds of skin and clothing; and non-rigid interactions performed by users on flexible objects such as toys. We demonstrate how acquired models can be used for many interactive scenarios, including re-texturing, online performance capture and preview, and real-time shape and motion re-targeting.",
"title": ""
},
{
"docid": "a11d7186eb2c04477d4355cf8f91b4f2",
"text": "This study reports the results of a meta-analysis of empirical studies on Internet addiction published in academic journals for the period 1996-2006. The analysis showed that previous studies have utilized inconsistent criteria to define Internet addicts, applied recruiting methods that may cause serious sampling bias, and examined data using primarily exploratory rather than confirmatory data analysis techniques to investigate the degree of association rather than causal relationships among variables. Recommendations are provided on how researchers can strengthen this growing field of research.",
"title": ""
},
{
"docid": "b9d25bdbb337a9d16a24fa731b6b479d",
"text": "The implementation of effective strategies to manage leaks represents an essential goal for all utilities involved with drinking water supply in order to reduce water losses affecting urban distribution networks. This study concerns the early detection of leaks occurring in small-diameter customers’ connections to water supply networks. An experimental campaign was carried out in a test bed to investigate the sensitivity of Acoustic Emission (AE) monitoring to water leaks. Damages were artificially induced on a polyethylene pipe (length 28 m, outer diameter 32 mm) at different distances from an AE transducer. Measurements were performed in both unburied and buried pipe conditions. The analysis permitted the identification of a clear correlation between three monitored parameters (namely total Hits, Cumulative Counts and Cumulative Amplitude) and the characteristics of the examined leaks.",
"title": ""
},
{
"docid": "17ee960777b02a910cf8fcc80f74d5cc",
"text": "The periosteum is a thin layer of connective tissue that covers the outer surface of a bone in all places except at joints (which are protected by articular cartilage). As opposed to bone itself, it has nociceptive nerve endings, making it very sensitive to manipulation. It also provides nourishment in the form of blood supply to the bone. The periosteum is connected to the bone by strong collagenous fibres called Sharpey's fibres, which extend to the outer circumferential and interstitial lamellae of bone. The periosteum consists of an outer \"fibrous layer\" and inner \"cambium layer\". The fibrous layer contains fibroblasts while the cambium layer contains progenitor cells which develop into osteoblasts that are responsible for increasing bone width. After a bone fracture the progenitor cells develop into osteoblasts and chondroblasts which are essential to the healing process. This review discusses the anatomy, histology and molecular biology of the periosteum in detail.",
"title": ""
},
{
"docid": "1b2fcf85bc73f3249d8685e0063aaa3a",
"text": "In our present society, the cinema has become one of the major forms of entertainment providing unlimited contexts of emotion elicitation for the emotional needs of human beings. Since emotions are universal and shape all aspects of our interpersonal and intellectual experience, they have proved to be a highly multidisciplinary research field, ranging from psychology, sociology, neuroscience, etc., to computer science. However, affective multimedia content analysis work from the computer science community benefits but little from the progress achieved in other research fields. In this paper, a multidisciplinary state-of-the-art for affective movie content analysis is given, in order to promote and encourage exchanges between researchers from a very wide range of fields. In contrast to other state-of-the-art papers on affective video content analysis, this work confronts the ideas and models of psychology, sociology, neuroscience, and computer science. The concepts of aesthetic emotions and emotion induction, as well as the different representations of emotions are introduced, based on psychological and sociological theories. Previous global and continuous affective video content analysis work, including video emotion recognition and violence detection, are also presented in order to point out the limitations of affective video content analysis work.",
"title": ""
},
{
"docid": "d88e4d9bba66581be16c9bd59d852a66",
"text": "After five decades characterized by empiricism and several pitfalls, some of the basic mechanisms of action of ozone in pulmonary toxicology and in medicine have been clarified. The present knowledge allows to understand the prolonged inhalation of ozone can be very deleterious first for the lungs and successively for the whole organism. On the other hand, a small ozone dose well calibrated against the potent antioxidant capacity of blood can trigger several useful biochemical mechanisms and reactivate the antioxidant system. In detail, firstly ex vivo and second during the infusion of ozonated blood into the donor, the ozone therapy approach involves blood cells and the endothelium, which by transferring the ozone messengers to billions of cells will generate a therapeutic effect. Thus, in spite of a common prejudice, single ozone doses can be therapeutically used in selected human diseases without any toxicity or side effects. Moreover, the versatility and amplitude of beneficial effect of ozone applications have become evident in orthopedics, cutaneous, and mucosal infections as well as in dentistry.",
"title": ""
},
{
"docid": "90b1d5b2269f742f9028199c34501043",
"text": "Motivated by the desire to construct compact (in terms of expected length to be traversed to reach a decision) decision trees, we propose a new node splitting measure for decision tree construction. We show that the proposed measure is convex and cumulative and utilize this in the construction of decision trees for classification. Results obtained from several datasets from the UCI repository show that the proposed measure results in decision trees that are more compact with classification accuracy that is comparable to that obtained using popular node splitting measures such as Gain Ratio and the Gini Index. 2008 Published by Elsevier Inc.",
"title": ""
}
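The record above benchmarks a new splitting measure against Gain Ratio and the Gini Index. As a point of reference, here is a minimal Python sketch of the two standard impurity-based measures it compares against; the paper's own convex, cumulative measure is not reproduced here, and the toy split at the end is purely illustrative.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array: 1 - sum_k p_k^2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(1.0 - np.sum(p ** 2))

def entropy(labels):
    """Shannon entropy in bits (basis of the information-gain criterion)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def split_score(labels_left, labels_right, measure=gini):
    """Weighted impurity of a binary split; lower means a better split."""
    n_l, n_r = len(labels_left), len(labels_right)
    n = n_l + n_r
    return (n_l / n) * measure(labels_left) + (n_r / n) * measure(labels_right)

# Toy example: a split that separates the classes well scores lower.
left, right = np.array([0, 0, 0, 1]), np.array([1, 1, 1, 0])
print(split_score(left, right, gini), split_score(left, right, entropy))
```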
] |
scidocsrr
|
af2b7f450e116395700b6414dd427aff
|
DroidBot: A Lightweight UI-Guided Test Input Generator for Android
|
[
{
"docid": "fce11219cdd4d85dde1d3d893f252e14",
"text": "Smartphones and tablets with rich graphical user interfaces (GUI) are becoming increasingly popular. Hundreds of thousands of specialized applications, called apps, are available for such mobile platforms. Manual testing is the most popular technique for testing graphical user interfaces of such apps. Manual testing is often tedious and error-prone. In this paper, we propose an automated technique, called Swift-Hand, for generating sequences of test inputs for Android apps. The technique uses machine learning to learn a model of the app during testing, uses the learned model to generate user inputs that visit unexplored states of the app, and uses the execution of the app on the generated inputs to refine the model. A key feature of the testing algorithm is that it avoids restarting the app, which is a significantly more expensive operation than executing the app on a sequence of inputs. An important insight behind our testing algorithm is that we do not need to learn a precise model of an app, which is often computationally intensive, if our goal is to simply guide test execution into unexplored parts of the state space. We have implemented our testing algorithm in a publicly available tool for Android apps written in Java. Our experimental results show that we can achieve significantly better coverage than traditional random testing and L*-based testing in a given time budget. Our algorithm also reaches peak coverage faster than both random and L*-based testing.",
"title": ""
}
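The SwiftHand passage above describes the general recipe of model-guided GUI exploration: learn an approximate model of the app's states and steer inputs toward unexplored behaviour while avoiding costly restarts. The sketch below illustrates only that general idea, not SwiftHand's actual learning algorithm; the `driver` object (with `current_state()`, `enabled_actions()` and `perform()`) is a hypothetical stand-in for a real device harness such as an ADB/UIAutomator wrapper.

```python
import random

def explore(driver, budget=1000):
    """Greedy model-guided exploration: keep a map of which inputs have been
    exercised in each abstract GUI state and prefer ones never tried there,
    extending a single long trace instead of restarting the app."""
    model = {}                                   # state -> set of tried actions
    for _ in range(budget):
        state = driver.current_state()           # hashable abstraction of the UI
        actions = driver.enabled_actions()
        if not actions:
            break
        tried = model.setdefault(state, set())
        untried = [a for a in actions if a not in tried]
        action = random.choice(untried) if untried else random.choice(actions)
        tried.add(action)
        driver.perform(action)
    return model                                 # the learned (approximate) model
```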
] |
[
{
"docid": "266114ecdd54ce1c5d5d0ec42c04ed4d",
"text": "A multiscale image registration technique is presented for the registration of medical images that contain significant levels of noise. An overview of the medical image registration problem is presented, and various registration techniques are discussed. Experiments using mean squares, normalized correlation, and mutual information optimal linear registration are presented that determine the noise levels at which registration using these techniques fails. Further experiments in which classical denoising algorithms are applied prior to registration are presented, and it is shown that registration fails in this case for significantly high levels of noise, as well. The hierarchical multiscale image decomposition of E. Tadmor, S. Nezzar, and L. Vese [20] is presented, and accurate registration of noisy images is achieved by obtaining a hierarchical multiscale decomposition of the images and registering the resulting components. This approach enables successful registration of images that contain noise levels well beyond the level at which ordinary optimal linear registration fails. Image registration experiments demonstrate the accuracy and efficiency of the multiscale registration technique, and for all noise levels, the multiscale technique is as accurate as or more accurate than ordinary registration techniques.",
"title": ""
},
{
"docid": "f4440f6c069854c73fbc90d1d921fd7c",
"text": "In this paper we present Geckos, a new type of tangible objects which are tracked using a Force-Sensitive Resistance sensor. Geckos are based on low-cost permanent magnets and can also be used on non-horizontal surfaces. Unique pressure footprints are used to identify each tangible Gecko. Two types of tangible object designs are presented: Using a single magnet in combination with felt pads provides new pressure-based interaction modalities. Using multiple separate magnets it is possible to change the marker footprint dynamically and create new haptic experiences. The tangible object design and interaction are illustrated with example applications. We also give details on the feasibility and benefits of our tracking approach and show compatibility with other tracking technologies.",
"title": ""
},
{
"docid": "aed8a983fc25d2c1c71401b338d8f5f3",
"text": "Heart disease is the leading cause of death in the world over the past 10 years. Researchers have been using several data mining techniques to help health care professionals in the diagnosis of heart disease. Decision Tree is one of the successful data mining techniques used. However, most research has applied J4.8 Decision Tree, based on Gain Ratio and binary discretization. Gini Index and Information Gain are two other successful types of Decision Trees that are less used in the diagnosis of heart disease. Also other discretization techniques, voting method, and reduced error pruning are known to produce more accurate Decision Trees. This research investigates applying a range of techniques to different types of Decision Trees seeking better performance in heart disease diagnosis. A widely used benchmark data set is used in this research. To evaluate the performance of the alternative Decision Trees the sensitivity, specificity, and accuracy are calculated. The research proposes a model that outperforms J4.8 Decision Tree and Bagging algorithm in the diagnosis of heart disease patients.",
"title": ""
},
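For the heart-disease passage above, a rough scikit-learn sketch of the kind of comparison it describes (Gini versus information gain, single tree versus bagging, 10-fold cross-validation) might look as follows. The CSV file name and its `target` column are assumptions made for illustration; scikit-learn has no built-in Gain Ratio criterion, so J4.8-style splitting is not reproduced.

```python
import pandas as pd
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical file: the UCI Cleveland heart-disease table exported to CSV,
# with a binary 'target' column (0 = healthy, 1 = heart disease).
df = pd.read_csv("heart_cleveland.csv")
X, y = df.drop(columns=["target"]), df["target"]

# scikit-learn exposes Gini and entropy (information gain) directly;
# Gain Ratio, as used by J4.8/C4.5, has no equivalent option here.
for criterion in ("gini", "entropy"):
    tree = DecisionTreeClassifier(criterion=criterion, random_state=0)
    bagged = BaggingClassifier(tree, n_estimators=50, random_state=0)
    for name, model in (("single tree", tree), ("bagging", bagged)):
        acc = cross_val_score(model, X, y, cv=10).mean()
        print(f"{criterion:8s} {name:12s} accuracy = {acc:.3f}")
```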
{
"docid": "74686e9acab0a4d41c87cadd7da01889",
"text": "Automatic analysis of biomedical time series such as electroencephalogram (EEG) and electrocardiographic (ECG) signals has attracted great interest in the community of biomedical engineering due to its important applications in medicine. In this work, a simple yet effective bag-of-words representation that is able to capture both local and global structure similarity information is proposed for biomedical time series representation. In particular, similar to the bag-of-words model used in text document domain, the proposed method treats a time series as a text document and extracts local segments from the time series as words. The biomedical time series is then represented as a histogram of codewords, each entry of which is the count of a codeword appeared in the time series. Although the temporal order of the local segments is ignored, the bag-of-words representation is able to capture high-level structural information because both local and global structural information are well utilized. The performance of the bag-of-words model is validated on three datasets extracted from real EEG and ECG signals. The experimental results demonstrate that the proposed method is not only insensitive to parameters of the bag-of-words model such as local segment length and codebook size, but also robust to noise.",
"title": ""
},
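A minimal sketch of the bag-of-words idea described above, assuming 1-D signals, a sliding-window "word" extractor and a k-means codebook; the segment length, stride and codebook size are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_segments(series, seg_len=32, stride=4):
    """Slide a window over a 1-D signal and return the local segments ('words')."""
    return np.array([series[i:i + seg_len]
                     for i in range(0, len(series) - seg_len + 1, stride)])

def build_codebook(train_series, n_codewords=64, **kw):
    """Cluster all training segments; the cluster centres act as codewords."""
    segs = np.vstack([extract_segments(s, **kw) for s in train_series])
    return KMeans(n_clusters=n_codewords, n_init=10, random_state=0).fit(segs)

def bow_histogram(series, codebook, **kw):
    """Represent one signal as a normalized histogram of codeword counts."""
    words = codebook.predict(extract_segments(series, **kw))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Toy usage with synthetic signals standing in for EEG/ECG channels.
rng = np.random.default_rng(0)
signals = [np.sin(np.linspace(0, 20, 1000)) + 0.1 * rng.standard_normal(1000)
           for _ in range(5)]
cb = build_codebook(signals)
print(bow_histogram(signals[0], cb).shape)   # (64,)
```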
{
"docid": "a90c56a22559807463b46d1c7ab36cb3",
"text": "We have studied manual motor function in a man deafferented by a severe peripheral sensory neuropathy. Motor power was almost unaffected. Our patients could produce a very wide range of preprogrammed finger movements with remarkable accuracy, involving complex muscle synergies of the hand and forearm muscles. He could perform individual finger movements and outline figures in the air with high eyes closed. He had normal pre- and postmovement EEG potentials, and showed the normal bi/triphasic pattern of muscle activation in agonist and antagonist muscles during fast limb movements. He could also move his thumb accurately through three different distances at three different speeds, and could produce three different levels of force at his thumb pad when required. Although he could not judge the weights of objects placed in his hands without vision, he was able to match forces applied by the experimenter to the pad of each thumb if he was given a minimal indication of thumb movement. Despite his success with these laboratory tasks, his hands were relatively useless to him in daily life. He was unable to grasp a pen and write, to fasten his shirt buttons or to hold a cup in one hand. Part of hist difficulty lay in the absence of any automatic reflex correction in his voluntary movements, and also to an inability to sustain constant levels of muscle contraction without visual feedback over periods of more than one or two seconds. He was also unable to maintain long sequences of simple motor programmes without vision.",
"title": ""
},
{
"docid": "966d650d8d186715dd1ee08effedce92",
"text": "Over the past few years, various tasks involving videos such as classification, description, summarization and question answering have received a lot of attention. Current models for these tasks compute an encoding of the video by treating it as a sequence of images and going over every image in the sequence, which becomes computationally expensive for longer videos. In this paper, we focus on the task of video classification and aim to reduce the computational cost by using the idea of distillation. Specifically, we propose a Teacher-Student network wherein the teacher looks at all the frames in the video but the student looks at only a small fraction of the frames in the video. The idea is to then train the student to minimize (i) the difference between the final representation computed by the student and the teacher and/or (ii) the difference between the distributions predicted by the teacher and the student. This smaller student network which involves fewer computations but still learns to mimic the teacher can then be employed at inference time for video classification. We experiment with the YouTube-8M dataset and show that the proposed student network can reduce the inference time by upto 30% with a negligent drop in the performance.",
"title": ""
},
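A hedged sketch of the Teacher-Student objective described above, combining (i) an MSE term between final representations and (ii) a KL term between predicted distributions. The temperature, the 0.5 weighting and the toy shapes are assumptions for illustration, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_repr, teacher_repr,
                      temperature=3.0, alpha=0.5):
    """Combine MSE between final representations with KL divergence between
    the teacher's and student's softened class distributions."""
    repr_loss = F.mse_loss(student_repr, teacher_repr)
    kl = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                  F.softmax(teacher_logits / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    return alpha * repr_loss + (1 - alpha) * kl

# Toy shapes: batch of 8 videos, 400 classes, 512-d final representation.
s_logits, t_logits = torch.randn(8, 400), torch.randn(8, 400)
s_repr, t_repr = torch.randn(8, 512), torch.randn(8, 512)
print(distillation_loss(s_logits, t_logits, s_repr, t_repr))
```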
{
"docid": "70b410094dd718d10e6ae8cd3f93c768",
"text": "Software developers and project managers are struggling to assess the appropriateness of agile processes to their development environments. This paper identifies limitations that apply to many of the published agile processes in terms of the types of projects in which their application may be problematic. INTRODUCTION As more organizations seek to gain competitive advantage through timely deployment of Internet-based services, developers are under increasing pressure to produce new or enhanced implementations quickly [2,8]. Agile software development processes were developed primarily to address this problem, that is, the problem of developing software in \"Internet time\". Agile approaches utilize technical and managerial processes that continuously adapt and adjust to (1) changes derived from experiences gained during development, (2) changes in software requirements and (3) changes in the development environment. Agile processes are intended to support early and quick production of working code. This is accomplished by structuring the development process into iterations, where an iteration focuses on delivering working code and other artifacts that provide value to the customer and, secondarily, to the project. Agile process proponents and critics often emphasize the code focus of these processes. Proponents often argue that code is the only deliverable that matters, and marginalize the role of analysis and design models and documentation in software creation and evolution. Agile process critics point out that the emphasis on code can lead to corporate memory loss because there is little emphasis on producing good documentation and models to support software creation and evolution of large, complex systems. The claims made by agile process proponents and critics lead to questions about what practices, techniques, and infrastructures are suitable for software development in today’s rapidly changing development environments. In particular, answers to questions related to the suitability of agile processes to particular application domains and development environments are often based on anecdotal accounts of experiences. In this paper we present what we perceive as limitations of agile processes based on our analysis of published works on agile processes [14]. Processes that name themselves “agile” vary greatly in values, practices, and application domains. It is therefore difficult to assess agile processes in general and identify limitations that apply to all agile processes. Our analysis [14] is based on a study of assumptions underlying Extreme Programming (XP) [3,5,6,10], Scrum [12,13], Agile Unified Process [11], Agile Modeling [1] and the principles stated by the Agile Alliance. It is mainly an analytical study, supported by experiences on a few XP projects conducted by the authors. THE AGILE ALLIANCE In recent years a number of processes claiming to be \"agile\" have been proposed in the literature. To avoid confusion over what it means for a process to be \"agile\", seventeen agile process methodologists came to an agreement on what \"agility\" means during a 2001 meeting where they discussed future trends in software development processes. One result of the meeting was the formation of the \"Agile Alliance\" and the publication of its manifesto (see http://www.agilealliance.org/principles.html). The manifesto of the \"Agile Alliance\" is a condensed definition of the values and goals of \"Agile Software Development\". 
This manifesto is detailed through a number of common principles for agile processes. The principles are listed below. 1. \"Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.\" 2. \"Business people and developers must work together daily throughout the project.\" 3. \"Welcome changing requirements, even late in development.\" 4. \"Deliver working software frequently.\" 5. \"Working software is the primary measure of progress.\" 6. \"Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.\" 7. \"The best architectures, requirements, and designs emerge from self-organizing teams.\" 8. \"The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.\" 9. \"Agile processes promote sustainable development.\" 10. \"Continuous attention to technical excellence and good design enhances agility.\" 11. \"Simplicity is essential.\" 12. \"Project teams evaluate their effectiveness at regular intervals and adjust their behavior accordingly.\"
Assumption 11: There is no need to design for change because any change can be effectively handled by refactoring the code [9]. Limitations of Agile Processes The assumptions listed above do not hold for all software development environments in general, nor for all “agile” processes in particular. This should not be surprising; none of the agile processes is a silver bullet (despite the enthusiastic claims of some its proponents). In this part we describe some of the situations in which agile processes may generally not be applicable. It is possible that some agile processes fit these assumptions better, while others may be able to be extended to address the limitations discussed here. Such extensions can involve incorporating principles and practices often associated with more predictive development practices into agile processes. 1. Limited support for distributed development",
"title": ""
},
{
"docid": "bd1cc759e636f8bf6828e758c27a0ca5",
"text": "Although personalised nutrition is frequently considered in the context of diet-gene interactions, increasingly, personalised nutrition is seen to exist at three levels. The first is personalised dietary advice using Internet-delivered services, which ultimately will become automated and which will also draw on mobile phone technology. The second level of personalised dietary advice will include phenotypic information on anthropometry, physical activity, clinical parameters and biochemical markers of nutritional status. It remains possible that in addition to personalised dietary advice based on phenotypic data, advice at that group or metabotype level may be offered where metabotypes are defined by a common metabolic profile. The third level of personalised nutrition will involve the use of genomic data. While the genomic aspect of personalised nutrition is often considered as its main driver, there are significant challenges to translation of data on SNP and diet into personalised advice. The majority of the published data on SNP and diet emanate from observational studies and as such do not offer any cause-effect associations. To achieve this, purpose-designed dietary intervention studies will be needed with subjects recruited according to their genotype. Extensive research indicates that consumers would welcome personalised dietary advice including dietary advice based on their genotype. Unlike personalised medicine where genotype data are linked to the risk of developing a disease, in personalised nutrition the genetic data relate to the optimal diet for a given genotype to reduce disease risk factors and thus there are few ethical and legal issues in personalised nutrition.",
"title": ""
},
{
"docid": "f56f2119b3e65970db35676fe1cac9ba",
"text": "While behavioral and social sciences occupations comprise one of the largest portions of the \"STEM\" workforce, most studies of diversity in STEM overlook this population, focusing instead on fields such as biomedical or physical sciences. This study evaluates major demographic trends and productivity in the behavioral and social sciences research (BSSR) workforce in the United States during the past decade. Our analysis shows that the demographic trends for different BSSR fields vary. In terms of gender balance, there is no single trend across all BSSR fields; rather, the problems are field-specific, and disciplines such as economics and political science continue to have more men than women. We also show that all BSSR fields suffer from a lack of racial and ethnic diversity. The BSSR workforce is, in fact, less representative of racial and ethnic minorities than are biomedical sciences or engineering. Moreover, in many BSSR subfields, minorities are less likely to receive funding. We point to various funding distribution patterns across different demographic groups of BSSR scientists, and discuss several policy implications.",
"title": ""
},
{
"docid": "dd726458660c3dfe05bd775df562e188",
"text": "Maternally deprived rats were treated with tianeptine (15 mg/kg) once a day for 14 days during their adult phase. Their behavior was then assessed using the forced swimming and open field tests. The BDNF, NGF and energy metabolism were assessed in the rat brain. Deprived rats increased the immobility time, but tianeptine reversed this effect and increased the swimming time; the BDNF levels were decreased in the amygdala of the deprived rats treated with saline and the BDNF levels were decreased in the nucleus accumbens within all groups; the NGF was found to have decreased in the hippocampus, amygdala and nucleus accumbens of the deprived rats; citrate synthase was increased in the hippocampus of non-deprived rats treated with tianeptine and the creatine kinase was decreased in the hippocampus and amygdala of the deprived rats; the mitochondrial complex I and II–III were inhibited, and tianeptine increased the mitochondrial complex II and IV in the hippocampus of the non-deprived rats; the succinate dehydrogenase was increased in the hippocampus of non-deprived rats treated with tianeptine. So, tianeptine showed antidepressant effects conducted on maternally deprived rats, and this can be attributed to its action on the neurochemical pathways related to depression.",
"title": ""
},
{
"docid": "19259e0b88e1f5bfbde873886f832e43",
"text": "Molecular biologists routinely clone genetic constructs from DNA segments and formulate plans to assemble them. However, manual assembly planning is complex, error prone and not scalable. We address this problem with an algorithm-driven DNA assembly planning software tool suite called Raven (http://www.ravencad.org/) that produces optimized assembly plans and allows users to apply experimental outcomes to redesign assembly plans interactively. We used Raven to calculate assembly plans for thousands of variants of five types of genetic constructs, as well as hundreds of constructs of variable size and complexity from the literature. Finally, we experimentally validated a subset of these assembly plans by reconstructing four recombinase-based 'genetic counter' constructs and two 'repressilator' constructs. We demonstrate that Raven's solutions are significantly better than unoptimized solutions at small and large scales and that Raven's assembly instructions are experimentally valid.",
"title": ""
},
{
"docid": "66f1279585c6d1a0a388faa91bd25c62",
"text": "Our research project is to design a readout IC for an ultrasonic transducer consisting of a matrix of more than 2000 elements. The IC and the matrix transducer will be put into the tip of a transesophageal probe for 3D echocardiography. A key building block of the readout IC, a programmable analog delay line, is presented in this paper. It is based on the time-interleaved sample-and-hold (S/H) principle. Compared with conventional analog delay lines, this design is simple, accurate and flexible. A prototype has been fabricated in a standard 0.35µm CMOS technology. Measurement results showing its functionality are presented.",
"title": ""
},
{
"docid": "6a763e49cdfd41b28922eb536d9404ed",
"text": "With recent advances in computer vision and graphics, it is now possible to generate videos with extremely realistic synthetic faces, even in real time. Countless applications are possible, some of which raise a legitimate alarm, calling for reliable detectors of fake videos. In fact, distinguishing between original and manipulated video can be a challenge for humans and computers alike, especially when the videos are compressed or have low resolution, as it often happens on social networks. Research on the detection of face manipulations has been seriously hampered by the lack of adequate datasets. To this end, we introduce a novel face manipulation dataset of about half a million edited images (from over 1000 videos). The manipulations have been generated with a state-of-the-art face editing approach. It exceeds all existing video manipulation datasets by at least an order of magnitude. Using our new dataset, we introduce benchmarks for classical image forensic tasks, including classification and segmentation, considering videos compressed at various quality levels. In addition, we introduce a benchmark evaluation for creating indistinguishable forgeries with known ground truth; for instance with generative refinement models.",
"title": ""
},
{
"docid": "8aa3007d5a14c63dc035b11f2df0793b",
"text": "To detect the smallest delay faults at a fault site, the longest path(s) through it must be tested at full speed. Most existing test generation tools are either inefficient in automatically identifying the longest testable paths due to the high computational complexity or do not support at-speed test using existing practical design-for-testability structures, such as scan design. In this work a test generation methodology for scan-based synchronous sequential circuits is presented, under two at-speed test strategies used in industry. The two strategies are compared and the test generation efficiency is evaluated on the ISCAS89 benchmark circuits.",
"title": ""
},
{
"docid": "7b13637b634b11b3061f7ebe0c64b3a6",
"text": "Analytical calculation methods for all the major components of the synchronous inductance of tooth-coil permanent-magnet synchronous machines are reevaluated in this paper. The inductance estimation is different in the tooth-coil machine compared with the one in the traditional rotating field winding machine. The accuracy of the analytical torque calculation highly depends on the estimated synchronous inductance. Despite powerful finite element method (FEM) tools, an accurate and fast analytical method is required at an early design stage to find an initial machine design structure with the desired performance. The results of the analytical inductance calculation are verified and assessed in terms of accuracy with the FEM simulation results and with the prototype measurement results.",
"title": ""
},
{
"docid": "1e9c7c97256e7778dbb1ef4f09c1b28e",
"text": "A new neural paradigm called diagonal recurrent neural network (DRNN) is presented. The architecture of DRNN is a modified model of the fully connected recurrent neural network with one hidden layer, and the hidden layer comprises self-recurrent neurons. Two DRNN's are utilized in a control system, one as an identifier called diagonal recurrent neuroidentifier (DRNI) and the other as a controller called diagonal recurrent neurocontroller (DRNC). A controlled plant is identified by the DRNI, which then provides the sensitivity information of the plant to the DRNC. A generalized dynamic backpropagation algorithm (DBP) is developed and used to train both DRNC and DRNI. Due to the recurrence, the DRNN can capture the dynamic behavior of a system. To guarantee convergence and for faster learning, an approach that uses adaptive learning rates is developed by introducing a Lyapunov function. Convergence theorems for the adaptive backpropagation algorithms are developed for both DRNI and DRNC. The proposed DRNN paradigm is applied to numerical problems and the simulation results are included.",
"title": ""
},
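The defining feature of the DRNN described above is that each hidden neuron is self-recurrent only, so the recurrent weight matrix is diagonal. The forward recurrence can be sketched as below; the tanh activation and initialization scale are illustrative, and the paper's dynamic backpropagation training and output layer are omitted.

```python
import numpy as np

class DiagonalRecurrentLayer:
    """Hidden layer of self-recurrent neurons: each unit feeds back only on
    itself, so the recurrent weights reduce to a vector (the diagonal)."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = 0.1 * rng.standard_normal((n_hidden, n_in))
        self.w_d = 0.1 * rng.standard_normal(n_hidden)   # diagonal recurrence
        self.h = np.zeros(n_hidden)

    def step(self, x):
        """One time step: h_t = tanh(W_in x_t + w_d * h_{t-1}) elementwise."""
        self.h = np.tanh(self.W_in @ x + self.w_d * self.h)
        return self.h

layer = DiagonalRecurrentLayer(n_in=2, n_hidden=8)
for t in range(5):
    out = layer.step(np.array([np.sin(0.1 * t), 1.0]))
print(out.shape)   # (8,)
```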
{
"docid": "436a250dc621d58d70bee13fd3595f06",
"text": "The solid-state transformer allows add-on intelligence to enhance power quality compatibility between source and load. It is desired to demonstrate the benefits gained by the use of such a device. Recent advancement in semiconductor devices and converter topologies facilitated a newly proposed intelligent universal transformer (IUT), which can isolate a disturbance from either source or load. This paper describes the basic circuit and the operating principle for the multilevel converter based IUT and its applications for medium voltages. Various power quality enhancement features are demonstrated with computer simulation for a complete IUT circuit.",
"title": ""
},
{
"docid": "18233af1857390bff51d2e713bc766d9",
"text": "Name disambiguation is a perennial challenge for any large and growing dataset but is particularly significant for scientific publication data where documents and ideas are linked through citations and depend on highly accurate authorship. Differentiating personal names in scientific publications is a substantial problem as many names are not sufficiently distinct due to the large number of researchers active in most academic disciplines today. As more and more documents and citations are published every year, any system built on this data must be continually retrained and reclassified to remain relevant and helpful. Recently, some incremental learning solutions have been proposed, but most of these have been limited to small-scale simulations and do not exhibit the full heterogeneity of the millions of authors and papers in real world data. In our work, we propose a probabilistic model that simultaneously uses a rich set of metadata and reduces the amount of pairwise comparisons needed for new articles. We suggest an approach to disambiguation that classifies in an incremental fashion to alleviate the need for retraining the model and re-clustering all papers and uses fewer parameters than other algorithms. Using a published dataset, we obtained the highest K-measure which is a geometric mean of cluster and author-class purity. Moreover, on a difficult author block from the Clarivate Analytics Web of Science, we obtain higher precision than other algorithms.",
"title": ""
},
{
"docid": "5d92f58e929a851097eae320eb9c3ddc",
"text": "In recent years, the study of genomic alterations and protein expression involved in the pathways of breast cancer carcinogenesis has provided an increasing number of targets for drugs development in the setting of metastatic breast cancer (i.e., trastuzumab, everolimus, palbociclib, etc.) significantly improving the prognosis of this disease. These drugs target specific molecular abnormalities that confer a survival advantage to cancer cells. On these bases, emerging evidence from clinical trials provided increasing proof that the genetic landscape of any tumor may dictate its sensitivity or resistance profile to specific agents and some studies have already showed that tumors treated with therapies matched with their molecular alterations obtain higher objective response rates and longer survival. Predictive molecular biomarkers may optimize the selection of effective therapies, thus reducing treatment costs and side effects. This review offers an overview of the main molecular pathways involved in breast carcinogenesis, the targeted therapies developed to inhibit these pathways, the principal mechanisms of resistance and, finally, the molecular biomarkers that, to date, are demonstrated in clinical trials to predict response/resistance to targeted treatments in metastatic breast cancer.",
"title": ""
}
] |
scidocsrr
|
ce59b445dc3920118a570526fca01920
|
Designing for the Safety of Pedestrians , Cyclists , and Motorists in Urban Environments
|
[
{
"docid": "99aaea5ec8f90994a9fa01bfc0131ee2",
"text": "Beyond simply acting as thoroughfares for motor vehicles, urban streets often double as public spaces. Urban streets are places where people walk, shop, meet, and generally engage in the diverse array of social and recreational activities that, for many, are what makes urban living enjoyable. And beyond even these quality-of-life benefits, pedestrian-friendly urban streets have been increasingly linked to a host of highly desirable social outcomes, including economic growth and innovation (Florida, ), improvements in air quality (Frank et al., ), and increased physical fitness and health (Frank et al., ), to name only a few. For these reasons, many groups and individuals encourage the design of “livable” streets, or streets that seek to better integrate the needs of pedestrians and local developmental objectives into a roadway’s design. There has been a great deal of work describing the characteristics of livable streets (see Duany et al., ; Ewing, ; Jacobs, ), and there is general consensus on their characteristics: livable streets, at a minimum, seek to enhance the pedestrian character of the street by providing a continuous sidewalk network and incorporating design features that minimize the negative impacts of motor vehicle use on pedestrians. Of particular importance is the role played by roadside features such as street trees and on-street parking, which serve to buffer the pedestrian realm from potentially hazardous oncoming traffic, and to provide spatial definition to the public right-of-way. Indeed, many livability advocates assert that trees, as much as any other single feature, can play a central role in enhancing a roadway’s livability (Duany et al., ; Jacobs, ). While most would agree that the inclusion of trees and other streetscape features enhances the aesthetic quality of a roadway, there is substantive disagreement about their safety effects (see Figure ). Conventional engineering practice encourages the design of roadsides that will allow a vehicle leaving the travelway to safely recover before encountering a potentially hazardous fixed object. When one considers the aggregate statistics on run-off-roadway crashes, there is indeed ",
"title": ""
}
] |
[
{
"docid": "4b494016220eb5442642e34c3ed2d720",
"text": "BACKGROUND\nTreatments for alopecia are in high demand, but not all are safe and reliable. Dalteparin and protamine microparticles (D/P MPs) can effectively carry growth factors (GFs) in platelet-rich plasma (PRP).\n\n\nOBJECTIVE\nTo identify the effects of PRP-containing D/P MPs (PRP&D/P MPs) on hair growth.\n\n\nMETHODS & MATERIALS\nParticipants were 26 volunteers with thin hair who received five local treatments of 3 mL of PRP&D/P MPs (13 participants) or PRP and saline (control, 13 participants) at 2- to 3-week intervals and were evaluated for 12 weeks. Injected areas comprised frontal or parietal sites with lanugo-like hair. Experimental and control areas were photographed. Consenting participants underwent biopsies for histologic examination.\n\n\nRESULTS\nD/P MPs bind to various GFs contained in PRP. Significant differences were seen in hair cross-section but not in hair numbers in PRP and PRP&D/P MP injections. The addition of D/P MPs to PRP resulted in significant stimulation in hair cross-section. Microscopic findings showed thickened epithelium, proliferation of collagen fibers and fibroblasts, and increased vessels around follicles.\n\n\nCONCLUSION\nPRP&D/P MPs and PRP facilitated hair growth but D/P MPs provided additional hair growth. The authors have indicated no significant interest with commercial supporters.",
"title": ""
},
{
"docid": "cc85e917ca668a60461ba6848e4c3b42",
"text": "In this paper a generic method for fault detection and isolation (FDI) in manufacturing systems considered as discrete event systems (DES) is presented. The method uses an identified model of the closed loop of plant and controller built on the basis of observed fault free system behavior. An identification algorithm known from literature is used to determine the fault detection model in form of a non-deterministic automaton. New results of how to parameterize this algorithm are reported. To assess the fault detection capability of an identified automaton, probabilistic measures are proposed. For fault isolation, the concept of residuals adapted for DES is used by defining appropriate set operations representing generic fault symptoms. The method is applied to a case study system.",
"title": ""
},
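The fault-detection step described above, checking whether the observed event sequence is still explainable by the identified fault-free automaton, can be sketched as follows. The residual-based isolation part of the method is not shown, and the tiny cylinder model is purely illustrative.

```python
def detect_fault(transitions, initial_states, observed_events):
    """Flag a fault as soon as the observed event sequence leaves the behaviour
    of the identified non-deterministic automaton (fault-free model).

    transitions: dict mapping (state, event) -> set of successor states
    """
    current = set(initial_states)
    for k, event in enumerate(observed_events):
        nxt = set()
        for s in current:
            nxt |= transitions.get((s, event), set())
        if not nxt:            # no state of the model explains this event
            return f"fault detected at step {k} (unexpected event '{event}')"
        current = nxt
    return "no fault detected"

# Tiny fault-free model: a cylinder that must extend before it can retract.
model = {("idle", "extend"): {"out"}, ("out", "retract"): {"idle"}}
print(detect_fault(model, {"idle"}, ["extend", "retract", "retract"]))
```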
{
"docid": "5561d77226a21b2f2f4598709f36b8b2",
"text": "Presence of crystals in urine sediment is not an infrequent finding, and its observation does not have any especial clinical signification in most situations. Some factors as pH, temperature, and emission time of the urine may play a part in their formation. Nevertheless, in some diseases such as congenital anomalies or severe liver pathology, some types of crystals (cystine, tyrosine, and leucine) are quite rare and its identification is very important. In urinary cytopathology, crystals such as uric acid, calcium oxalate dihydrate, and ammonium magnesium phosphate (struvite) are among those that may be frequently observed in urine sediment. Taking into account its morphology, they are usually named as ‘‘lemons’’ (uric acid), ‘‘envelopes’’ (calcium oxalate dihydrate), and ‘‘coffin linds’’ (struvite). Here, a type of crystal (calcium oxalate monohydrate), not as frequent as those mentioned earlier is showed (Fig. C-1). The image belongs to a voided urine smear from a patient suffering acute renal failure. Calcium oxalate monohydrate crystals vary in size and, depending on their position, they give a different projection showing the typical ‘‘dumbbell’’ shape or round–oval morphology with a central depression (they are also described as ‘‘hour-glass\" or ‘‘sheaf’’). Factors such as excessive consumption of certain aliments rich in oxalates (spinaches, asparagus, tomatoes, etc), a decrease in the urinary pH, and a scant diuresis may contribute to its development. The presence of calcium oxalate crystals (mono or dihydrate) in the urine sediment does not mean necessary the existence of calculi in the urinary tract. In fresh and unstained urine samples, and depending of their size, this type of crystals must be differentiated from erythrocytes, yeast, and parasitic ova, especially Enterobius vermicularis. However, its high refraction index can solve the problem.",
"title": ""
},
{
"docid": "f14e128c17a95e8f549f822dad408133",
"text": "Capparis Spinosa L. is an aromatic plant growing wild in dry regions around the Mediterranean basin. Capparis Spinosa was shown to possess several properties such as antioxidant, antifungal, and anti-hepatotoxic actions. In this work, we aimed to evaluate immunomodulatory properties of Capparis Spinosa leaf extracts in vitro on human peripheral blood mononuclear cells (PBMCs) from healthy individuals. Using MTT assay, we identified a range of Capparis Spinosa doses, which were not toxic. Unexpectedly, we found out that Capparis Spinosa aqueous fraction exhibited an increase in cell metabolic activity, even though similar doses did not affect cell proliferation as shown by CFSE. Interestingly, Capparis Spinosa aqueous fraction appeared to induce an overall anti-inflammatory response through significant inhibition of IL-17 and induction of IL-4 gene expression when PBMCs were treated with the non toxic doses of 100 and/or 500 μg/ml. Phytoscreening analysis of the used Capparis Spinosa preparations showed that these contain tannins; sterols, alkaloids; polyphenols and flavonoids. Surprisingly, quantification assays showed that our Capparis Spinosa preparation contains low amounts of polyphenols relative to Capparis Spinosa used in other studies. This Capparis Spinosa also appeared to act as a weaker scavenging free radical agent as evidenced by DPPH radical scavenging test. Finally, polyphenolic compounds including catechin, caffeic acid, syringic acid, rutin and ferulic acid were identified by HPLC, in the Capparis spinosa preparation. Altogether, these findings suggest that our Capparis Spinosa preparation contains interesting compounds, which could be used to suppress IL-17 and to enhance IL-4 gene expression in certain inflammatory situations. Other studies are underway in order to identify the compound(s) underlying this effect.",
"title": ""
},
{
"docid": "4f6f441129aa47b09984f893b910035f",
"text": "Hydroxycinnamic acids (such as ferulic, caffeic, sinapic, and p-coumaric acids) are a group of compounds highly abundant in food that may account for about one-third of the phenolic compounds in our diet. Hydroxycinnamic acids have gained an increasing interest in health because they are known to be potent antioxidants. These compounds have been described as chain-breaking antioxidants acting through radical scavenging activity, that is related to their hydrogen or electron donating capacity and to the ability to delocalize/stabilize the resulting phenoxyl radical within their structure. The free radical scavenger ability of antioxidants can be predicted from standard one-electron potentials. Thus, voltammetric methods have often been applied to characterize a diversity of natural and synthetic antioxidants essentially to get an insight into their mechanism and also as an important tool for the rational design of new and potent antioxidants. The structure-property-activity relationships (SPARs) correlations already established for this type of compounds suggest that redox potentials could be considered a good measure of antioxidant activity and an accurate guideline on the drug discovery and development process. Due to its magnitude in the antioxidant field, the electrochemistry of hydroxycinnamic acid-based antioxidants is reviewed highlighting the structure-property-activity relationships (SPARs) obtained so far.",
"title": ""
},
{
"docid": "a39b83010f5c4094bc7636fd550a71bd",
"text": "Trend following (TF) is trading philosophy by which buying/selling decisions are made solely according to the observed market trend. For many years, many manifestations of TF such as a software program called Turtle Trader, for example, emerged in the industry. Surprisingly little has been studied in academic research about its algorithms and applications. Unlike financial forecasting, TF does not predict any market movement; instead it identifies a trend at early time of the day, and trades automatically afterwards by a pre-defined strategy regardless of the moving market directions during run time. Trend following trading has been popular among speculators. However it remains as a trading method where human judgment is applied in setting the rules (aka the strategy) manually. Subsequently the TF strategy is executed in pure objective operational manner. Finding the correct strategy at the beginning is crucial in TF. This usually involves human intervention in first identifying a trend, and configuring when to place an order and close it out, when certain conditions are met. In this paper, we evaluated and compared a collection of TF algorithms that can be programmed in a computer system for automated trading. In particular, a new version of TF called trend recalling model is presented. It works by partially matching the current market trend with one of the proven successful patterns from the past. Our experiments based on real stock market data show that this method has an edge over the other trend following methods in profitability. The results show that TF however is still limited by market fluctuation (volatility), and the ability to identify trend signal. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
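As a concrete, if generic, illustration of a pre-defined TF rule of the kind discussed above (trade the observed trend, forecast nothing), here is a simple moving-average crossover sketch. This is not the paper's trend recalling model; the window lengths and the long/flat position scheme are arbitrary choices.

```python
import numpy as np

def trend_following_positions(prices, fast=10, slow=40):
    """Generic moving-average rule: hold the asset while the fast average is
    above the slow one, stay out otherwise; no price forecasting involved."""
    prices = np.asarray(prices, dtype=float)
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    fast_ma = fast_ma[-len(slow_ma):]          # align the two series in time
    return np.where(fast_ma > slow_ma, 1, 0)   # 1 = long, 0 = flat

rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(0.05, 1.0, 500))   # synthetic drifting series
print(trend_following_positions(prices)[:20])
```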
{
"docid": "3726f6ddd4166c431f0847cdf23eb415",
"text": "We introduce an approach that leverages surface normal predictions, along with appearance cues, to retrieve 3D models for objects depicted in 2D still images from a large CAD object library. Critical to the success of our approach is the ability to recover accurate surface normals for objects in the depicted scene. We introduce a skip-network model built on the pre-trained Oxford VGG convolutional neural network (CNN) for surface normal prediction. Our model achieves state-of-the-art accuracy on the NYUv2 RGB-D dataset for surface normal prediction, and recovers fine object detail compared to previous methods. Furthermore, we develop a two-stream network over the input image and predicted surface normals that jointly learns pose and style for CAD model retrieval. When using the predicted surface normals, our two-stream network matches prior work using surface normals computed from RGB-D images on the task of pose prediction, and achieves state of the art when using RGB-D input. Finally, our two-stream network allows us to retrieve CAD models that better match the style and pose of a depicted object compared with baseline approaches.",
"title": ""
},
{
"docid": "54e5cd296371e7e058a00b1835251242",
"text": "In this paper, a quasi-millimeter-wave wideband bandpass filter (BPF) is designed by using a microstrip dual-mode ring resonator and two folded half-wavelength resonators. Based on the transmission line equivalent circuit of the filter, variations of the frequency response of the filter versus the circuit parameters are investigated first by using the derived formulas and circuit simulators. Then a BPF with a 3dB fractional bandwidth (FBW) of 20% at 25.5 GHz is designed, which realizes the desired wide passband, sharp skirt property, and very wide stopband. Finally, the designed BPF is fabricated, and its measured frequency response is found agree well with the simulated result.",
"title": ""
},
{
"docid": "b69f2c426f86ad0e07172eb4d018b818",
"text": "Versatile motor skills for hitting and throwing motions can be observed in humans already in early ages. Future robots require high power-to-weight ratios as well as inherent long operational lifetimes without breakage in order to achieve similar perfection. Robustness due to passive compliance and high-speed catapult-like motions as possible with fast energy release are further beneficial characteristics. Such properties can be realized with antagonistic muscle-based designs. Additionally, control algorithms need to exploit the full potential of the robot. Learning control is a promising direction due to its the potential to capture uncertainty and control of complex systems. The aim of this paper is to build a robotic arm that is capable of generating high accelerations and sophisticated trajectories as well as enable exploration at such speeds for robot learning approaches. Hence, we have designed a light-weight robot arm with moving masses below 700 g with powerful antagonistic compliant actuation with pneumatic artificial muscles. Rather than recreating human anatomy, our system is designed to be easy to control in order to facilitate future learning of fast trajectory tracking control. The resulting robot is precise at low speeds using a simple PID controller while reaching high velocities of up to 12 m/s in task space and 1500 deg/s in joint space. This arm will enable new applications in fast changing and uncertain task like robot table tennis while being a sophisticated and reproducible test-bed for robot skill learning methods. Construction details are available.",
"title": ""
},
{
"docid": "236dc9aa7d8c78698cbff770184db32b",
"text": "The prevalence of diet-related chronic diseases strongly impacts global health and health services. Currently, it takes training and strong personal involvement to manage or treat these diseases. One way to assist with dietary assessment is through computer vision systems that can recognize foods and their portion sizes from images and output the corresponding nutritional information. When multiple food items may exist, a food segmentation stage should also be applied before recognition. In this study, we propose a method to detect and segment the food of already detected dishes in an image. The method combines region growing/merging techniques with a deep CNN-based food border detection. A semi-automatic version of the method is also presented that improves the result with minimal user input. The proposed methods are trained and tested on non-overlapping subsets of a food image database including 821 images, taken under challenging conditions and annotated manually. The automatic and semi-automatic dish segmentation methods reached average accuracies of 88% and 92%, respectively, in roughly 0.5 seconds per image.",
"title": ""
},
{
"docid": "6a59641369fefcb7c7a917718f1d067c",
"text": "This paper presents an adaptive fuzzy sliding-mode dynamic controller (AFSMDC) of the car-like mobile robot (CLMR) for the trajectory tracking issue. First, a kinematics model of the nonholonomic CLMR is introduced. Then, according to the Lagrange formula, a dynamic model of the CLMR is created. For a real time trajectory tracking problem, an optimal controller capable of effectively driving the CLMR to track the desired trajectory is necessary. Therefore, an AFSMDC is proposed to accomplish the tracking task and to reduce the effect of the external disturbances and system uncertainties of the CLMR. The proposed controller could reduce the tracking errors between the output of the velocity controller and the real velocity of the CLMR. Therefore, the CLMR could track the desired trajectory without posture and orientation errors. Additionally, the stability of the proposed controller is proven by utilizing the Lyapunov stability theory. Finally, the simulation results validate the effectiveness of the proposed AFSMDC.",
"title": ""
},
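The kinematics model mentioned in the passage above is usually the standard bicycle model of a car-like robot. A minimal Euler-integration sketch is given below; the wheelbase and time step are illustrative, and the Lagrangian dynamic model and the adaptive fuzzy sliding-mode controller themselves are not reproduced.

```python
import numpy as np

def clmr_kinematics_step(state, v, delta, wheelbase=0.26, dt=0.02):
    """One Euler step of the kinematic (bicycle) model of a car-like mobile
    robot: state = (x, y, theta), inputs = forward speed v and steering
    angle delta."""
    x, y, theta = state
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += (v / wheelbase) * np.tan(delta) * dt
    return np.array([x, y, theta])

# Drive in a circle: constant speed and constant steering angle.
state = np.zeros(3)
for _ in range(500):
    state = clmr_kinematics_step(state, v=0.5, delta=0.2)
print(state)
```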
{
"docid": "d103d856c51a4744d563dff2eff224a7",
"text": "Automotive engines is an important application for model-based diagnosis because of legislative regulations. A diagnosis system for the air-intake system of a turbo-charged engine is constructed. The design is made in a systematic way and follows a framework of hypothesis testing. Different types of sensor faults and leakages are considered. It is shown how many different types of fault models, e.g., additive and multiplicative faults, can be used within one common diagnosis system, and using the same underlying design principle. The diagnosis system is experimentally validated on a real engine using industry-standard dynamic test-cycles.",
"title": ""
},
{
"docid": "f19057578e0fce86e57d762d5805e676",
"text": "A polymer network of intranuclear lamin filaments underlies the nuclear envelope and provides mechanical stability to the nucleus in metazoans. Recent work demonstrates that the expression of A-type lamins scales positively with the stiffness of the cellular environment, thereby coupling nuclear and extracellular mechanics. Using the spectrin-actin network at the erythrocyte plasma membrane as a model, we contemplate how the relative stiffness of the nuclear scaffold impinges on the growing number of interphase-specific nuclear envelope remodeling events, including recently discovered, nuclear envelope-specialized quality control mechanisms. We suggest that a stiffer lamina impedes these remodeling events, necessitating local lamina remodeling and/or concomitant scaling of the efficacy of membrane-remodeling machineries that act at the nuclear envelope.",
"title": ""
},
{
"docid": "e23cebac640a47643b3a3249eae62f89",
"text": "Objective: To assess the factors that contribute to impaired quinine clearance in acute falciparum malaria. Patients: Sixteen adult Thai patients with severe or moderately severe falciparum malaria were studied, and 12 were re-studied during convalescence. Methods: The clearance of quinine, dihydroquinine (an impurity comprising up to 10% of commercial quinine formulations), antipyrine (a measure of hepatic mixed-function oxidase activity), indocyanine green (ICG) (a measure of liver blood flow), and iothalamate (a measure of glomerular filtration rate) were measured simultaneously, and the relationship of these values to the␣biotransformation of quinine to the active metabolite 3-hydroxyquinine was assessed. Results: During acute malaria infection, the systemic clearance of quinine, antipyrine and ICG and the biotransformation of quinine to 3-hydroxyquinine were all reduced significantly when compared with values during convalescence. Iothalamate clearance was not affected significantly and did not correlate with the clearance of any of the other compounds. The clearance of total and free quinine correlated significantly with antipyrine clearance (r s = 0.70, P = 0.005 and r s = 0.67, P = 0.013, respectively), but not with ICG clearance (r s = 0.39 and 0.43 respectively, P > 0.15). In a multiple regression model, antipyrine clearance and plasma protein binding accounted for 71% of the variance in total quinine clearance in acute malaria. The pharmacokinetic properties of dihydroquinine were generally similar to those of quinine, although dihydroquinine clearance was less affected by acute malaria. The mean ratio of quinine to 3-hydroxyquinine area under the plasma concentration-time curve (AUC) values in acute malaria was 12.03 compared with 6.92 during convalescence P=0.01. The mean plasma protein binding of 3-hydroxyquinine was 46%, which was significantly lower than that of quinine (90.5%) or dihydroquinine (90.5%). Conclusion: The reduction in quinine clearance in acute malaria results predominantly from a disease-induced dysfunction in hepatic mixed-function oxidase activity (principally CYP 3A) which impairs the conversion of quinine to its major metabolite, 3-hydroxyquinine. The metabolite contributes approximately 5% of the antimalarial activity of the parent compound in malaria, but up to 10% during convalescence.",
"title": ""
},
{
"docid": "b66846f076d41c8be3f5921cc085d997",
"text": "We present a novel hierarchical force-directed method for drawing large graphs. The algorithm produces a graph embedding in an Euclidean space E of any dimension. A two or three dimensional drawing of the graph is then obtained by projecting a higher-dimensional embedding into a two or three dimensional subspace of E. Projecting high-dimensional drawings onto two or three dimensions often results in drawings that are “smoother” and more symmetric. Among the other notable features of our approach are the utilization of a maximal independent set filtration of the set of vertices of a graph, a fast energy function minimization strategy, efficient memory management, and an intelligent initial placement of vertices. Our implementation of the algorithm can draw graphs with tens of thousands of vertices using a negligible amount of memory in less than one minute on a mid-range PC.",
"title": ""
},
{
"docid": "45d72f6c70c034122c86301be9531e97",
"text": "Multiple Classifier Systems (MCS) have been widely studied as an alternative for increasing accuracy in pattern recognition. One of the most promising MCS approaches is Dynamic Selection (DS), in which the base classifiers are selected on the fly, according to each new sample to be classified. This paper provides a review of the DS techniques proposed in the literature from a theoretical and empirical point of view. We propose an updated taxonomy based on the main characteristics found in a dynamic selection system: (1) The methodology used to define a local region for the estimation of the local competence of the base classifiers; (2) The source of information used to estimate the level of competence of the base classifiers, such as local accuracy, oracle, ranking and probabilistic models, and (3) The selection approach, which determines whether a single or an ensemble of classifiers is selected. We categorize the main dynamic selection techniques in the DS literature based on the proposed taxonomy. We also conduct an extensive experimental analysis, considering a total of 18 state-of-the-art dynamic selection techniques, as well as static ensemble combination and single classification models. To date, this is the first analysis comparing all the key DS techniques under the same experimental protocol. Furthermore, we also present several perspectives and open research questions that can be used as a guide for future works in this domain. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e1b7fc500b064f359c67772d046b4cde",
"text": "We propose a novel regularization technique for supervised and semi-supervised training of large models like deep neural network. By including into objective function the local smoothness of predictive distribution around each training data point, not only were we able to extend the work of (Goodfellow et al. (2015)) to the setting of semi-supervised training, we were also able to eclipse current state of the art supervised and semi-supervised methods on the permutation invariant MNIST classification task.",
"title": ""
},
{
"docid": "529ca36809a7052b9495279aa1081fcc",
"text": "To effectively control complex dynamical systems, accurate nonlinear models are typically needed. However, these models are not always known. In this paper, we present a data-driven approach based on Gaussian processes that learns models of quadrotors operating in partially unknown environments. What makes this challenging is that if the learning process is not carefully controlled, the system will go unstable, i.e., the quadcopter will crash. To this end, barrier certificates are employed for safe learning. The barrier certificates establish a non-conservative forward invariant safe region, in which high probability safety guarantees are provided based on the statistics of the Gaussian Process. A learning controller is designed to efficiently explore those uncertain states and expand the barrier certified safe region based on an adaptive sampling scheme. Simulation results are provided to demonstrate the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "7569c7f3983c608151fb5bbb093b3293",
"text": "A unilateral probe-fed rectangular dielectric resonator antenna (DRA) with a very small ground plane is investigated. The small ground plane simultaneously works as an excitation patch that excites the fundamental TE111 mode of the DRA, which is an equivalent magnetic dipole. By combining this equivalent magnetic dipole and the electric dipole of the probe, a lateral radiation pattern can be obtained. This complementary antenna has the same E- and H-Planes patterns with low back radiation. Moreover, the cardioid-shaped pattern can be easily steered in the horizontal plane by changing the angular position of the patch (ground). To verify the idea, a prototype operating in 3.5-GHz long term evolution band (3.4–3.6 GHz) was fabricated and measured, with reasonable agreement between the measured and simulated results obtained. It is found that the measured 15-dB front-to-back-ratio bandwidth is 10.9%.",
"title": ""
},
{
"docid": "6e9e448eb2313ca76106684e6c126c55",
"text": "We compared anthropometric and fitness performance data from graduate male youth players from an elite soccer academy who on leaving the institution were either successful or not in progressing to higher standards of play. Altogether, 161 players were grouped according to whether they achieved international or professional status or remained amateur. Measures were taken across three age categories (under 14, 15 and 16 years of age). Players were assessed using standard measures of anthropometric and fitness characteristics. The skeletal age of players was also measured to determine maturity status. Multivariate analysis (MANCOVA) identified a significant (p<0.001) effect for playing status. Univariate analysis revealed a significant difference in maturity status in amateurs and professionals versus internationals (p<0.05), in body mass in professionals versus amateurs (d=0.56, p<0.05), in height (d=0.85, p<0.01) and maximal anaerobic power (d=0.79, p<0.01) in both professionals and internationals versus amateurs. There was also a significant difference in counter-movement jump (d=0.53, p<0.05) and 40-m sprint time (d=0.50, p<0.05) in internationals versus amateurs, as well as a significant main effect for age and playing position (p<0.001). Significant differences were reported for maturity status, body mass, height, peak concentric torque, maximal anaerobic power, and sprint and jump performance with results dependant on age category and playing position. These results suggest that anthropometric and fitness assessments of elite youth soccer players can play a part in determining their chances of proceeding to higher achievement levels.",
"title": ""
}
] |
scidocsrr
|
453fd2fcd597a406d77b6fa4aca788eb
|
Skeleton-Based Action Recognition with Synchronous Local and Non-local Spatio-temporal Learning and Frequency Attention
|
[
{
"docid": "210a1dda2fc4390a5b458528b176341e",
"text": "Many classic methods have shown non-local self-similarity in natural images to be an effective prior for image restoration. However, it remains unclear and challenging to make use of this intrinsic property via deep networks. In this paper, we propose a non-local recurrent network (NLRN) as the first attempt to incorporate non-local operations into a recurrent neural network (RNN) for image restoration. The main contributions of this work are: (1) Unlike existing methods that measure self-similarity in an isolated manner, the proposed non-local module can be flexibly integrated into existing deep networks for end-to-end training to capture deep feature correlation between each location and its neighborhood. (2) We fully employ the RNN structure for its parameter efficiency and allow deep feature correlation to be propagated along adjacent recurrent states. This new design boosts robustness against inaccurate correlation estimation due to severely degraded images. (3) We show that it is essential to maintain a confined neighborhood for computing deep feature correlation given degraded images. This is in contrast to existing practice [43] that deploys the whole image. Extensive experiments on both image denoising and super-resolution tasks are conducted. Thanks to the recurrent non-local operations and correlation propagation, the proposed NLRN achieves superior results to state-of-the-art methods with many fewer parameters. The code is available at https://github.com/Ding-Liu/NLRN.",
"title": ""
},
{
"docid": "b5453d9e4385d5a5ff77997ad7e3f4f0",
"text": "We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the nonlocal means (NL-means), based on a nonlocal averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.",
"title": ""
}
] |
[
{
"docid": "bef86730221684b8e9236cb44179b502",
"text": "secure software. In order to find the real-life issues, this case study was initiated to investigate whether the existing FDD can withstand requirements change and software security altogether. The case study was performed in controlled environment – in a course called Application Development—a four credit hours course at UTM. The course began by splitting up the class to seven software development groups and two groups were chosen to implement the existing process of FDD. After students were given an introduction to FDD, they started to adapt the processes to their proposed system. Then students were introduced to the basic concepts on how to make software systems secure. Though, they were still new to security and FDD, however, this study produced a lot of interest among the students. The students seemed to enjoy the challenge of creating secure system using FDD model.",
"title": ""
},
{
"docid": "f202e380dfd1022e77a04212394be7e1",
"text": "As usage of cloud computing increases, customers are mainly concerned about choosing cloud infrastructure with sufficient security. Concerns are greater in the multitenant environment on a public cloud. This paper addresses the security assessment of OpenStack open source cloud solution and virtual machine instances with different operating systems hosted in the cloud. The methodology and realized experiments target vulnerabilities from both inside and outside the cloud. We tested four different platforms and analyzed the security assessment. The main conclusions of the realized experiments show that multi-tenant environment raises new security challenges, there are more vulnerabilities from inside than outside and that Linux based Ubuntu, CentOS and Fedora are less vulnerable than Windows. We discuss details about these vulnerabilities and show how they can be solved by appropriate patches and other solutions. Keywords-Cloud Computing; Security Assessment; Virtualization.",
"title": ""
},
{
"docid": "a7607444b58f0e86000c7f2d09551fcc",
"text": "Background modeling is a critical component for various vision-based applications. Most traditional methods tend to be inefficient when solving large-scale problems. In this paper, we introduce sparse representation into the task of large-scale stable-background modeling, and reduce the video size by exploring its discriminative frames. A cyclic iteration process is then proposed to extract the background from the discriminative frame set. The two parts combine to form our sparse outlier iterative removal (SOIR) algorithm. The algorithm operates in tensor space to obey the natural data structure of videos. Experimental results show that a few discriminative frames determine the performance of the background extraction. Furthermore, SOIR can achieve high accuracy and high speed simultaneously when dealing with real video sequences. Thus, SOIR has an advantage in solving large-scale tasks.",
"title": ""
},
{
"docid": "b91f80bc17de9c4e15ec80504e24b045",
"text": "Motivated by the design of the well-known Enigma machine, we present a novel ultra-lightweight encryption scheme, referred to as Hummingbird, and its applications to a privacy-preserving identification and mutual authentication protocol for RFID applications. Hummingbird can provide the designed security with a small block size and is therefore expected to meet the stringent response time and power consumption requirements described in the ISO protocol without any modification of the current standard. We show that Hummingbird is resistant to the most common attacks such as linear and differential cryptanalysis. Furthermore, we investigate some properties for integrating the Hummingbird into a privacypreserving identification and mutual authentication protocol.",
"title": ""
},
{
"docid": "0f853c6ccf6ce4cf025050135662f725",
"text": "This paper describes a technique of applying Genetic Algorithm (GA) to network Intrusion Detection Systems (IDSs). A brief overview of the Intrusion Detection System, genetic algorithm, and related detection techniques is presented. Parameters and evolution process for GA are discussed in detail. Unlike other implementations of the same problem, this implementation considers both temporal and spatial information of network connections in encoding the network connection information into rules in IDS. This is helpful for identification of complex anomalous behaviors. This work is focused on the TCP/IP network protocols.",
"title": ""
},
{
"docid": "b7673dbe46a1118511d811241940e328",
"text": "A 100-MHz–2-GHz closed-loop analog in-phase/ quadrature correction circuit for digital clocks is presented. The proposed circuit consists of a phase-locked loop- type architecture for quadrature error correction. The circuit corrects the phase error to within a 1.5° up to 1 GHz and to within 3° at 2 GHz. It consumes 5.4 mA from a 1.2 V supply at 2 GHz. The circuit was designed in UMC 0.13-<inline-formula> <tex-math notation=\"LaTeX\">$\\mu \\text{m}$ </tex-math></inline-formula> mixed-mode CMOS with an active area of <inline-formula> <tex-math notation=\"LaTeX\">$102\\,\\,\\mu {\\mathrm{ m}} \\times 95\\,\\,\\mu {\\mathrm{ m}}$ </tex-math></inline-formula>. The impact of duty cycle distortion has been analyzed. High-frequency quadrature measurement related issues have been discussed. The proposed circuit was used in two different applications for which the functionality has been verified.",
"title": ""
},
{
"docid": "b32ec798991a2e02813ba617dc93a828",
"text": "To investigate the mechanisms of excitotoxic effects of glutamate on human neuroblastoma SH-SY5Y cells. SH-SY5Y cell viability was measured by MTT assay. Other damaged profile was detected by lactate dehydrogenase (LDH) release and by 4′, 6-diamidino-2-phenylindole (DAPI) staining. The cytosolic calcium concentration was tested by calcium influx assay. The glutamate-induced oxidative stress was analyzed by cytosolic glutathione assay, superoxide dismutase (SOD) assay and extracellular malondialdehyde (MDA) assay. Glutamate treatment caused damage in SHSY5Y cells, including the decrease of cell viability, the increase of LDH release and the alterations of morphological structures. Furthermore, the concentration of cytoplasmic calcium in SH-SY5Y cells was not changed within 20 min following glutamate treatment, while cytosolic calcium concentration significantly increased within 24 h after glutamate treatment, which could not be inhibited by MK801, an antagonist of NMDA receptors, or by LY341495, an antagonist of metabotropic glutamate receptors. On the other hand, oxidative damage was observed in SH-SY5Y cells treated with glutamate, including decreases in glutathione content and SOD activity, and elevation of MDA level, all of which could be alleviated by an antioxidant Tanshinone IIA (Tan IIA, a major active ingredient from a Chinese plant Salvia Miltiorrhiza Bge). Glutamate exerts toxicity in human neuroblastoma SH-SY5Y cells possibly through oxidative damage, not through calcium homeostasis destruction mediated by NMDA receptors. 探讨谷氨酸导致人神经母细胞瘤细胞(SH-SY5Y cells)兴奋性毒损伤的机制。 MTT法检测SH-SY5Y细胞存活率; 测定乳酸脱氢酶释放量观察细胞损伤程度; DAPI染色法观察细胞凋亡形态学特点; 钙流法检测胞浆钙离子浓度变化; 以胞内谷胱甘肽、 超氧化物歧化酶活性和胞外丙二醛含量检测谷氨酸引发SH-SY5Y细胞的氧化应激状态。 谷氨酸导致SH-SY5Y细胞受损, 包括存活率下降、 乳酸脱氢酶释放量增多及形态结构发生改变; 谷氨酸处理20 min 后, 胞浆钙离子浓度无显著改变, 而处理24 h 后, 胞浆钙离子大量增加, 且MK801 (NMDA受体拮抗剂)及LY341495 (代谢型谷氨酸受体拮抗剂)均不能抑制钙离子内流的增多; 谷氨酸可导致SH-SY5Y氧化损伤, 包括胞内谷胱甘肽含量减少、 超氧化物歧化酶活性降低、 胞外脂质过氧化产物丙二醛水平升高等, 而丹参酮IIA (一种抗氧化剂)可减轻这些氧化损伤。 谷氨酸导致SH-SY5Y细胞兴奋性毒损伤可能是通过氧化损伤产生的, 而不依赖于NMDA 受体介导的钙稳态的破坏。",
"title": ""
},
{
"docid": "3085d2de614b6816d7a66cb62823824e",
"text": "Plastic debris is known to undergo fragmentation at sea, which leads to the formation of microscopic particles of plastic; the so called 'microplastics'. Due to their buoyant and persistent properties, these microplastics have the potential to become widely dispersed in the marine environment through hydrodynamic processes and ocean currents. In this study, the occurrence and distribution of microplastics was investigated in Belgian marine sediments from different locations (coastal harbours, beaches and sublittoral areas). Particles were found in large numbers in all samples, showing the wide distribution of microplastics in Belgian coastal waters. The highest concentrations were found in the harbours where total microplastic concentrations of up to 390 particles kg(-1) dry sediment were observed, which is 15-50 times higher than reported maximum concentrations of other, similar study areas. The depth profile of sediment cores suggested that microplastic concentrations on the beaches reflect the global plastic production increase.",
"title": ""
},
{
"docid": "a16be992aa947c8c5d2a7c9899dfbcd8",
"text": "The effect of the Eureka Spring (ES) appliance was investigated on 37 consecutively treated, noncompliant patients with bilateral Class II malocclusions. Lateral cephalographs were taken at the start of orthodontic treatment (T1), at insertion of the ES (T2), and at removal of the ES (T3). The average treatment interval between T2 and T3 was four months. The Class II correction occurred almost entirely by dentoalveolar movement and was almost equally distributed between the maxillary and mandibular dentitions. The rate of molar correction was 0.7 mm/mo. There was no change in anterior face height, mandibular plane angle, palatal plane angle, or gonial angle with treatment. There was a 2 degrees change in the occlusal plane resulting from intrusion of the maxillary molar and the mandibular incisor. Based on the results in this sample, the ES appliance was very effective in correcting Class II malocclusions in noncompliant patients without increasing the vertical dimension.",
"title": ""
},
{
"docid": "04b14e2795afc0faaa376bc17ead0aaf",
"text": "In this paper, an integrated MEMS gyroscope array method composed of two levels of optimal filtering was designed to improve the accuracy of gyroscopes. In the firstlevel filtering, several identical gyroscopes were combined through Kalman filtering into a single effective device, whose performance could surpass that of any individual sensor. The key of the performance improving lies in the optimal estimation of the random noise sources such as rate random walk and angular random walk for compensating the measurement values. Especially, the cross correlation between the noises from different gyroscopes of the same type was used to establish the system noise covariance matrix and the measurement noise covariance matrix for Kalman filtering to improve the performance further. Secondly, an integrated Kalman filter with six states was designed to further improve the accuracy with the aid of external sensors such as magnetometers and accelerometers in attitude determination. Experiments showed that three gyroscopes with a bias drift of 35 degree per hour could be combined into a virtual gyroscope with a drift of 1.07 degree per hour through the first-level filter, and the bias drift was reduced to 0.53 degree per hour after the second-level filtering. It proved that the proposed integrated MEMS gyroscope array is capable of improving the accuracy of the MEMS gyroscopes, which provides the possibility of using these low cost MEMS sensors in high-accuracy application areas.",
"title": ""
},
{
"docid": "2fdf6538c561e05741baafe43ec6f145",
"text": "Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent are effective for tasks involving sequences, visual and otherwise. We describe a class of recurrent convolutional architectures which is end-to-end trainable and suitable for large-scale visual understanding tasks, and demonstrate the value of these models for activity recognition, image captioning, and video description. In contrast to previous models which assume a fixed visual representation or perform simple temporal averaging for sequential processing, recurrent convolutional models are “doubly deep” in that they learn compositional representations in space and time. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Differentiable recurrent models are appealing in that they can directly map variable-length inputs (e.g., videos) to variable-length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent sequence models are directly connected to modern visual convolutional network models and can be jointly trained to learn temporal dynamics and convolutional perceptual representations. Our results show that such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined or optimized.",
"title": ""
},
{
"docid": "c7059c650323a08ac7453ad4185e6c4f",
"text": "Transfer learning is aimed to make use of valuable knowledge in a source domain to help model performance in a target domain. It is particularly important to neural networks, which are very likely to be overfitting. In some fields like image processing, many studies have shown the effectiveness of neural network-based transfer learning. For neural NLP, however, existing studies have only casually applied transfer learning, and conclusions are inconsistent. In this paper, we conduct systematic case studies and provide an illuminating picture on the transferability of neural networks in NLP.1",
"title": ""
},
{
"docid": "fb53b5d48152dd0d71d1816a843628f6",
"text": "Online banking and e-commerce have been experiencing rapid growth over the past few years and show tremendous promise of growth even in the future. This has made it easier for fraudsters to indulge in new and abstruse ways of committing credit card fraud over the Internet. This paper focuses on real-time fraud detection and presents a new and innovative approach in understanding spending patterns to decipher potential fraud cases. It makes use of Self Organization Map to decipher, filter and analyze customer behavior for detection of fraud.",
"title": ""
},
{
"docid": "e6c126454c7d7e99524ff55887d9b15d",
"text": "Dense 3D reconstruction of real world objects containing textureless, reflective and specular parts is a challenging task. Using general smoothness priors such as surface area regularization can lead to defects in the form of disconnected parts or unwanted indentations. We argue that this problem can be solved by exploiting the object class specific local surface orientations, e.g. a car is always close to horizontal in the roof area. Therefore, we formulate an object class specific shape prior in the form of spatially varying anisotropic smoothness terms. The parameters of the shape prior are extracted from training data. We detail how our shape prior formulation directly fits into recently proposed volumetric multi-label reconstruction approaches. This allows a segmentation between the object and its supporting ground. In our experimental evaluation we show reconstructions using our trained shape prior on several challenging datasets.",
"title": ""
},
{
"docid": "87993df44973bd83724baace13ea1aa7",
"text": "OBJECTIVE\nThe objective of this research was to determine the relative impairment associated with conversing on a cellular telephone while driving.\n\n\nBACKGROUND\nEpidemiological evidence suggests that the relative risk of being in a traffic accident while using a cell phone is similar to the hazard associated with driving with a blood alcohol level at the legal limit. The purpose of this research was to provide a direct comparison of the driving performance of a cell phone driver and a drunk driver in a controlled laboratory setting.\n\n\nMETHOD\nWe used a high-fidelity driving simulator to compare the performance of cell phone drivers with drivers who were intoxicated from ethanol (i.e., blood alcohol concentration at 0.08% weight/volume).\n\n\nRESULTS\nWhen drivers were conversing on either a handheld or hands-free cell phone, their braking reactions were delayed and they were involved in more traffic accidents than when they were not conversing on a cell phone. By contrast, when drivers were intoxicated from ethanol they exhibited a more aggressive driving style, following closer to the vehicle immediately in front of them and applying more force while braking.\n\n\nCONCLUSION\nWhen driving conditions and time on task were controlled for, the impairments associated with using a cell phone while driving can be as profound as those associated with driving while drunk.\n\n\nAPPLICATION\nThis research may help to provide guidance for regulation addressing driver distraction caused by cell phone conversations.",
"title": ""
},
{
"docid": "af08fa19de97eed61afd28893692e7ec",
"text": "OpenACC is a new accelerator programming interface that provides a set of OpenMP-like loop directives for the programming of accelerators in an implicit and portable way. It allows the programmer to express the offloading of data and computations to accelerators, such that the porting process for legacy CPU-based applications can be significantly simplified. This paper focuses on the performance aspects of OpenACC using two micro benchmarks and one real-world computational fluid dynamics application. Both evaluations show that in general OpenACC performance is approximately 50\\% lower than CUDA. However, for some applications it can reach up to 98\\% with careful manual optimizations. The results also indicate several limitations of the OpenACC specification that hamper full use of the GPU hardware resources, resulting in a significant performance gap when compared to a fully tuned CUDA code. The lack of a programming interface for the shared memory in particular results in as much as three times lower performance.",
"title": ""
},
{
"docid": "ced0328f339248158e8414c3315330c5",
"text": "Novel inline coplanar-waveguide (CPW) bandpass filters composed of quarter-wavelength stepped-impedance resonators are proposed, using loaded air-bridge enhanced capacitors and broadside-coupled microstrip-to-CPW transition structures for both wideband spurious suppression and size miniaturization. First, by suitably designing the loaded capacitor implemented by enhancing the air bridges printed over the CPW structure and the resonator parameters, the lower order spurious passbands of the proposed filter may effectively be suppressed. Next, by adopting the broadside-coupled microstrip-to-CPW transitions as the fed structures to provide required input and output coupling capacitances and high attenuation level in the upper stopband, the filter with suppressed higher order spurious responses may be achieved. In this study, two second- and fourth-order inline bandpass filters with wide rejection band are implemented and thoughtfully examined. Specifically, the proposed second-order filter has its stopband extended up to 13.3f 0, where f0 stands for the passband center frequency, and the fourth-order filter even possesses better stopband up to 19.04f0 with a satisfactory rejection greater than 30 dB",
"title": ""
},
{
"docid": "f85b08a0e3f38c1471b3c7f05e8a17ba",
"text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. A state tracking module is primarily meant to act as support for a dialog policy but it can also be used as support for dialog corpus summarization and other kinds of information extraction from transcription of dialogs. From a probabilistic view, this is achieved by maintaining a posterior distribution over hidden dialog states composed, in the simplest case, of a set of context dependent variables. Once a dialog policy is defined, deterministic or learnt, it is in charge of selecting an optimal dialog act given the estimated dialog state and a defined reward function. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset that has been converted for the occasion in order to fit the relaxed assumption of a machine reading formulation where the true state is only provided at the very end of each dialog instead of providing the state updates at the utterance level. We show that the proposed tracker gives encouraging results. Finally, we propose to extend the DSTC-2 dataset with specific reasoning capabilities requirement like counting, list maintenance, yes-no question answering and indefinite knowledge management.",
"title": ""
},
{
"docid": "ed39af901c58a8289229550084bc9508",
"text": "Digital elevation maps are simple yet powerful representations of complex 3-D environments. These maps can be built and updated using various sensors and sensorial data processing algorithms. This paper describes a novel approach for modeling the dynamic 3-D driving environment, the particle-based dynamic elevation map, each cell in this map having, in addition to height, a probability distribution of speed in order to correctly describe moving obstacles. The dynamic elevation map is represented by a population of particles, each particle having a position, a height, and a speed. Particles move from one cell to another based on their speed vectors, and they are created, multiplied, or destroyed using an importance resampling mechanism. The importance resampling mechanism is driven by the measurement data provided by a stereovision sensor. The proposed model is highly descriptive for the driving environment, as it can easily provide an estimation of the height, speed, and occupancy of each cell in the grid. The system was proven robust and accurate in real driving scenarios, by comparison with ground truth data.",
"title": ""
},
{
"docid": "678ef706d4cb1c35f6b3d82bf25a4aa7",
"text": "This article is an extremely rapid survey of the modern theory of partial differential equations (PDEs). Sources of PDEs are legion: mathematical physics, geometry, probability theory, continuum mechanics, optimization theory, etc. Indeed, most of the fundamental laws of the physical sciences are partial differential equations and most papers published in applied math concern PDEs. The following discussion is consequently very broad, but also very shallow, and will certainly be inadequate for any given PDE the reader may care about. The goal is rather to highlight some of the many key insights and unifying principles across the entire subject.",
"title": ""
}
] |
scidocsrr
|
591cad4ec3de9279914026808ada621f
|
A methodology for generating natural language paraphrases
|
[
{
"docid": "baefc6e7e7968651f3e36acfd62b094d",
"text": "The task of paraphrasing is inherently familiar to speakers of all languages. Moreover, the task of automatically generating or extracting semantic equivalences for the various units of language—words, phrases, and sentences—is an important part of natural language processing (NLP) and is being increasingly employed to improve the performance of several NLP applications. In this article, we attempt to conduct a comprehensive and application-independent survey of data-driven phrasal and sentential paraphrase generation methods, while also conveying an appreciation for the importance and potential use of paraphrases in the field of NLP research. Recent work done in manual and automatic construction of paraphrase corpora is also examined. We also discuss the strategies used for evaluating paraphrase generation techniques and briefly explore some future trends in paraphrase generation.",
"title": ""
},
{
"docid": "96669cea810d2918f2d35875f87d45f2",
"text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.",
"title": ""
}
] |
[
{
"docid": "ae6a3b7943c0611538192c49ae3e57c9",
"text": "Mindfulness, a concept originally derived from Buddhist psychology, is essential for some well-known clinical interventions. Therefore an instrument for measuring mindfulness is useful. We report here on two studies constructing and validating the Freiburg Mindfulness Inventory (FMI) including a short form. A preliminary questionnaire was constructed through expert interviews and extensive literature analysis and tested in 115 subjects attending mindfulness meditation retreats. This psychometrically sound 30-item scale with an internal consistency of Cronbach alpha = .93 was able to significantly demonstrate the increase in mindfulness after the retreat and to discriminate between experienced and novice meditators. In a second study we broadened the scope of the concept to 86 subjects without meditation experience, 117 subjects with clinical problems, and 54 participants from retreats. Reducing the scale to a short form with 14 items resulted in a semantically robust and psychometrically stable (alpha = .86) form. Correlation 0191-8869/$ see front matter 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.paid.2005.11.025 * Corresponding author. Address: University of Northampton, School of Social Sciences, Division of Psychology and Samueli Institute—European Office, Boughton Green Road, Northampton NN2 7AL, UK. E-mail address: [email protected] (H. Walach). www.elsevier.com/locate/paid Personality and Individual Differences 40 (2006) 1543–1555 with other relevant constructs (self-awareness, dissociation, global severity index, meditation experience in years) was significant in the medium to low range of correlations and lends construct validity to the scale. Principal Component Analysis suggests one common factor. This short scale is sensitive to change and can be used also with subjects without previous meditation experience. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c73b5b81fa75676e96309610b4c6ac81",
"text": "We present a theory of excess stock market volatility, in which market movements are due to trades by very large institutional investors in relatively illiquid markets. Such trades generate significant spikes in returns and volume, even in the absence of important news about fundamentals. We derive the optimal trading behavior of these investors, which allows us to provide a unified explanation for apparently disconnected empirical regularities in returns, trading volume and investor size.",
"title": ""
},
{
"docid": "3c4219212dfeb01d2092d165be0cfb44",
"text": "Classical substrate noise analysis considers the silicon resistivity of an integrated circuit only as doping dependent besides neglecting diffusion currents as well. In power circuits minority carriers are injected into the substrate and propagate by drift–diffusion. In this case the conductivity of the substrate is spatially modulated and this effect is particularly important in high injection regime. In this work a description of the coupling between majority and minority drift–diffusion currents is presented. A distributed model of the substrate is then proposed to take into account the conductivity modulation and its feedback on diffusion processes. The model is expressed in terms of equivalent circuits in order to be fully compatible with circuit simulators. The simulation results are then discussed for diodes and bipolar transistors and compared to the ones obtained from physical device simulations and measurements. 2014 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "55ae4973d989d1dbebf7be61dbc532af",
"text": "and a Ph.D in statistics from Stanford University in 1984. His Ph.D thesis is titled Principal Curves and Surfaces, which is now an active area of research, and bears a close relationship to self organizing maps. He spent 9 years in the Statistics and Data Analysis research department at AT&T Bell laboratories, Murray Hill, New Jersey. His research has focused on applied modelling problems, especially non-parametric regression and classiication. Robert Tibshirani is a professor in the Statistics and Biostatistics departments at University of Toronto. He received a B.Sc. degree from the University of Waterloo, and a Ph.D in statistics from Stanford University in 1984. He has many research articles on nonparametric regression and classiication. has been an active researcher on bootstrap technology for the past 11 years. His 1984 Ph.D thesis spawned the currently lively research area known as Local Likelihood. He is a recent recipient of a Guggen-heim Foundation fellowship. He is a fellow of the American Statistical association and the Institute of Mathematical Statistics. HASTIE AND TIBSHIRANI: DISCRIMINANT ADAPTIVE NEAREST NEIGHBOR CLASSIFICATION 9 reduction. 10] proposed a technique close to ours for the two class problem. In our terminology they used our metric with W = I and = 0, with B determined locally in a neighborhood of size K M. In eeect this extends the neighborhood innnitely in the null space of the local between class directions, but they restrict this neighborhood to the original K M observations. This amounts to projecting the local data onto the line joining the two local centroids. In our experiments this approach tended to perform on average 10% worse than our metric, and we did not pursue it further. 11] extended this to J > 2 classes, but here their approach diiers even more from ours. They computed a weighted average of the J local centroids from the overall average, and project the data onto it, a one-dimensional projection. Even with = 0 we project the data onto the subspace containing the local centroids, and deform the metric appropriately in that subspace. 12] recognized a shortfall of the Short and Fukanaga approach, since the averaging can cause cancellation, and proposed other metrics to avoid this. Although their metrics diier from ours, the Chi-squared motivation for our metric (3) was inspired by the metrics developed in their paper. We have not tested out their proposals, but they report results of experiments with …",
"title": ""
},
{
"docid": "b4e1fdeb6d467eddfea074b802558fb8",
"text": "This paper proposes a novel and more accurate iris segmentation framework to automatically segment iris region from the face images acquired with relaxed imaging under visible or near-infrared illumination, which provides strong feasibility for applications in surveillance, forensics and the search for missing children, etc. The proposed framework is built on a novel total-variation based formulation which uses l1 norm regularization to robustly suppress noisy texture pixels for the accurate iris localization. A series of novel and robust post processing operations are introduced to more accurately localize the limbic boundaries. Our experimental results on three publicly available databases, i.e., FRGC, UBIRIS.v2 and CASIA.v4-distance, achieve significant performance improvement in terms of iris segmentation accuracy over the state-of-the-art approaches in the literature. Besides, we have shown that using iris masks generated from the proposed approach helps to improve iris recognition performance as well. Unlike prior work, all the implementations in this paper are made publicly available to further advance research and applications in biometrics at-d-distance.",
"title": ""
},
{
"docid": "40ec8caea52ba75a6ad1e100fb08e89a",
"text": "Disambiguating concepts and entities in a context sensitive way is a fundamental problem in natural language processing. The comprehensiveness of Wikipedia has made the online encyclopedia an increasingly popular target for disambiguation. Disambiguation to Wikipedia is similar to a traditional Word Sense Disambiguation task, but distinct in that the Wikipedia link structure provides additional information about which disambiguations are compatible. In this work we analyze approaches that utilize this information to arrive at coherent sets of disambiguations for a given document (which we call “global” approaches), and compare them to more traditional (local) approaches. We show that previous approaches for global disambiguation can be improved, but even then the local disambiguation provides a baseline which is very hard to beat.",
"title": ""
},
{
"docid": "fc5b1cb9427eb0b252eb15bcdaed4ece",
"text": "In this paper, the deployment of an unmanned aerial vehicle (UAV) as a flying base station used to provide the fly wireless communications to a given geographical area is analyzed. In particular, the coexistence between the UAV, that is transmitting data in the downlink, and an underlaid device-to-device (D2D) communication network is considered. For this model, a tractable analytical framework for the coverage and rate analysis is derived. Two scenarios are considered: a static UAV and a mobile UAV. In the first scenario, the average coverage probability and the system sum-rate for the users in the area are derived as a function of the UAV altitude and the number of D2D users. In the second scenario, using the disk covering problem, the minimum number of stop points that the UAV needs to visit in order to completely cover the area is computed. Furthermore, considering multiple retransmissions for the UAV and D2D users, the overall outage probability of the D2D users is derived. Simulation and analytical results show that, depending on the density of D2D users, the optimal values for the UAV altitude, which lead to the maximum system sum-rate and coverage probability, exist. Moreover, our results also show that, by enabling the UAV to intelligently move over the target area, the total required transmit power of UAV while covering the entire area, can be minimized. Finally, in order to provide full coverage for the area of interest, the tradeoff between the coverage and delay, in terms of the number of stop points, is discussed.",
"title": ""
},
{
"docid": "0c12fd61acd9e02be85b97de0cc79801",
"text": "As deep nets are increasingly used in applications suited for mobile devices, a fundamental dilemma becomes apparent: the trend in deep learning is to grow models to absorb everincreasing data set sizes; however mobile devices are designed with very little memory and cannot store such large models. We present a novel network architecture, HashedNets, that exploits inherent redundancy in neural networks to achieve drastic reductions in model sizes. HashedNets uses a low-cost hash function to randomly group connection weights into hash buckets, and all connections within the same hash bucket share a single parameter value. These parameters are tuned to adjust to the HashedNets weight sharing architecture with standard backprop during training. Our hashing procedure introduces no additional memory overhead, and we demonstrate on several benchmark data sets that HashedNets shrink the storage requirements of neural networks substantially while mostly preserving generalization performance.",
"title": ""
},
{
"docid": "71c4e6e63eaeec06b5e8690c1a915c81",
"text": "Measuring the similarity between words, sentences, paragraphs and documents is an important component in various tasks such as information retrieval, document clustering, word-sense disambiguation, automatic essay scoring, short answer grading, machine translation and text summarization. This survey discusses the existing works on text similarity through partitioning them into three approaches; String-based, Corpus-based and Knowledge-based similarities. Furthermore, samples of combination between these similarities are presented.",
"title": ""
},
{
"docid": "d930041c5d2a946c5d0cfedbdc5bf52e",
"text": "In this article, we provide the nontechnical reader with a fundamental understanding of the components of virtual reality (VR) and a thorough discussion of the role VR has played in social science. First, we provide a brief overview of the hardware and equipment used to create VR and review common elements found within the virtual environment that may be of interest to social scientists, such as virtual humans and interactive, multisensory feedback. Then, we discuss the role of VR in existing social scientific research. Specifically, we review the literature on the study of VR as an object, wherein we discuss the effects of the technology on human users; VR as an application, wherein we consider real-world applications in areas such as medicine and education; and VR as a method, wherein we provide a comprehensive outline of studies in which VR technologies are used to study phenomena that have traditionally been studied in physical settings, such as nonverbal behavior and social interaction. We then present a content analysis of the literature, tracking the trends for this research over the last two decades. Finally, we present some possibilities for future research for interested social scientists.",
"title": ""
},
{
"docid": "2bb31e4565edc858453af69296a67ee6",
"text": "OBJECTIVES\nNetworks of franchised health establishments, providing a standardized set of services, are being implemented in developing countries. This article examines associations between franchise membership and family planning and reproductive health outcomes for both the member provider and the client.\n\n\nMETHODS\nRegression models are fitted examining associations between franchise membership and family planning and reproductive health outcomes at the service provider and client levels in three settings.\n\n\nRESULTS\nFranchising has a positive association with both general and family planning client volumes, and the number of family planning brands available. Similar associations with franchise membership are not found for reproductive health service outcomes. In some settings, client satisfaction is higher at franchised than other types of health establishments, although the association between franchise membership and client outcomes varies across the settings.\n\n\nCONCLUSIONS\nFranchise membership has apparent benefits for both the provider and the client, providing an opportunity to expand access to reproductive health services, although greater attention is needed to shift the focus from family planning to a broader reproductive health context.",
"title": ""
},
{
"docid": "adc03d95eea19cede1ea91aae733943b",
"text": "In this paper, we discuss the emerging application of device-free localization (DFL) using wireless sensor networks, which find people and objects in the environment in which the network is deployed, even in buildings and through walls. These networks are termed “RF sensor networks” because the wireless network itself is the sensor, using radio-frequency (RF) signals to probe the deployment area. DFL in cluttered multipath environments has been shown to be feasible, and in fact benefits from rich multipath channels. We describe modalities of measurements made by RF sensors, the statistical models which relate a person's position to channel measurements, and describe research progress in this area.",
"title": ""
},
{
"docid": "3b91e62d6e43172e68817f679dde5182",
"text": "We model the geodetically observed secular velocity field in northwestern Turkey with a block model that accounts for recoverable elastic-strain accumulation. The block model allows us to estimate internally consistent fault slip rates and locking depths. The northern strand of the North Anatolian fault zone (NAFZ) carries approximately four times as much right-lateral motion ( 24 mm/yr) as does the southern strand. In the Marmara Sea region, the data show strain accumulation to be highly localized. We find that a straight fault geometry with a shallow locking depth of 6–7 km fits the observed Global Positioning System velocities better than does a stepped fault geometry that follows the northern and eastern edges of the sea. This shallow locking depth suggests that the moment release associated with an earthquake on these faults should be smaller, by a factor of 2.3, than previously inferred assuming a locking depth of 15 km. Online material: an updated version of velocity-field data.",
"title": ""
},
{
"docid": "d3c7900e22ab8d4dd52fa12f47fbba09",
"text": "In this paper, an obstacle-surmounting-enabled lower limb exoskeleton with novel linkage joints that perfectly mimicked human motions was proposed. Currently, most lower exoskeletons that use linear actuators have a direct connection between the wearer and the controlled part. Compared to the existing joints, the novel linkage joint not only fitted better into compact chasis, but also provided greater torque when the joint was at a large bend angle. As a result, it extended the angle range of joint peak torque output. With any given power, torque was prioritized over rotational speed, because instead of rotational speed, sufficiency of torque is the premise for most joint actions. With insufficient torque, the exoskeleton will be a burden instead of enhancement to its wearer. With optimized distribution of torque among the joints, the novel linkage method may contribute to easier exoskeleton movements.",
"title": ""
},
{
"docid": "e2a39475a01eacbf3bdac1a6484e5a8e",
"text": "social systems, social search, social media, collective intelligence, wikinomics, crowd wisdom, smart mobs, mass collaboration, and human computation. The topic has been discussed extensively in books, popular press, and academia.1,5,15,23,29,35 But this body of work has considered mostly efforts in the physical world.23,29,30 Some do consider crowdsourcing systems on the Web, but only certain system types28,33 or challenges (for example, how to evaluate users12). This survey attempts to provide a global picture of crowdsourcing systems on the Web. We define and classify such systems, then describe a broad sample of systems. The sample CROWDSOURCING SYSTEMS enlist a multitude of humans to help solve a wide variety of problems. Over the past decade, numerous such systems have appeared on the World-Wide Web. Prime examples include Wikipedia, Linux, Yahoo! Answers, Mechanical Turk-based systems, and much effort is being directed toward developing many more. As is typical for an emerging area, this effort has appeared under many names, including peer production, user-powered systems, user-generated content, collaborative systems, community systems, Crowdsourcing Systems on the World-Wide Web DOI:10.1145/1924421.1924442",
"title": ""
},
{
"docid": "f415b38e6d43c8ed81ce97fd924def1b",
"text": "Collaborative filtering is one of the most successful and widely used methods of automated product recommendation in online stores. The most critical component of the method is the mechanism of finding similarities among users using product ratings data so that products can be recommended based on the similarities. The calculation of similarities has relied on traditional distance and vector similarity measures such as Pearson’s correlation and cosine which, however, have been seldom questioned in terms of their effectiveness in the recommendation problem domain. This paper presents a new heuristic similarity measure that focuses on improving recommendation performance under cold-start conditions where only a small number of ratings are available for similarity calculation for each user. Experiments using three different datasets show the superiority of the measure in new user cold-start conditions. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "02c41de2c0447eec4c5198bccdb1d414",
"text": "The paper contends that the use of cost-benefit analysis (CBA) for or against capital punishment is problematic insofar as CBA (1) commodifies and thus reduces the value of human life and (2) cannot quantify all costs and benefits. The paramount theories of punishment, retribution and utilitarianism, which are used as rationales for capital punishment, do not justify the use of cost-benefit analysis as part of that rationale. Calling on the theory of restorative justice, the paper recommends a change in the linguistic register used to describe the value of human beings. In particular, abolitionists should emphasize that human beings have essential value. INTRODUCTION Advocates of the death penalty use economics to justify the use of capital punishment. Scott Turow, an Illinois-based lawyer says it well when he comments that two arguments frequently used by death penalty advocates are that, “the death penalty is a deterrent to others and it is more cost effective than keeping an individual in jail for life” (Turow). Edward Elijas takes the point further in writing the following, “Let’s imagine for a moment there was no death penalty. The only reasonable sentence would a life sentence. This would be costly to the tax payers, not only for the cost of housing and feeding the prisoner but because of the numerous appeals which wastes man hours and money. By treating criminals in this manner, we are encouraging behavior that will result in a prison sentence. If there is no threat of death to one who commits a murder, than that person is guaranteed to be provided with a decent living environment until their next parole hearing. They are definitely not getting the punishment they deserve” (http://www.cwrl.utexas.edu/). According to the argument, whether a person convicted",
"title": ""
},
{
"docid": "b8dfe30c07f0caf46b3fc59406dbf017",
"text": "We describe an extensible approach to generating questions for the purpose of reading comprehension assessment and practice. Our framework for question generation composes general-purpose rules to transform declarative sentences into questions, is modular in that existing NLP tools can be leveraged, and includes a statistical component for scoring questions based on features of the input, output, and transformations performed. In an evaluation in which humans rated questions according to several criteria, we found that our implementation achieves 43.3% precisionat-10 and generates approximately 6.8 acceptable questions per 250 words of source text.",
"title": ""
},
{
"docid": "5bed93ba59d148b27d68ccd1cfcf986b",
"text": "Product line software engineering (PLSE) is an emerging software engineering paradigm, which guides organizations toward the development of products from core assets rather than the development of products one by one from scratch. In order to develop highly reusable core assets, PLSE must have the ability to exploit commonality and manage variability among products from a domain perspective. Feature modeling is one of the most popular domain analysis techniques, which analyzes commonality and variability in a domain to develop highly reusable core assets for a product line. Various attempts have been made to extend and apply it to the development of software product lines. However, feature modeling can be difficult and time-consuming without a precise understanding of the goals of feature modeling and the aid of practical guidelines. In this paper, we clarify the concept of features and the goals of feature modeling, and provide practical guidelines for successful product line software engineering. The authors have extensively used feature modeling in several industrial product line projects and the guidelines described in this paper are based on these experiences.",
"title": ""
},
{
"docid": "9d7c69a2d45c6f25636aba8fdf19ad2a",
"text": "BACKGROUND\nThe analysis of nasal anatomy, and especially the nasal bones including the osseocartilaginous vault, is significant for functional and aesthetic reasons.\n\n\nOBJECTIVES\nThe objective was to understand the anatomy of the nasal bones by establishing new descriptions, terms, and definitions because the existing parameters were insufficient. Adequate terminology was employed to harmonize the anthropometric and clinical measurements.\n\n\nMETHODS\nA two-part harvest technique consisting of resecting the specimen and then creating a replica of the skull was performed on 44 cadavers to obtain specific measurements.\n\n\nRESULTS\nThe nasal bones have an irregular, variable shape, and three distinct angles can be found along the dorsal profile line beginning with the nasion angle (NA), the dorsal profile angulation (DPA) and the kyphion angulation (KA). In 12% of cases, the caudal portion of the nasal bones was straight and without angulation resulting in a \"V-shape\" configuration. In 88% of cases, the caudal portion of the bone was angulated, which resulted in an \"S-shape\" nasal bone configuration. The intervening cephalic bone, nasion to sellion (N-S), represents the radix while the caudal bone, sellion to r (S-R), represents the bony dorsum.\n\n\nCONCLUSIONS\nBy standardizing and measuring existing nasal landmarks and understanding the different anatomic configurations of the nasal bones, rhinoplasty surgeons can better plan their operations within the radix and bony and osseocartilaginous vaults.",
"title": ""
}
] |
scidocsrr
|
6aea630e01bf073b07093003e93bef9e
|
Learning Like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images
|
[
{
"docid": "418a5ef9f06f8ba38e63536671d605c1",
"text": "Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned from by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.",
"title": ""
},
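To make the prior-to-posterior idea in the passage above concrete, here is a deliberately tiny sketch (not the paper's object-category model; the one-dimensional Gaussian likelihood, conjugate prior, and all numbers are assumptions for illustration): a category mean estimated from a single example, where the Bayesian posterior is shrunk toward a prior transferred from previously learned categories, while maximum likelihood relies on the lone example alone.

```python
import numpy as np

# Prior over the category mean, transferred from previously learned categories
# (hypothetical numbers): prior mean mu0 with prior variance tau0_sq.
mu0, tau0_sq = 0.0, 4.0
sigma_sq = 1.0          # assumed known observation noise variance

rng = np.random.default_rng(0)
true_mean = 2.5
x = rng.normal(true_mean, np.sqrt(sigma_sq), size=1)   # a single training example

# Maximum likelihood: just the sample mean of the (single) example.
ml_estimate = x.mean()

# Conjugate Normal-Normal posterior for the mean given n=1 observation.
n = len(x)
post_var = 1.0 / (1.0 / tau0_sq + n / sigma_sq)
post_mean = post_var * (mu0 / tau0_sq + x.sum() / sigma_sq)

print(f"single example          : {x[0]:.2f}")
print(f"ML estimate (no prior)  : {ml_estimate:.2f}")
print(f"Bayesian posterior mean : {post_mean:.2f}  (shrunk toward prior {mu0})")
print(f"posterior std           : {np.sqrt(post_var):.2f}")
```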
{
"docid": "a2f91e55b5096b86f6fa92e701c62898",
"text": "The main question we address in this paper is how to use purely textual description of categories with no training images to learn visual classifiers for these categories. We propose an approach for zero-shot learning of object categories where the description of unseen categories comes in the form of typical text such as an encyclopedia entry, without the need to explicitly defined attributes. We propose and investigate two baseline formulations, based on regression and domain adaptation. Then, we propose a new constrained optimization formulation that combines a regression function and a knowledge transfer function with additional constraints to predict the classifier parameters for new classes. We applied the proposed approach on two fine-grained categorization datasets, and the results indicate successful classifier prediction.",
"title": ""
}
] |
[
{
"docid": "b3e90fdfda5346544f769b6dd7c3882b",
"text": "Bromelain is a complex mixture of proteinases typically derived from pineapple stem. Similar proteinases are also present in pineapple fruit. Beneficial therapeutic effects of bromelain have been suggested or proven in several human inflammatory diseases and animal models of inflammation, including arthritis and inflammatory bowel disease. However, it is not clear how each of the proteinases within bromelain contributes to its anti-inflammatory effects in vivo. Previous in vivo studies using bromelain have been limited by the lack of assays to control for potential differences in the composition and proteolytic activity of this naturally derived proteinase mixture. In this study, we present model substrate assays and assays for cleavage of bromelain-sensitive cell surface molecules can be used to assess the activity of constituent proteinases within bromelain without the need for biochemical separation of individual components. Commercially available chemical and nutraceutical preparations of bromelain contain predominately stem bromelain. In contrast, the proteinase activity of pineapple fruit reflects its composition of fruit bromelain>ananain approximately stem bromelain. Concentrated bromelain solutions (>50 mg/ml) are more resistant to spontaneous inactivation of their proteolytic activity than are dilute solutions, with the proteinase stability in the order of stem bromelain>fruit bromelain approximately ananain. The proteolytic activity of concentrated bromelain solutions remains relatively stable for at least 1 week at room temperature, with minimal inactivation by multiple freeze-thaw cycles or exposure to the digestive enzyme trypsin. The relative stability of concentrated versus dilute bromelain solutions to inactivation under physiologically relevant conditions suggests that delivery of bromelain as a concentrated bolus would be the preferred method to maximize its proteolytic activity in vivo.",
"title": ""
},
{
"docid": "2f0eb4a361ff9f09bda4689a1f106ff2",
"text": "The growth of Quranic digital publishing increases the need to develop a better framework to authenticate Quranic quotes with the original source automatically. This paper aims to demonstrate the significance of the quote authentication approach. We propose an approach to verify the e-citation of the Quranic quote as compared with original texts from the Quran. In this paper, we will concentrate mainly on discussing the Algorithm to verify the fundamental text for Quranic quotes.",
"title": ""
},
{
"docid": "05a76f64a6acbcf48b7ac36785009db3",
"text": "Mixed methods research is an approach that combines quantitative and qualitative research methods in the same research inquiry. Such work can help develop rich insights into various phenomena of interest that cannot be fully understood using only a quantitative or a qualitative method. Notwithstanding the benefits and repeated calls for such work, there is a dearth of mixed methods research in information systems. Building on the literature on recent methodological advances in mixed methods research, we develop a set of guidelines for conducting mixed methods research in IS. We particularly elaborate on three important aspects of conducting mixed methods research: (1) appropriateness of a mixed methods approach; (2) development of meta-inferences (i.e., substantive theory) from mixed methods research; and (3) assessment of the quality of meta-inferences (i.e., validation of mixed methods research). The applicability of these guidelines is illustrated using two published IS papers that used mixed methods.",
"title": ""
},
{
"docid": "40059f4cd570b658726745fa7b5ecf38",
"text": "Autonomous humanoid robots need high torque actuators to be able to walk and run. One problem in this context is the heat generated. In this paper we propose to use water evaporation to improve cooling of the motors. Simulations based on thermodynamic calculations as well as measurements on real actuators show that, under the assumption of the load of a soccer game, cooling can be considerably improved with relatively small amounts of water.",
"title": ""
},
{
"docid": "aaf075f849b4e61f57aa2451cdccad70",
"text": "The spatial relation between mitochondria and endoplasmic reticulum (ER) in living HeLa cells was analyzed at high resolution in three dimensions with two differently colored, specifically targeted green fluorescent proteins. Numerous close contacts were observed between these organelles, and mitochondria in situ formed a largely interconnected, dynamic network. A Ca2+-sensitive photoprotein targeted to the outer face of the inner mitochondrial membrane showed that, upon opening of the inositol 1,4,5-triphosphate (IP3)-gated channels of the ER, the mitochondrial surface was exposed to a higher concentration of Ca2+ than was the bulk cytosol. These results emphasize the importance of cell architecture and the distribution of organelles in regulation of Ca2+ signaling.",
"title": ""
},
{
"docid": "c10bd86125db702e0839e2a3776e195b",
"text": "To solve the big topic modeling problem, we need to reduce both time and space complexities of batch latent Dirichlet allocation (LDA) algorithms. Although parallel LDA algorithms on the multi-processor architecture have low time and space complexities, their communication costs among processors often scale linearly with the vocabulary size and the number of topics, leading to a serious scalability problem. To reduce the communication complexity among processors for a better scalability, we propose a novel communication-efficient parallel topic modeling architecture based on power law, which consumes orders of magnitude less communication time when the number of topics is large. We combine the proposed communication-efficient parallel architecture with the online belief propagation (OBP) algorithm referred to as POBP for big topic modeling tasks. Extensive empirical results confirm that POBP has the following advantages to solve the big topic modeling problem: 1) high accuracy, 2) communication-efficient, 3) fast speed, and 4) constant memory usage when compared with recent state-of-the-art parallel LDA algorithms on the multi-processor architecture. Index Terms —Big topic modeling, latent Dirichlet allocation, communication complexity, multi-processor architecture, online belief propagation, power law.",
"title": ""
},
{
"docid": "3963e1a10366748bf4e52d34cc15cc0f",
"text": "Surface electromyography (sEMG) is widely used in clinical diagnosis, rehabilitation engineering and humancomputer interaction and other fields. In this paper, we use Myo armband to collect sEMG signals. Myo armband can be worn above any elbow of any arm and it can capture the bioelectric signal generated when the arm muscles move. MYO can pass of signals through its low-power Blue-tooth, and its interference is small, which makes the signal quality really good. By collecting the sEMG signals of the upper limb forearm, we extract five eigenvalues in the time domain, and use the BP neural network classification algorithm to realize the recognition of six gestures in this paper. Experimental results show that the use of MYO for gesture recognition can get a very good recognition results, it can accurately identify the six hand movements with the average recognition rate of 93%.",
"title": ""
},
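The passage above mentions five time-domain features and a BP (back-propagation) neural network but does not spell them out; a common choice in the sEMG literature is mean absolute value, RMS, waveform length, zero crossings, and slope sign changes, and those are what the sketch below assumes, together with synthetic signals in place of real Myo recordings and scikit-learn's MLPClassifier standing in for the BP network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def time_domain_features(window: np.ndarray) -> np.ndarray:
    """Five common sEMG time-domain features for one channel window."""
    diff = np.diff(window)
    mav = np.mean(np.abs(window))                 # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))           # root mean square
    wl = np.sum(np.abs(diff))                     # waveform length
    zc = np.sum(window[:-1] * window[1:] < 0)     # zero crossings
    ssc = np.sum(diff[:-1] * diff[1:] < 0)        # slope sign changes
    return np.array([mav, rms, wl, zc, ssc], dtype=float)

# Synthetic stand-in for Myo recordings: 6 gestures, 100 windows each,
# each gesture given a different amplitude/frequency so it is learnable.
rng = np.random.default_rng(42)
X, y = [], []
t = np.linspace(0, 1, 200)
for gesture in range(6):
    for _ in range(100):
        sig = (0.5 + 0.3 * gesture) * np.sin(2 * np.pi * (20 + 10 * gesture) * t)
        sig += 0.2 * rng.standard_normal(t.size)
        X.append(time_domain_features(sig))
        y.append(gesture)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2%}")
```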
{
"docid": "224287bfe0a3f7b3236b442748a59cff",
"text": "Interactive image processing techniques, along with a linear-programming-based inductive classiier, have been used to create a highly accurate system for diagnosis of breast tumors. A small fraction of a ne needle aspirate slide is selected and digitized. With an interactive interface, the user initializes active contour models, known as snakes, near the boundaries of a set of cell nuclei. The customized snakes are deformed to the exact shape of the nuclei. This allows for precise, automated analysis of nuclear size, shape and texture. Ten such features are computed for each nucleus, and the mean value, largest (or \\worst\") value and standard error of each feature are found over the range of isolated cells. After 569 images were analyzed in this fashion, diierent combinations of features were tested to nd those which best separate benign from malignant samples. Tenfold cross-validation accuracy of 97% was achieved using a single separating plane on three of the thirty features: mean texture, worst area and worst smoothness. This represents an improvement over the best diagnostic results in the medical literature. The system is currently in use at the University of Wisconsin Hospitals. The same feature set has also been utilized in the much more diicult task of predicting distant recurrence of malignancy in patients, resulting in an accuracy of 86%.",
"title": ""
},
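The three-feature, tenfold cross-validation setup in the passage above can be loosely reproduced with the Wisconsin diagnostic dataset bundled with scikit-learn; note the sketch below substitutes a linear SVM for the paper's linear-programming separating plane, so the resulting accuracy is only indicative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

data = load_breast_cancer()
names = list(data.feature_names)

# The three features highlighted in the abstract.
keep = [names.index(f) for f in ("mean texture", "worst area", "worst smoothness")]
X, y = data.data[:, keep], data.target

# A single linear separating surface, evaluated with tenfold cross-validation.
model = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```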
{
"docid": "09b008daecc4cab2de39f2e51ff11586",
"text": "Mondor's disease is a rare, self-limiting, benign process with acute presentation characterized by subcutaneous bands in several parts of the body. Penile Mondor's disease (PMD) is thrombophlebitis of the superficial dorsal vein of the penis. It is usually considered as thrombophlebitis or phlebitis of subcutaneous vessels. Some findings suggest that it might be of lymphatic origin. The chest, abdominal wall, penis, upper arm, and other parts of the body may also be involved by the disease. Although its physiopathology is not exactly known, transection of the vessel during surgery or any type of trauma such as external compression may trigger its possible development. This disease almost always limits itself. It may be associated with psychological distress and sexual incompatibility. The patients usually feel the superficial vein of the penis like a hard rope and present with complaint of pain around this hardness. Diagnosis is usually easy with physical examination but color Doppler ultrasound examination is important for differential diagnosis. Thus, a close collaboration is required between radiologist and urologist in order to determine the correct diagnosis and appropriate therapies.",
"title": ""
},
{
"docid": "cf26167180275d4feaca5c56afd0ffb1",
"text": "The polycystic ovary syndrome (PCOS) is defined as a combination of hyperandrogenism (hirsutism and acne) and anovulation (oligomenorrhea, infertility, and dysfunctional uterine bleeding), with or without the presence of polycystic ovaries on ultrasound. It represents the main endocrine disorder in the reproductive age, affecting 6% 15% of women in menacme. It is the most common cause of infertility due to anovulation, and the main source of female infertility. When in the presence of a menstrual disorder, the diagnosis of PCOS is reached in 30% 40% of patients with primary or secondary amenorrhoea and in 80% of patients with oligomenorrhea. PCOS should be diagnosed and treated early in adolescence due to reproductive, metabolic and oncological complications which may be associated with it. Treatment options include drugs, diet and lifestyle improvement.",
"title": ""
},
{
"docid": "75cb5c4c9c122d6e80419a3ceb99fd67",
"text": "Indonesian clove cigarettes (kreteks), typically have the appearance of a conventional domestic cigarette. The unique aspects of kreteks are that in addition to tobacco they contain dried clove buds (15-40%, by wt.), and are flavored with a proprietary \"sauce\". Whereas the clove buds contribute to generating high levels of eugenol in the smoke, the \"sauce\" may also contribute other potentially harmful constituents in addition to those associated with tobacco use. We measured levels of eugenol, trans-anethole (anethole), and coumarin in smoke from 33 brands of clove-flavored cigarettes (filtered and unfiltered) from five kretek manufacturers. In order to provide information for evaluating the delivery of these compounds under standard smoking conditions, a quantification method was developed for their measurement in mainstream cigarette smoke. The method allowed collection of mainstream cigarette smoke particulate matter on a Cambridge filter pad, extraction with methanol, sampling by automated headspace solid-phase microextraction, and subsequent analysis using gas chromatography/mass spectrometry. The presence of these compounds was confirmed in the smoke of kreteks using mass spectral library matching, high-resolution mass spectrometry (+/-0.0002 amu), and agreement with a relative retention time index, and native standards. We found that when kreteks were smoked according to standardized machine smoke parameters as specified by the International Standards Organization, all 33 clove brands contained levels of eugenol ranging from 2,490 to 37,900 microg/cigarette (microg/cig). Anethole was detected in smoke from 13 brands at levels of 22.8-1,030 microg/cig, and coumarin was detected in 19 brands at levels ranging from 9.2 to 215 microg/cig. These detected levels are significantly higher than the levels found in commercial cigarette brands available in the United States.",
"title": ""
},
{
"docid": "9b5207fc5beec8d2094d214cf8bfbded",
"text": "We present a novel model for the task of joint mention extraction and classification. Unlike existing approaches, our model is able to effectively capture overlapping mentions with unbounded lengths. The model is highly scalable, with a time complexity that is linear in the number of words in the input sentence and linear in the number of possible mention classes. Our model can be extended to additionally capture mention heads explicitly in a joint manner under the same time complexity. We demonstrate the effectiveness of our model through extensive experiments on standard datasets.",
"title": ""
},
{
"docid": "c508f62dfd94d3205c71334638790c54",
"text": "Financial and capital markets (especially stock markets) are considered high return investment fields, which in the same time are dominated by uncertainty and volatility. Stock market prediction tries to reduce this uncertainty and consequently the risk. As stock markets are influenced by many economical, political and even psychological factors, it is very difficult to forecast the movement of future values. Since classical statistical methods (primarily technical and fundamental analysis) are unable to deal with the non-linearity in the dataset, thus it became necessary the utilization of more advanced forecasting procedures. Financial prediction is a research active area and neural networks have been proposed as one of the most promising methods for such predictions. Artificial Neural Networks (ANNs) mimics, simulates the learning capability of the human brain. NNs are able to find accurate solutions in a complex, noisy environment or even to deal efficiently with partial information. In the last decade the ANNs have been widely used for predicting financial markets, because they are capable to detect and reproduce linear and nonlinear relationships among a set of variables. Furthermore they have a potential of learning the underlying mechanics of stock markets, i.e. to capture the complex dynamics and non-linearity of the stock market time series. In this paper, study we will get acquainted with some financial time series analysis concepts and theories linked to stock markets, as well as with the neural networks based systems and hybrid techniques that were used to solve several forecasting problems concerning the capital, financial and stock markets. Putting the foregoing experimental results to use, we will develop, implement a multilayer feedforward neural network based financial time series forecasting system. Thus, this system will be used to predict the future index values of major US and European stock exchanges and the evolution of interest rates as well as the future stock price of some US mammoth companies (primarily from IT branch).",
"title": ""
},
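As a loose illustration of the multilayer feedforward forecasting idea in the passage above (not the system developed in the paper; the synthetic price series, lag window of 10, and network size are assumptions chosen for the sketch), next-day values are predicted from lagged values with scikit-learn's MLPRegressor and compared against a naive last-value baseline:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)

# Synthetic "index" series: trend + cycle + noise (stand-in for real market data).
n = 1000
t = np.arange(n)
series = 100 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 1, n)

# Build supervised pairs: predict the next value from the previous `lags` values.
lags = 10
X = np.array([series[i:i + lags] for i in range(n - lags)])
y = series[lags:]

split = int(0.8 * len(X))                      # chronological split, no shuffling
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
net.fit(X_tr, y_tr)
pred = net.predict(X_te)
print(f"network MAE            : {mean_absolute_error(y_te, pred):.3f}")
print(f"naive (last value) MAE : {mean_absolute_error(y_te, X_te[:, -1]):.3f}")
```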
{
"docid": "ce167e13e5f129059f59c8e54b994fd4",
"text": "Critical research has emerged as a potentially important stream in information systems research, yet the nature and methods of critical research are still in need of clarification. While criteria or principles for evaluating positivist and interpretive research have been widely discussed, criteria or principles for evaluating critical social research are lacking. Therefore, the purpose of this paper is to propose a set of principles for the conduct of critical research. This paper has been accepted for publication in MIS Quarterly and follows on from an earlier piece that suggested a set of principles for interpretive research (Klein and Myers, 1999). The co-author of this paper is Heinz Klein.",
"title": ""
},
{
"docid": "427c5f5825ca06350986a311957c6322",
"text": "Machine learning based system are increasingly being used for sensitive tasks such as security surveillance, guiding autonomous vehicle, taking investment decisions, detecting and blocking network intrusion and malware etc. However, recent research has shown that machine learning models are venerable to attacks by adversaries at all phases of machine learning (e.g., training data collection, training, operation). All model classes of machine learning systems can be misled by providing carefully crafted inputs making them wrongly classify inputs. Maliciously created input samples can affect the learning process of a ML system by either slowing the learning process, or affecting the performance of the learned model or causing the system make error only in attacker’s planned scenario. Because of these developments, understanding security of machine learning algorithms and systems is emerging as an important research area among computer security and machine learning researchers and practitioners. We present a survey of this emerging area.",
"title": ""
},
{
"docid": "883042a6004a5be3865da51da20fa7c9",
"text": "Green Mining is a field of MSR that studies software energy consumption and relies on software performance data. Unfortunately there is a severe lack of publicly available software power use performance data. This means that green mining researchers must generate this data themselves by writing tests, building multiple revisions of a product, and then running these tests multiple times (10+) for each software revision while measuring power use. Then, they must aggregate these measurements to estimate the energy consumed by the tests for each software revision. This is time consuming and is made more difficult by the constraints of mobile devices and their OSes. In this paper we propose, implement, and demonstrate Green Miner: the first dedicated hardware mining software repositories testbed. The Green Miner physically measures the energy consumption of mobile devices (Android phones) and automates the testing of applications, and the reporting of measurements back to developers and researchers. The Green Miner has already produced valuable results for commercial Android application developers, and has been shown to replicate other power studies' results.",
"title": ""
},
{
"docid": "4a21e3015f4fb63f25fd214eaa68ed87",
"text": "We describe our submission to the Brain Tumor Segmentation Challenge (BraTS) at MICCAI 2013. This segmentation approach is based on similarities between multi-channel patches. After patches are extracted from several MR channels for a test case, similar patches are found in training images for which label maps are known. These labels maps are then combined to result in a segmentation map for the test case. The labelling is performed, in a leave-one-out scheme, for each case of a publicly available training set, which consists of 30 real cases (20 highgrade gliomas, 10 low-grade gliomas) and 50 synthetic cases (25 highgrade gliomas, 25 low-grade gliomas). Promising results are shown on the training set, and we believe this algorithm would perform favourably well in comparison to the state of the art on a testing set.",
"title": ""
},
{
"docid": "d8b3cd4a65e02e451c020319fc091cfa",
"text": "This paper describes an experiment in which we try to automatically correct mistakes in grammatical agreement in English to Czech MT outputs. We perform several rule-based corrections on sentences parsed to dependency trees. We prove that it is possible to improve the MT quality of majority of the systems participating in WMT shared task. We made both automatic (BLEU) and manual evaluations.",
"title": ""
},
{
"docid": "3b9df74123b17342b6903120c16242e3",
"text": "Surgical eyebrow lift has been described by using many different open and endoscopic methods. Difficult techniques and only short time benefits oft lead to patients' complaints. We present a safe and simple temporal Z-incision technique for eyebrow lift in 37 patients. Besides simplicity and safety, our technique shows long lasting aesthetic results with hidden scars and a high rate of patient satisfaction.",
"title": ""
}
] |
scidocsrr
|
82ca8e9281cf37aa08ebe53a36663298
|
Using Personality Information in Collaborative Filtering for New Users
|
[
{
"docid": "f415b38e6d43c8ed81ce97fd924def1b",
"text": "Collaborative filtering is one of the most successful and widely used methods of automated product recommendation in online stores. The most critical component of the method is the mechanism of finding similarities among users using product ratings data so that products can be recommended based on the similarities. The calculation of similarities has relied on traditional distance and vector similarity measures such as Pearson’s correlation and cosine which, however, have been seldom questioned in terms of their effectiveness in the recommendation problem domain. This paper presents a new heuristic similarity measure that focuses on improving recommendation performance under cold-start conditions where only a small number of ratings are available for similarity calculation for each user. Experiments using three different datasets show the superiority of the measure in new user cold-start conditions. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
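The passage above argues that Pearson's correlation and cosine behave poorly when a new user shares only a few co-rated items with others; the toy sketch below (invented ratings, and not the paper's proposed heuristic measure) computes both over the co-rated items only, which is exactly where the cold-start instability comes from: two overlapping ratings already yield a "perfect" Pearson correlation.

```python
import numpy as np

def corated(u: dict, v: dict):
    """Ratings restricted to items both users have rated."""
    items = sorted(set(u) & set(v))
    return np.array([u[i] for i in items]), np.array([v[i] for i in items])

def pearson(u: dict, v: dict) -> float:
    ru, rv = corated(u, v)
    if len(ru) < 2 or ru.std() == 0 or rv.std() == 0:
        return 0.0                      # undefined on tiny overlaps -> fall back to 0
    return float(np.corrcoef(ru, rv)[0, 1])

def cosine(u: dict, v: dict) -> float:
    ru, rv = corated(u, v)
    if len(ru) == 0:
        return 0.0
    return float(ru @ rv / (np.linalg.norm(ru) * np.linalg.norm(rv)))

# Toy ratings (1-5). The "new" user has rated only two items, yet Pearson
# reports a perfect +1 correlation with the veteran user.
veteran = {"A": 5, "B": 1, "C": 4, "D": 2, "E": 5}
new_user = {"A": 5, "B": 4}

print("co-rated items:", sorted(set(veteran) & set(new_user)))
print(f"pearson = {pearson(veteran, new_user):+.3f}")
print(f"cosine  = {cosine(veteran, new_user):+.3f}")
```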
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
},
{
"docid": "f25aef35500ed74e5ef41d5e45d2e2df",
"text": "With recommender systems, users receive items recommended on the basis of their profile. New users experience the cold start problem: as their profile is very poor, the system performs very poorly. In this paper, classical new user cold start techniques are improved by exploiting the cold user data, i.e. the user data that is readily available (e.g. age, occupation, location, etc.), in order to automatically associate the new user with a better first profile. Relying on the existing α-community spaces model, a rule-based induction process is used and a recommendation process based on the \"level of agreement\" principle is defined. The experiments show that the quality of recommendations compares to that obtained after a classical new user technique, while the new user effort is smaller as no initial ratings are asked.",
"title": ""
}
] |
[
{
"docid": "75fda2fa6c35c915dede699c12f45d84",
"text": "This work presents an open-source framework called systemc-clang for analyzing SystemC models that consist of a mixture of register-transfer level, and transaction-level components. The framework statically parses mixed-abstraction SystemC models, and represents them using an intermediate representation. This intermediate representation captures the structural information about the model, and certain behavioural semantics of the processes in the model. This representation can be used for multiple purposes such as static analysis of the model, code transformations, and optimizations. We describe with examples, the key details in implementing systemc-clang, and show an example of constructing a plugin that analyzes the intermediate representation to discover opportunities for parallel execution of SystemC processes. We also experimentally evaluate the capabilities of this framework with a subset of examples from the SystemC distribution including register-transfer, and transaction-level models.",
"title": ""
},
{
"docid": "b59a2c49364f3e95a2c030d800d5f9ce",
"text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. The defective images are classified with an average accuracy rate of 96.3%.",
"title": ""
},
{
"docid": "f80430c36094020991f167aeb04f21e0",
"text": "Participants in recent discussions of AI-related issues ranging from intelligence explosion to technological unemployment have made diverse claims about the nature, pace, and drivers of progress in AI. However, these theories are rarely specified in enough detail to enable systematic evaluation of their assumptions or to extrapolate progress quantitatively, as is often done with some success in other technological domains. After reviewing relevant literatures and justifying the need for more rigorous modeling of AI progress, this paper contributes to that research program by suggesting ways to account for the relationship between hardware speed increases and algorithmic improvements in AI, the role of human inputs in enabling AI capabilities, and the relationships between different sub-fields of AI. It then outlines ways of tailoring AI progress models to generate insights on the specific issue of technological unemployment, and outlines future directions for research on AI progress.",
"title": ""
},
{
"docid": "1d3379e5e70d1fb7fa050c42805fe865",
"text": "While many recent hand pose estimation methods critically rely on a training set of labelled frames, the creation of such a dataset is a challenging task that has been overlooked so far. As a result, existing datasets are limited to a few sequences and individuals, with limited accuracy, and this prevents these methods from delivering their full potential. We propose a semi-automated method for efficiently and accurately labeling each frame of a hand depth video with the corresponding 3D locations of the joints: The user is asked to provide only an estimate of the 2D reprojections of the visible joints in some reference frames, which are automatically selected to minimize the labeling work by efficiently optimizing a sub-modular loss function. We then exploit spatial, temporal, and appearance constraints to retrieve the full 3D poses of the hand over the complete sequence. We show that this data can be used to train a recent state-of-the-art hand pose estimation method, leading to increased accuracy.",
"title": ""
},
{
"docid": "2a1c3f87821e47f5c32d10cb80505dcb",
"text": "We are developing a cardiac pacemaker with a small, cylindrical shape that permits percutaneous implantation into a fetus to treat complete heart block and consequent hydrops fetalis, which can otherwise be fatal. The device uses off-the-shelf components including a rechargeable lithium cell and a highly efficient relaxation oscillator encapsulated in epoxy and glass. A corkscrew electrode made from activated iridium can be screwed into the myocardium, followed by release of the pacemaker and a short, flexible lead entirely within the chest of the fetus to avoid dislodgement from fetal movement. Acute tests in adult rabbits demonstrated the range of electrical parameters required for successful pacing and the feasibility of successfully implanting the device percutaneously under ultrasonic imaging guidance. The lithium cell can be recharged inductively as needed, as indicated by a small decline in the pulsing rate.",
"title": ""
},
{
"docid": "9beaf6c7793633dceca0c8df775e8959",
"text": "The course, antecedents, and implications for social development of effortful control were examined in this comprehensive longitudinal study. Behavioral multitask batteries and parental ratings assessed effortful control at 22 and 33 months (N = 106). Effortful control functions encompassed delaying, slowing down motor activity, suppressing/initiating activity to signal, effortful attention, and lowering voice. Between 22 and 33 months, effortful control improved considerably, its coherence increased, it was stable, and it was higher for girls. Behavioral and parent-rated measures converged. Children's focused attention at 9 months, mothers' responsiveness at 22 months, and mothers' self-reported socialization level all predicted children's greater effortful control. Effortful control had implications for concurrent social development. Greater effortful control at 22 months was linked to more regulated anger, and at 33 months, to more regulated anger and joy and to stronger restraint.",
"title": ""
},
{
"docid": "808115043786372af3e3fb726cc3e191",
"text": "Scapy is a free and open source packet manipulation environment written in Python language. In this paper we present a Modbus extension to Scapy, and show how this environment can be used to build tools for security analysis of industrial network protocols. Our implementation can be extended to other industrial network protocols and can help security analysts to understand how these protocols work under attacks or adverse conditions.",
"title": ""
},
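As a rough sketch of how a Modbus layer can be declared in Scapy (this is not the extension presented in the paper; the field names and defaults below are chosen for readability and cover only the MBAP header plus a function code), one might write:

```python
from scapy.all import Packet, bind_layers, TCP
from scapy.fields import ShortField, ByteField, XByteField

class ModbusTCP(Packet):
    """Modbus/TCP MBAP header followed by the function code."""
    name = "ModbusTCP"
    fields_desc = [
        ShortField("trans_id", 0),     # transaction identifier
        ShortField("proto_id", 0),     # always 0 for Modbus
        ShortField("length", 2),       # bytes following this field (unit id + func)
        ByteField("unit_id", 1),       # slave / unit identifier
        XByteField("func_code", 0x03), # e.g. 0x03 = Read Holding Registers
    ]

# Let Scapy dissect anything on TCP/502 as ModbusTCP.
bind_layers(TCP, ModbusTCP, dport=502)
bind_layers(TCP, ModbusTCP, sport=502)

if __name__ == "__main__":
    # Build (not send) a read request and inspect the resulting bytes.
    pkt = ModbusTCP(trans_id=1)
    pkt.show()
    print(bytes(pkt).hex())
```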
{
"docid": "3a68936b77a49f8deeaaadee762a3435",
"text": "Online service quality is one of the key determinants of the success of online retailers. This exploratory study revealed some important findings about online service quality. First, the study identified six key online retailing service quality dimensions as perceived by online customers: reliable/prompt responses, access, ease of use, attentiveness, security, and credibility. Second, of the six, three dimensions, notably reliable/prompt responses, attentiveness, and ease of use, had significant impacts on both customers’ perceived overall service quality and their satisfaction. Third, the access dimension had a significant effect on overall service quality, but not on satisfaction. Finally, this study discovered a significantly positive relationship between overall service quality and satisfaction. Important managerial implications and recommendations are also presented.",
"title": ""
},
{
"docid": "87e44334828cd8fd1447ab5c1b125ab3",
"text": "the guidance system. The types of steering commands vary depending on the phase of flight and the type of interceptor. For example, in the boost phase the flight control system may be designed to force the missile to track a desired flight-path angle or attitude. In the midcourse and terminal phases the system may be designed to track acceleration commands to effect an intercept of the target. This article explores several aspects of the missile flight control system, including its role in the overall missile system, its subsystems, types of flight control systems, design objectives, and design challenges. Also discussed are some of APL’s contributions to the field, which have come primarily through our role as Technical Direction Agent on a variety of Navy missile programs. he flight control system is a key element that allows the missile to meet its system performance requirements. The objective of the flight control system is to force the missile to achieve the steering commands developed by",
"title": ""
},
{
"docid": "ecea888d3b2d6b9ce0a26a4af6382db8",
"text": "Business Process Management (BPM) research resulted in a plethora of methods, techniques, and tools to support the design, enactment, management, and analysis of operational business processes. This survey aims to structure these results and provides an overview of the state-of-the-art in BPM. In BPM the concept of a process model is fundamental. Process models may be used to configure information systems, but may also be used to analyze, understand, and improve the processes they describe. Hence, the introduction of BPM technology has both managerial and technical ramifications, and may enable significant productivity improvements, cost savings, and flow-time reductions. The practical relevance of BPM and rapid developments over the last decade justify a comprehensive survey.",
"title": ""
},
{
"docid": "6e8d30f3eaaf6c88dddb203c7b703a92",
"text": "searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggesstions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA, 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any oenalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.",
"title": ""
},
{
"docid": "a500af4d27774a3f36db90a79dec91c3",
"text": "This paper introduces Internet of Things (IoTs), which offers capabilities to identify and connect worldwide physical objects into a unified system. As a part of IoTs, serious concerns are raised over access of personal information pertaining to device and individual privacy. This survey summarizes the security threats and privacy concerns of IoT..",
"title": ""
},
{
"docid": "372b2aa9810ec12ebf033632cffd5739",
"text": "A simple CFD tool, coupled to a discrete surface representation and a gradient-based optimization procedure, is applied to the design of optimal hull forms and optimal arrangement of hulls for a wave cancellation multihull ship. The CFD tool, which is used to estimate the wave drag, is based on the zeroth-order slender ship approximation. The hull surface is represented by a triangulation, and almost every grid point on the surface can be used as a design variable. A smooth surface is obtained via a simplified pseudo-shell problem. The optimal design process consists of two steps. The optimal center and outer hull forms are determined independently in the first step, where each hull keeps the same displacement as the original design while the wave drag is minimized. The optimal outer-hull arrangement is determined in the second step for the optimal center and outer hull forms obtained in the first step. Results indicate that the new design can achieve a large wave drag reduction in comparison to the original design configuration.",
"title": ""
},
{
"docid": "6559d77de48d153153ce77b0e2969793",
"text": "1 This paper is an invited chapter to be published in the Handbooks in Operations Research and Management Science: Supply Chain Management, edited by Steve Graves and Ton de Kok and published by North-Holland. I would like to thank the many people that carefully read and commented on the ...rst draft of this manuscript: Ravi Anupindi, Fangruo Chen, Charles Corbett, James Dana, Ananth Iyer, Ton de Kok, Yigal Gerchak, Mark Ferguson, Marty Lariviere, Serguei Netessine, Ediel Pinker, Nils Rudi, Sridhar Seshadri, Terry Taylor and Kevin Weng. I am, of course, responsible for all remaining errors. Comments, of course, are still quite welcomed.",
"title": ""
},
{
"docid": "08b8c184ff2230b0df2c0f9b4e3f7840",
"text": "We present an augmented reality magic mirror for teaching anatomy. The system uses a depth camera to track the pose of a user standing in front of a large display. A volume visualization of a CT dataset is augmented onto the user, creating the illusion that the user can look into his body. Using gestures, different slices from the CT and a photographic dataset can be selected for visualization. In addition, the system can show 3D models of organs, text information and images about anatomy. For interaction with this data we present a new interaction metaphor that makes use of the depth camera. The visibility of hands and body is modified based on the distance to a virtual interaction plane. This helps the user to understand the spatial relations between his body and the virtual interaction plane.",
"title": ""
},
{
"docid": "67ea7e099e60379042e6656897b0fbc3",
"text": "This article describes a case study on MuseUs, a pervasive serious game for use in museums, running as a smartphone app. During the museum visit, players are invited to create their own exposition and are guided by the application in doing so. The aim is to provide a learning effect during a visit to a museum exhibition. Central to the MuseUs experience is that it does not necessitate a predefined path trough the museum and that it does not draw the attention away from the exposition itself. Also, the application stimulates the visitor to look at cultural heritage elements in a different way, permitting the construction of personal narratives while creating a personal exposition. Using a methodology derived from action research, we present recommendations for the design of similar applications and conclude by proposing a high-level architecture for pervasive serious games applied to cultural heritage.",
"title": ""
},
{
"docid": "1ab0308539bc6508b924316b39a963ca",
"text": "Daily wafer fabrication in semiconductor foundry depends on considerable metrology operations for tool-quality and process-quality assurance. The metrology operations required a lot of metrology tools, which increase FAB's investment. Also, these metrology operations will increase cycle time of wafer process. Metrology operations do not bring any value added to wafer but only quality assurance. This article provides a new method denoted virtual metrology (VM) to utilize sensor data collected from 300 mm FAB's tools to forecast quality data of wafers and tools. This proposed method designs key steps to establish a VM control model based on neural networks and to develop and deploy applications following SEMI EDA (equipment data acquisition) standards.",
"title": ""
},
{
"docid": "2319e5f20b03abe165b7715e9b69bac5",
"text": "Cloud networking imposes new requirements in terms of connection resiliency and throughput among virtual machines, hypervisors and users. A promising direction is to exploit multipath communications, yet existing protocols have a so limited scope that performance improvements are often unreachable. Generally, multipathing adds signaling overhead and in certain conditions may in fact decrease throughput due to packet arrival disorder. At the transport layer, the most promising protocol is Multipath TCP (MPTCP), a backward compatible TCP extension allowing to balance the load on several TCP subflows, ideally following different physical paths, to maximize connection throughput. Current implementations create a full mesh between hosts IPs, which can be suboptimal. For situation when at least one end-point network is multihomed, we propose to enhance its subflow creation mechanism so that MPTCP creates an adequate number of subflows considering the underlying path diversity offered by an IP-in-IP mapping protocol, the Location/Identifier Separation Protocol (LISP). We defined and implemented a cross-layer cooperation module between MPTCP and LISP, leading to an improved version of MPTCP we name Augmented MPTCP (A-MPTCP). We evaluated A-MPTCP for a realistic Cloud access use-case scenario involving one multi-homed data-center. Results from a large-scale test bed show us that A-MPTCP can halve the transfer times with the simple addition of one additional LIS-Penabled MPTCP subflow, hence showing promising performance for Cloud communications between multi-homed users and multihomed data-centers.",
"title": ""
},
{
"docid": "26d0809a2c8ab5d5897ca43c19fc2b57",
"text": "This study outlines a simple 'Profilometric' method for measuring the size and function of the wrinkles. Wrinkle size was measured in relaxed conditions and the representative parameters were considered to be the mean 'Wrinkle Depth', the mean 'Wrinkle Area', the mean 'Wrinkle Volume', and the mean 'Wrinkle Tissue Reservoir Volume' (WTRV). These parameters were measured in the wrinkle profiles under relaxed conditions. The mean 'Wrinkle to Wrinkle Distance', which measures the distance between two adjacent wrinkles, is an accurate indicator of the muscle relaxation level during replication. This parameter, identified as the 'Muscle Relaxation Level Marker', and its reduction are related to increased muscle tone or contraction and vice versa. The mean Wrinkle to Wrinkle Distance is very important in experiments where the effectiveness of an anti-wrinkle preparation is tested. Thus, the correlative wrinkles' replicas, taken during follow up in different periods, are only those that show the same mean Wrinkle to Wrinkle Distance. The wrinkles' functions were revealed by studying the morphological changes of the wrinkles and their behavior during relaxed conditions, under slight increase of muscle tone and under maximum wrinkling. Facial wrinkles are not a single groove, but comprise an anatomical and functional unit (the 'Wrinkle Unit') along with the surrounding skin. This Wrinkle Unit participates in the functions of a central neuro-muscular system of the face responsible for protection, expression, and communication. Thus, the Wrinkle Unit, the superficial musculoaponeurotic system (superficial fascia of the face), the underlying muscles controlled by the CNS and Psyche, are considered to be a 'Functional Psycho-Neuro-Muscular System of the Face for Protection, Expression and Communication'. The three major functions of this system exerted in the central part of the face and around the eyes are: (1) to open and close the orifices (eyes, nose, and mouth), contributing to their functions; (2) to protect the eyes from sun, foreign bodies, etc.; (3) to contribute to facial expression, reflecting emotions (real, pretended, or theatrical) during social communication. These functions are exercised immediately and easily, without any opposition ('Wrinkling Ability') because of the presence of the Wrinkle Unit that gives (a) the site of refolding (the wrinkle is a waiting fold, ready to respond quickly at any moment for any skin mobility need) and (b) the appropriate skin tissue for extension or compression (this reservoir of tissue is measured by the parameter of WTRV). The Wrinkling Ability of a skin area is linked to the wrinkle's functions and can be measured by the parameter of 'Skin Tissue Volume Compressed around the Wrinkle' in mm(3) per 30 mm wrinkle during maximum wrinkling. The presence of wrinkles is a sign that the skin's 'Recovery Ability' has declined progressively with age. The skin's Recovery Ability is linked to undesirable cosmetic effects of ageing and wrinkling. This new Profilometric method can be applied in studies where the effectiveness of anti-wrinkle preparations or the cosmetic results of surgery modalities are tested, as well as in studies focused on the functional physiology of the Wrinkle Unit.",
"title": ""
}
] |
scidocsrr
|
6b0914a5e35da6d821753f2e7f3fa3cc
|
Constructing Unrestricted Adversarial Examples with Generative Models
|
[
{
"docid": "17611b0521b69ad2b22eeadc10d6d793",
"text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"title": ""
},
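The attacks described in the passage above are tailored, optimization-based ones; as a much simpler illustration of gradient-based adversarial example generation, here is a fast gradient sign method (FGSM) sketch in PyTorch. FGSM is a different and weaker attack than the three described, and the model below is an untrained stand-in, so the "adversarial" label is only illustrative.

```python
import torch
import torch.nn as nn

# Untrained stand-in classifier (the real setting would use a trained network).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm(model, x, label, eps):
    """One-step L-infinity attack: move each pixel by eps along the loss gradient sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), label)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)          # stay in the valid image range
    return x_adv.detach()

x = torch.rand(1, 1, 28, 28)                   # fake "image"
label = torch.tensor([3])
x_adv = fgsm(model, x, label, eps=0.1)

print("max pixel change :", (x_adv - x).abs().max().item())
print("clean prediction :", model(x).argmax(dim=1).item())
print("adv   prediction :", model(x_adv).argmax(dim=1).item())
```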
{
"docid": "3a7f3e75a5d534f6475c40204ba2403f",
"text": "In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images and, at inference time, finds a close output to a given image. This output will not contain the adversarial changes and is fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies.",
"title": ""
},
{
"docid": "88a1549275846a4fab93f5727b19e740",
"text": "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.",
"title": ""
}
] |
[
{
"docid": "6042afa9c75aae47de19b80ece21932c",
"text": "In this paper, a fault diagnostic system in a multilevel-inverter using a neural network is developed. It is difficult to diagnose a multilevel-inverter drive (MLID) system using a mathematical model because MLID systems consist of many switching devices and their system complexity has a nonlinear factor. Therefore, a neural network classification is applied to the fault diagnosis of a MLID system. Five multilayer perceptron (MLP) networks are used to identify the type and location of occurring faults from inverter output voltage measurement. The neural network design process is clearly described. The classification performance of the proposed network between normal and abnormal condition is about 90%, and the classification performance among fault features is about 85%. Thus, by utilizing the proposed neural network fault diagnostic system, a better understanding about fault behaviors, diagnostics, and detections of a multilevel inverter drive system can be accomplished. The results of this analysis are identified in percentage tabular form of faults and switch locations",
"title": ""
},
{
"docid": "96d8971bf4a8d18f4471019796348e1b",
"text": "Most wired active electrodes reported so far have a gain of one and require at least three wires. This leads to stiff cables, large connectors and additional noise for the amplifier. The theoretical advantages of amplifying the signal on the electrodes right from the source has often been described, however, rarely implemented. This is because a difference in the gain of the electrodes due to component tolerances strongly limits the achievable common mode rejection ratio (CMRR). In this paper, we introduce an amplifier for bioelectric events where the major part of the amplification (40 dB) is achieved on the electrodes to minimize pick-up noise. The electrodes require only two wires of which one can be used for shielding, thus enabling smaller connecters and smoother cables. Saturation of the electrodes is prevented by a dc-offset cancelation scheme with an active range of /spl plusmn/250 mV. This error feedback simultaneously allows to measure the low frequency components down to dc. This enables the measurement of slow varying signals, e.g., the change of alertness or the depolarization before an epileptic seizure normally not visible in a standard electroencephalogram (EEG). The amplifier stage provides the necessary supply current for the electrodes and generates the error signal for the feedback loop. The amplifier generates a pseudodifferential signal where the amplified bioelectric event is present on one lead, but the common mode signal is present on both leads. Based on the pseudodifferential signal we were able to develop a new method to compensate for a difference in the gain of the active electrodes which is purely software based. The amplifier system is then characterized and the input referred noise as well as the CMRR are measured. For the prototype circuit the CMRR evaluated to 78 dB (without the driven-right-leg circuit). The applicability of the system is further demonstrated by the recording of an ECG.",
"title": ""
},
{
"docid": "ae1109343879d05eaa4b524e4f5d92f3",
"text": "Implantable devices, often dependent on software, save countless lives. But how secure are they?",
"title": ""
},
{
"docid": "06731beb8a4563ed89338b4cba88d1df",
"text": "It has been almost five years since the ISO adopted a standard for measurement of image resolution of digital still cameras using slanted-edge gradient analysis. The method has also been applied to the spatial frequency response and MTF of film and print scanners, and CRT displays. Each of these applications presents challenges to the use of the method. Previously, we have described causes of both bias and variation error in terms of the various signal processing steps involved. This analysis, when combined with observations from practical systems testing, has suggested improvements and interpretation of results. Specifically, refinements in data screening for signal encoding problems, edge feature location and slope estimation, and noise resilience will be addressed.",
"title": ""
},
{
"docid": "f7276b8fee4bc0633348ce64594817b2",
"text": "Meta-modelling is at the core of Model-Driven Engineering, where it is used for language engineering and domain modelling. The OMG’s Meta-Object Facility is the standard framework for building and instantiating meta-models. However, in the last few years, several researchers have identified limitations and rigidities in such scheme, most notably concerning the consideration of only two meta-modelling levels at the same time. In this paper we present MetaDepth, a novel framework that supports a dual linguistic/ontological instantiation and permits building systems with an arbitrary number of meta-levels through deep meta-modelling. The framework implements advanced modelling concepts allowing the specification and evaluation of derived attributes and constraints across multiple meta-levels, linguistic extensions of ontological instance models, transactions, and hosting different constraint and action languages.",
"title": ""
},
{
"docid": "8ffaf2a272bc7e52baf3443e9fcd136d",
"text": "Maturity models have become a common tool for organisations to assess their capabilities in a variety of domains. However, for fields that have not yet been researched thoroughly, it can be difficult to create and evolve a maturity model that features all the important aspects in that field. It takes time and many iterative improvements for a maturity model to come of age. This is the case for Green ICT maturity models, whose aim is typically to either provide insight on the important aspects an organisation or a researcher should take into account when trying to improve the social or environmental impact of ICT, or to assist in the auditing of such aspects. In fact, when we were commissioned a comprehensive ICT-sustainability auditing for Utrecht University, we not only faced the need of selecting a Green ICT maturity model, but also to ensure that it covered as many organisational aspects as possible, extending the model if needed. This paper reports on the comparison we carried out of several Green ICT maturity models, how we extended our preferred model with needed constructs, and how we applied the resulting model during the ICT-sustainability auditing.",
"title": ""
},
{
"docid": "6834abfb692dbfe6d629f4153a873d85",
"text": "Wikidata is a free and open knowledge base from the Wikimedia Foundation, that not only acts as a central storage of structured data for other projects of the organization, but also for a growing array of information systems, including search engines. Like Wikipedia, Wikidata’s content can be created and edited by anyone; which is the main source of its strength, but also allows for malicious users to vandalize it, risking the spreading of misinformation through all the systems that rely on it as a source of structured facts. Our task at the WSDM Cup 2017 was to come up with a fast and reliable prediction system that narrows down suspicious edits for human revision [8]. Elaborating on previous works by Heindorf et al. we were able to outperform all other contestants, while incorporating new interesting features, unifying the programming language used to only Python and refactoring the feature extractor into a simpler and more compact code base.",
"title": ""
},
{
"docid": "5ecde325c3d01dc62bc179bc21fc8a0d",
"text": "Rapid access to situation-sensitive data through social media networks creates new opportunities to address a number of real-world problems. Damage assessment during disasters is a core situational awareness task for many humanitarian organizations that traditionally takes weeks and months. In this work, we analyze images posted on social media platforms during natural disasters to determine the level of damage caused by the disasters. We employ state-of-the-art machine learning techniques to perform an extensive experimentation of damage assessment using images from four major natural disasters. We show that the domain-specific fine-tuning of deep Convolutional Neural Networks (CNN) outperforms other state-of-the-art techniques such as Bag-of-Visual-Words (BoVW). High classification accuracy under both event-specific and cross-event test settings demonstrate that the proposed approach can effectively adapt deep-CNN features to identify the severity of destruction from social media images taken after a disaster strikes.",
"title": ""
},
{
"docid": "d78c6c3ed642e04263de583f2bbebcf8",
"text": "This letter presents an omnidirectional horizontally polarized planar printed loop-antenna using left-handed CL loading with 50-Omega input impedance. The antenna has a one wavelength circumference and gives an omnidirectional pattern in the plane of the loop, whilst working in an n = 0 mode. In contrast, a conventional right-handed loop, with the same dimensions, has a figure of eight pattern in the plane of the loop. The antenna is compared with other right-handed periodically loading loop antennas and shown to have the best efficiency and is much easier to match. Design details and simulated results are presented. The concept significantly extends the design degrees of freedom for loop antennas.",
"title": ""
},
{
"docid": "6d21d5da7cd3bf0a52b57307831f08d2",
"text": "This paper presents a broadband, dual-polarized base station antenna element by reasonably designing two orthogonal symmetrical dipole, four loading cylinders, balun, feed patches, specific shape reflector and plastic fasteners. Coupling feed is adopted to avoid the direct connection between the feed cables and the dipoles. The antenna element matches well in the frequency range of 1.7-2.7 GHz and the return loss (RL) S11<;-15 dB and the isolation S21<; -30 dB. Low cross-polarization, high front-back ratio (>25dB) and stable half-power beam width (HPBW) with 65±5° are also achieved. The proposed antenna element covers the whole long term evolution (LTE) band and is backward compatible with 3G and 2G bands.",
"title": ""
},
{
"docid": "fc9193f15f6e96043271302be917f2c7",
"text": "In this article we introduce the main notions of our core ontology for the robotics and automation field, one of first results of the newly formed IEEE-RAS Working Group, named Ontologies for Robotics and Automation. It aims to provide a common ground for further ontology development in Robotics and Automation. Furthermore, we will discuss the main core ontology definitions as well as the ontology development process employed.",
"title": ""
},
{
"docid": "621d66aeff489c65eb9877270cb86b5f",
"text": "Electronic customer relationship management (e-CRM) emerges from the Internet and Web technology to facilitate the implementation of CRM. It focuses on Internet- or Web-based interaction between companies and their customers. Above all, e-CRM enables service sectors to provide appropriate services and products to satisfy the customers so as to retain customer royalty and enhance customer profitability. This research is to explore the key research issues about e-CRM performance influence for service sectors in Taiwan. A research model is proposed based on the widely applied technology-organization-environment (TOE) framework. Survey data from the questionnaire are collected to empirically assess our research model.",
"title": ""
},
{
"docid": "ca0f1c0be79d9993ea94f77fd46c0921",
"text": "We have established methods to evaluate key properties that are needed to commercialize polyelectrolyte membranes for fuel cell electric vehicles such as water diffusion, gas permeability, and mechanical strength. These methods are based on coarse-graining models. For calculating water diffusion and gas permeability through the membranes, the dissipative particle dynamics–Monte Carlo approach was applied, while mechanical strength of the hydrated membrane was simulated by coarse-grained molecular dynamics. As a result of our systematic search and analysis, we can now grasp the direction necessary to improve water diffusion, gas permeability, and mechanical strength. For water diffusion, a map that reveals the relationship between many kinds of molecular structures and diffusion constants was obtained, in which the direction to enhance the diffusivity by improving membrane structure can be clearly seen. In order to achieve high mechanical strength, the molecular structure should be such that the hydrated membrane contains narrow water channels, but these might decrease the proton conductivity. Therefore, an optimal design of the polymer structure is needed, and the developed models reviewed here make it possible to optimize these molecular structures.",
"title": ""
},
{
"docid": "57af881dbb159dae0966472473539011",
"text": "We present in this paper our system developed for SemEval 2015 Shared Task 2 (2a English Semantic Textual Similarity, STS, and 2c Interpretable Similarity) and the results of the submitted runs. For the English STS subtask, we used regression models combining a wide array of features including semantic similarity scores obtained from various methods. One of our runs achieved weighted mean correlation score of 0.784 for sentence similarity subtask (i.e., English STS) and was ranked tenth among 74 runs submitted by 29 teams. For the interpretable similarity pilot task, we employed a rule-based approach blended with chunk alignment labeling and scoring based on semantic similarity features. Our system for interpretable text similarity was among the top three best performing systems.",
"title": ""
},
{
"docid": "8f1e3444c073a510df1594dc88d24b6b",
"text": "Purpose – The purpose of this paper is to provide industrial managers with insight into the real-time progress of running processes. The authors formulated a periodic performance prediction algorithm for use in a proposed novel approach to real-time business process monitoring. Design/methodology/approach – In the course of process executions, the final performance is predicted probabilistically based on partial information. Imputation method is used to generate probable progresses of ongoing process and Support Vector Machine classifies the performances of them. These procedures are periodically iterated along with the real-time progress in order to describe the ongoing status. Findings – The proposed approach can describe the ongoing status as the probability that the process will be executed continually and terminated as the identical result. Furthermore, before the actual occurrence, a proactive warning can be provided for implicit notification of eventualities if the probability of occurrence of the given outcome exceeds the threshold. Research limitations/implications – The performance of the proactive warning strategy was evaluated only for accuracy and proactiveness. However, the process will be improved by additionally considering opportunity costs and benefits from actual termination types and their warning errors. Originality/value – Whereas the conventional monitoring approaches only classify the already occurred result of a terminated instance deterministically, the proposed approach predicts the possible results of an ongoing instance probabilistically over entire monitoring periods. As such, the proposed approach can provide the real-time indicator describing the current capability of ongoing process.",
"title": ""
},
{
"docid": "07ff0274408e9ba5d6cd2b1a2cb7cbf8",
"text": "Though tremendous strides have been made in object recognition, one of the remaining open challenges is detecting small objects. We explore three aspects of the problem in the context of finding small faces: the role of scale invariance, image resolution, and contextual reasoning. While most recognition approaches aim to be scale-invariant, the cues for recognizing a 3px tall face are fundamentally different than those for recognizing a 300px tall face. We take a different approach and train separate detectors for different scales. To maintain efficiency, detectors are trained in a multi-task fashion: they make use of features extracted from multiple layers of single (deep) feature hierarchy. While training detectors for large objects is straightforward, the crucial challenge remains training detectors for small objects. We show that context is crucial, and define templates that make use of massively-large receptive fields (where 99% of the template extends beyond the object of interest). Finally, we explore the role of scale in pre-trained deep networks, providing ways to extrapolate networks tuned for limited scales to rather extreme ranges. We demonstrate state-of-the-art results on massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when compared to prior art on WIDER FACE, our results reduce error by a factor of 2 (our models produce an AP of 82% while prior art ranges from 29-64%).",
"title": ""
},
{
"docid": "79414d5ba6a202bf52d26a74caff4784",
"text": "The Co-Training algorithm uses unlabeled examples in multiple views to bootstrap classifiers in each view, typically in a greedy manner, and operating under assumptions of view-independence and compatibility. In this paper, we propose a Co-Regularization framework where classifiers are learnt in each view through forms of multi-view regularization. We propose algorithms within this framework that are based on optimizing measures of agreement and smoothness over labeled and unlabeled examples. These algorithms naturally extend standard regularization methods like Support Vector Machines (SVM) and Regularized Least squares (RLS) for multi-view semi-supervised learning, and inherit their benefits and applicability to high-dimensional classification problems. An empirical investigation is presented that confirms the promise of this approach.",
"title": ""
},
{
"docid": "63af822cd877b95be976f990b048f90c",
"text": "We propose a method for generating classifier ensembles based on feature extraction. To create the training data for a base classifier, the feature set is randomly split into K subsets (K is a parameter of the algorithm) and principal component analysis (PCA) is applied to each subset. All principal components are retained in order to preserve the variability information in the data. Thus, K axis rotations take place to form the new features for a base classifier. The idea of the rotation approach is to encourage simultaneously individual accuracy and diversity within the ensemble. Diversity is promoted through the feature extraction for each base classifier. Decision trees were chosen here because they are sensitive to rotation of the feature axes, hence the name \"forest\". Accuracy is sought by keeping all principal components and also using the whole data set to train each base classifier. Using WEKA, we examined the rotation forest ensemble on a random selection of 33 benchmark data sets from the UCI repository and compared it with bagging, AdaBoost, and random forest. The results were favorable to rotation forest and prompted an investigation into diversity-accuracy landscape of the ensemble models. Diversity-error diagrams revealed that rotation forest ensembles construct individual classifiers which are more accurate than these in AdaBoost and random forest, and more diverse than these in bagging, sometimes more accurate as well",
"title": ""
},
{
"docid": "3f8e6ebe83ba2d4bf3a1b4ab5044b6e4",
"text": "-This paper discusses the ways in which automation of industrial processes may expand rather than eliminate problems with the human operator. Some comments will be made on methods of alleviating these problems within the \"classic' approach of leaving the operator with responsibility for abnormal conditions, and on the potential for continued use of the human operator for on-line decision-making within human-computer collaboration. Irony: combination of circumstances, the result of which is the direct opposite of what might be expected. Paradox: seemingly absurd though perhaps really well-founded",
"title": ""
},
{
"docid": "eb59f239621dde59a13854c5e6fa9f54",
"text": "This paper presents a novel application of grammatical inference techniques to the synthesis of behavior models of software systems. This synthesis is used for the elicitation of software requirements. This problem is formulated as a deterministic finite-state automaton induction problem from positive and negative scenarios provided by an end-user of the software-to-be. A query-driven state merging algorithm (QSM) is proposed. It extends the RPNI and Blue-Fringe algorithms by allowing membership queries to be submitted to the end-user. State merging operations can be further constrained by some prior domain knowledge formulated as fluents, goals, domain properties, and models of external software components. The incorporation of domain knowledge both reduces the number of queries and guarantees that the induced model is consistent with such knowledge. The proposed techniques are implemented in the ISIS tool and practical evaluations on standard requirements engineering test cases and synthetic data illustrate the interest of this approach. Contact author: Pierre Dupont Department of Computing Science and Engineering (INGI) Université catholique de Louvain Place Sainte Barbe, 2. B-1348 Louvain-la-Neuve Belgium Email: [email protected] Phone: +32 10 47 91 14 Fax: +32 10 45 03 45",
"title": ""
}
] |
scidocsrr
|
a0a63f230fc0d5234904058c4dc87c23
|
Virtual Try-On Using Kinect and HD Camera
|
[
{
"docid": "02447ce33a1fa5f8b4f156abf5d2f746",
"text": "In this paper, we present TeleHuman, a cylindrical 3D display portal for life-size human telepresence. The TeleHuman 3D videoconferencing system supports 360 degree motion parallax as the viewer moves around the cylinder and optionally, stereoscopic 3D display of the remote person. We evaluated the effect of perspective cues on the conveyance of nonverbal cues in two experiments using a one-way telecommunication version of the system. The first experiment focused on how well the system preserves gaze and hand pointing cues. The second experiment evaluated how well the system conveys 3D body postural information. We compared 3 perspective conditions: a conventional 2D view, a 2D view with 360 degree motion parallax, and a stereoscopic view with 360 degree motion parallax. Results suggest the combined presence of motion parallax and stereoscopic cues significantly improved the accuracy with which participants were able to assess gaze and hand pointing cues, and to instruct others on 3D body poses. The inclusion of motion parallax and stereoscopic cues also led to significant increases in the sense of social presence and telepresence reported by participants.",
"title": ""
},
{
"docid": "d922dbcdd2fb86e7582a4fb78990990e",
"text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.",
"title": ""
},
{
"docid": "03d33ceac54b501c281a954e158d0224",
"text": "HoloDesk is an interactive system combining an optical see through display and Kinect camera to create the illusion that users are directly interacting with 3D graphics. A virtual image of a 3D scene is rendered through a half silvered mirror and spatially aligned with the real-world for the viewer. Users easily reach into an interaction volume displaying the virtual image. This allows the user to literally get their hands into the virtual display and to directly interact with an spatially aligned 3D virtual world, without the need for any specialized head-worn hardware or input device. We introduce a new technique for interpreting raw Kinect data to approximate and track rigid (e.g., books, cups) and non-rigid (e.g., hands, paper) physical objects and support a variety of physics-inspired interactions between virtual and real. In particular the algorithm models natural human grasping of virtual objects with more fidelity than previously demonstrated. A qualitative study highlights rich emergent 3D interactions, using hands and real-world objects. The implementation of HoloDesk is described in full, and example application scenarios explored. Finally, HoloDesk is quantitatively evaluated in a 3D target acquisition task, comparing the system with indirect and glasses-based variants.",
"title": ""
}
] |
[
{
"docid": "1498977b6e68df3eeca6e25c550a5edd",
"text": "The Raven's Progressive Matrices (RPM) test is a commonly used test of intelligence. The literature suggests a variety of problem-solving methods for addressing RPM problems. For a graduate-level artificial intelligence class in Fall 2014, we asked students to develop intelligent agents that could address 123 RPM-inspired problems, essentially crowdsourcing RPM problem solving. The students in the class submitted 224 agents that used a wide variety of problem-solving methods. In this paper, we first report on the aggregate results of those 224 agents on the 123 problems, then focus specifically on four of the most creative, novel, and effective agents in the class. We find that the four agents, using four very different problem-solving methods, were all able to achieve significant success. This suggests the RPM test may be amenable to a wider range of problem-solving methods than previously reported. It also suggests that human computation might be an effective strategy for collecting a wide variety of methods for creative tasks.",
"title": ""
},
{
"docid": "9d195abaff4bdd283ba8e331501968fb",
"text": "These days, instructors in universities and colleges take the attendance manually either by calling out individual's name or by passing around an attendance sheet for student's signature to confirm his/her presence. Using these methods is both cumbersome and time-consuming. Therefore a method of taking attendance using instructor's mobile telephone has been presented in this paper which is paperless, quick, and accurate. An application software installed in the instructor's mobile telephone enables it to query students' mobile telephone via Bluetooth connection and, through transfer of students' mobile telephones' Media Access Control (MAC) addresses to the instructor's mobile telephone, presence of the student can be confirmed. Moreover, detailed record of a student's attendance can also be generated for printing and filing, if needed.",
"title": ""
},
{
"docid": "d2abcdcdb6650c30838507ec1521b263",
"text": "Deep neural networks (DNNs) have achieved great success in solving a variety of machine learning (ML) problems, especially in the domain of image recognition. However, recent research showed that DNNs can be highly vulnerable to adversarially generated instances, which look seemingly normal to human observers, but completely confuse DNNs. These adversarial samples are crafted by adding small perturbations to normal, benign images. Such perturbations, while imperceptible to the human eye, are picked up by DNNs and cause them to misclassify the manipulated instances with high confidence. In this work, we explore and demonstrate how systematic JPEG compression can work as an effective pre-processing step in the classification pipeline to counter adversarial attacks and dramatically reduce their effects (e.g., Fast Gradient Sign Method, DeepFool). An important component of JPEG compression is its ability to remove high frequency signal components, inside square blocks of an image. Such an operation is equivalent to selective blurring of the image, helping remove additive perturbations. Further, we propose an ensemble-based technique that can be constructed quickly from a given well-performing DNN, and empirically show how such an ensemble that leverages JPEG compression can protect a model from multiple types of adversarial attacks, without requiring knowledge about the model.",
"title": ""
},
{
"docid": "36f960b37e7478d8ce9d41d61195f83a",
"text": "An effective technique in locating a source based on intersections of hyperbolic curves defined by the time differences of arrival of a signal received at a number of sensors is proposed. The approach is noniterative and gives au explicit solution. It is an approximate realization of the maximum-likelihood estimator and is shown to attain the Cramer-Rao lower bound near the small error region. Comparisons of performance with existing techniques of beamformer, sphericat-interpolation, divide and conquer, and iterative Taylor-series methods are made. The proposed technique performs significantly better than sphericalinterpolation, and has a higher noise threshold than divide and conquer before performance breaks away from the Cramer-Rao lower bound. It provides an explicit solution form that is not available in the beamformmg and Taylor-series methods. Computational complexity is comparable to spherical-interpolation but substantially less than the Taylor-series method.",
"title": ""
},
{
"docid": "adcbc47e18f83745f776dec84d09559f",
"text": "Adaptive and flexible production systems require modular and reusable software especially considering their long-term life cycle of up to 50 years. SWMAT4aPS, an approach to measure Software Maturity for automated Production Systems is introduced. The approach identifies weaknesses and strengths of various companies’ solutions for modularity of software in the design of automated Production Systems (aPS). At first, a self-assessed questionnaire is used to evaluate a large number of companies concerning their software maturity. Secondly, we analyze PLC code, architectural levels, workflows and abilities to configure code automatically out of engineering information in four selected companies. In this paper, the questionnaire results from 16 German world-leading companies in machine and plant manufacturing and four case studies validating the results from the detailed analyses are introduced to prove the applicability of the approach and give a survey of the state of the art in industry. Keywords—factory automation, automated production systems, maturity, modularity, control software, Programmable Logic Controller.",
"title": ""
},
{
"docid": "fd0dccac0689390e77a0cc1fb14e5a34",
"text": "Chromatin remodeling is a complex process shaping the nucleosome landscape, thereby regulating the accessibility of transcription factors to regulatory regions of target genes and ultimately managing gene expression. The SWI/SNF (switch/sucrose nonfermentable) complex remodels the nucleosome landscape in an ATP-dependent manner and is divided into the two major subclasses Brahma-associated factor (BAF) and Polybromo Brahma-associated factor (PBAF) complex. Somatic mutations in subunits of the SWI/SNF complex have been associated with different cancers, while germline mutations have been associated with autism spectrum disorder and the neurodevelopmental disorders Coffin–Siris (CSS) and Nicolaides–Baraitser syndromes (NCBRS). CSS is characterized by intellectual disability (ID), coarsening of the face and hypoplasia or absence of the fifth finger- and/or toenails. So far, variants in five of the SWI/SNF subunit-encoding genes ARID1B, SMARCA4, SMARCB1, ARID1A, and SMARCE1 as well as variants in the transcription factor-encoding gene SOX11 have been identified in CSS-affected individuals. ARID2 is a member of the PBAF subcomplex, which until recently had not been linked to any neurodevelopmental phenotypes. In 2015, mutations in the ARID2 gene were associated with intellectual disability. In this study, we report on two individuals with private de novo ARID2 frameshift mutations. Both individuals present with a CSS-like phenotype including ID, coarsening of facial features, other recognizable facial dysmorphisms and hypoplasia of the fifth toenails. Hence, this study identifies mutations in the ARID2 gene as a novel and rare cause for a CSS-like phenotype and enlarges the list of CSS-like genes.",
"title": ""
},
{
"docid": "8722d7864499c76f76820b5f7f0c4fc6",
"text": "This paper proposes a new scientific integration of the classical and quantum fundamentals of neuropsychotherapy. The history, theory, research, and practice of neuropsychotherapy are reviewed and updated in light of the current STEM perspectives on science, technology, engineering, and mathematics. New technology is introduced to motivate more systematic research comparing the bioelectronic amplitudes of varying states of human stress, relaxation, biofeedback, creativity, and meditation. Case studies of the neuropsychotherapy of attention span, consciousness, cognition, chirality, and dissociation along with the psychodynamics of therapeutic hypnosis and chronic post-traumatic stress disorder (PTSD) are explored. Implications of neuropsychotheraputic research for investigating relationships between activity-dependent gene expression, brain plasticity, and the quantum qualia of consciousness and cognition are discussed. Symmetry in neuropsychotherapy is related to Noether’s theorem of nature’s conservation laws for a unified theory of physics, biology, and psychology on the quantum level. Neuropsychotheraputic theory, research, and practice is conceptualized as a common yardstick for integrating the fundamentals of physics, biology, and the psychology of consciousness, cognition, and behavior at the quantum level.",
"title": ""
},
{
"docid": "fb655a622c2e299b8d7f8b85769575b4",
"text": "With the substantial development of digital technologies in multimedia, network communication and user interfaces, we are seeing an increasing number of applications of these technologies, in particular in the entertainment domain. They include computer gaming, elearning, high-definition and interactive TVs, and virtual environments. The development of these applications typically involves the integration of existing technologies as well as the development of new technologies. This Introduction summarizes latest interactive entertainment technologies and applications, and briefly highlights some potential research directions. It also introduces the seven papers that are accepted to the special issue. Hopefully, this will provide the readers some insights into future research topics in interactive entertainment technologies and applications.",
"title": ""
},
{
"docid": "8eb3b8fb9420cc27ec17aa884531fa83",
"text": "Participation has emerged as an appropriate approach for enhancing natural resources management. However, despite long experimentation with participation, there are still possibilities for improvement in designing a process of stakeholder involvement by addressing stakeholder heterogeneity and the complexity of decision-making processes. This paper provides a state-of-the-art overview of methods. It proposes a comprehensive framework to implement stakeholder participation in environmental projects, from stakeholder identification to evaluation. For each process within this framework, techniques are reviewed and practical tools proposed. The aim of this paper is to establish methods to determine who should participate, when and how. The application of this framework to one river restoration case study in Switzerland will illustrate its strengths and weaknesses.",
"title": ""
},
{
"docid": "691f5f53582ceedaa51812307778b4db",
"text": "This paper looks at how a vulnerability management (VM) process could be designed & implemented within an organization. Articles and studies about VM usually focus mainly on the technology aspects of vulnerability scanning. The goal of this study is to call attention to something that is often overlooked: a basic VM process which could be easily adapted and implemented in any part of the organization. Implementing a vulnerability management process 2 Tom Palmaers",
"title": ""
},
{
"docid": "ff71aa2caed491f9bf7b67a5377b4d66",
"text": "In this paper, we propose a hybrid architecture that combines the image modeling strengths of the bag of words framework with the representational power and adaptability of learning deep architectures. Local gradient-based descriptors, such as SIFT, are encoded via a hierarchical coding scheme composed of spatial aggregating restricted Boltzmann machines (RBM). For each coding layer, we regularize the RBM by encouraging representations to fit both sparse and selective distributions. Supervised fine-tuning is used to enhance the quality of the visual representation for the categorization task. We performed a thorough experimental evaluation using three image categorization data sets. The hierarchical coding scheme achieved competitive categorization accuracies of 79.7% and 86.4% on the Caltech-101 and 15-Scenes data sets, respectively. The visual representations learned are compact and the model's inference is fast, as compared with sparse coding methods. The low-level representations of descriptors that were learned using this method result in generic features that we empirically found to be transferrable between different image data sets. Further analysis reveal the significance of supervised fine-tuning when the architecture has two layers of representations as opposed to a single layer.",
"title": ""
},
{
"docid": "4d5119db64e4e0a31064bd22b47e2534",
"text": "Reliability and scalability of an application is dependent on how its application state is managed. To run applications at massive scale requires one to operate datastores that can scale to operate seamlessly across thousands of servers and can deal with various failure modes such as server failures, datacenter failures and network partitions. The goal of Amazon DynamoDB is to eliminate this complexity and operational overhead for our customers by offering a seamlessly scalable database service. In this talk, I will talk about how developers can build applications on DynamoDB without having to deal with the complexity of operating a large scale database.",
"title": ""
},
{
"docid": "e8e8869d74dd4667ceff63c8a24caa27",
"text": "We address the problem of recommending suitable jobs to people who are seeking a new job. We formulate this recommendation problem as a supervised machine learning problem. Our technique exploits all past job transitions as well as the data associated with employees and institutions to predict an employee's next job transition. We train a machine learning model using a large number of job transitions extracted from the publicly available employee profiles in the Web. Experiments show that job transitions can be accurately predicted, significantly improving over a baseline that always predicts the most frequent institution in the data.",
"title": ""
},
{
"docid": "6d0ba36e4371cbd9aa7d136aec11f92d",
"text": "The DNS is a fundamental service that has been repeatedly attacked and abused. DNS manipulation is a prominent case: Recursive DNS resolvers are deployed to explicitly return manipulated answers to users' queries. While DNS manipulation is used for legitimate reasons too (e.g., parental control), rogue DNS resolvers support malicious activities, such as malware and viruses, exposing users to phishing and content injection. We introduce REMeDy, a system that assists operators to identify the use of rogue DNS resolvers in their networks. REMeDy is a completely automatic and parameter-free system that evaluates the consistency of responses across the resolvers active in the network. It operates by passively analyzing DNS traffic and, as such, requires no active probing of third-party servers. REMeDy is able to detect resolvers that manipulate answers, including resolvers that affect unpopular domains. We validate REMeDy using large-scale DNS traces collected in ISP networks where more than 100 resolvers are regularly used by customers. REMeDy automatically identifies regular resolvers, and pinpoint manipulated responses. Among those, we identify both legitimate services that offer additional protection to clients, and resolvers under the control of malwares that steer traffic with likely malicious goals.",
"title": ""
},
{
"docid": "3aa58539c69d6706bc0a9ca0256cdf80",
"text": "BACKGROUND\nAcne vulgaris is a prevalent skin disorder impairing both physical and psychosocial health. This study was designed to investigate the effectiveness of photodynamic therapy (PDT) combined with minocycline in moderate to severe facial acne and influence on quality of life (QOL).\n\n\nMETHODS\nNinety-five patients with moderate to severe facial acne (Investigator Global Assessment [IGA] score 3-4) were randomly treated with PDT and minocycline (n = 48) or minocycline alone (n = 47). All patients took minocycline hydrochloride 100 mg/d for 4 weeks, whereas patients in the minocycline plus PDT group also received 4 times PDT treatment 1 week apart. IGA score, lesion counts, Dermatology Life Quality Index (DLQI), and safety evaluation were performed before treatment and at 2, 4, 6, and 8 weeks after enrolment.\n\n\nRESULTS\nThere were no statistically significant differences in characteristics between 2 treatment groups at baseline. Minocycline plus PDT treatment led to a greater mean percentage reduction from baseline in lesion counts versus minocycline alone at 8 weeks for both inflammatory (-74.4% vs -53.3%; P < .001) and noninflammatory lesions (-61.7% vs -42.4%; P < .001). More patients treated with minocycline plus PDT achieved IGA score <2 at study end (week 8: 30/48 vs 20/47; P < .05). Patients treated with minocycline plus PDT got significant lower DLQI at 8 weeks (4.4 vs 6.3; P < .001). Adverse events were mild and manageable.\n\n\nCONCLUSIONS\nCompared with minocycline alone, the combination of PDT with minocycline significantly improved clinical efficacy and QOL in moderate to severe facial acne patients.",
"title": ""
},
{
"docid": "3257f01d96bd126bd7e3d6f447e0326d",
"text": "Voice SMS is an application developed in this work that allows a user to record and convert spoken messages into SMS text message. User can send messages to the entered phone number or the number of contact from the phonebook. Speech recognition is done via the Internet, connecting to Google's server. The application is adapted to input messages in English. Used tools are Android SDK and the installation is done on mobile phone with Android operating system. In this article we will give basic features of the speech recognition and used algorithm. Speech recognition for Voice SMS uses a technique based on hidden Markov models (HMM - Hidden Markov Model). It is currently the most successful and most flexible approach to speech recognition.",
"title": ""
},
{
"docid": "cb49d71778f873d2f21df73b9e781c8e",
"text": "Many people with mental health problems do not use mental health care, resulting in poorer clinical and social outcomes. Reasons for low service use rates are still incompletely understood. In this longitudinal, population-based study, we investigated the influence of mental health literacy, attitudes toward mental health services, and perceived need for treatment at baseline on actual service use during a 6-month follow-up period, controlling for sociodemographic variables, symptom level, and a history of lifetime mental health service use. Positive attitudes to mental health care, higher mental health literacy, and more perceived need at baseline significantly predicted use of psychotherapy during the follow-up period. Greater perceived need for treatment and better literacy at baseline were predictive of taking psychiatric medication during the following 6 months. Our findings suggest that mental health literacy, attitudes to treatment, and perceived need may be targets for interventions to increase mental health service use.",
"title": ""
},
{
"docid": "c0dd3979344c5f327fe447f46c13cffc",
"text": "Clinicians and researchers often ask patients to remember their past pain. They also use patient's reports of relief from pain as evidence of treatment efficacy, assuming that relief represents the difference between pretreatment pain and present pain. We have estimated the accuracy of remembering pain and described the relationship between remembered pain, changes in pain levels and reports of relief during treatment. During a 10-week randomized controlled clinical trial on the effectiveness of oral appliances for the management of chronic myalgia of the jaw muscles, subjects recalled their pretreatment pain and rated their present pain and perceived relief. Multiple regression analysis and repeated measures analyses of variance (ANOVA) were used for data analysis. Memory of the pretreatment pain was inaccurate and the errors in recall got significantly worse with the passage of time (P < 0.001). Accuracy of recall for pretreatment pain depended on the level of pain before treatment (P < 0.001): subjects with low pretreatment pain exaggerated its intensity afterwards, while it was underestimated by those with the highest pretreatment pain. Memory of pretreatment pain was also dependent on the level of pain at the moment of recall (P < 0.001). Ratings of relief increased over time (P < 0.001), and were dependent on both present and remembered pain (Ps < 0.001). However, true changes in pain were not significantly related to relief scores (P = 0.41). Finally, almost all patients reported relief, even those whose pain had increased. These results suggest that reports of perceived relief do not necessarily reflect true changes in pain.",
"title": ""
},
{
"docid": "b0c2d9130a48fc0df8f428460b949741",
"text": "A micro-strip patch antenna for a passive radio frequency identification (RFID) tag which can operate in the ultra high frequency (UHF) range from 865 MHz to 867 MHz is presented in this paper. The proposed antenna is designed and suitable for tagging the metallic boxes in the UK and Europe warehouse environment. The design is supplemented with the simulation results. In addition, the effect of the antenna substrate thickness and the ground plane on the performance of the proposed antenna is also investigated. The study shows that there is little affect by the antenna substrate thickness on the performance.",
"title": ""
}
] |
scidocsrr
|
f08e8184637b33719f16b7ef132cd192
|
Co-Design of a CMOS Rectifier and Small Loop Antenna for Highly Sensitive RF Energy Harvesters
|
[
{
"docid": "d698ce3df2f1216b7b78237dcecb0df1",
"text": "A high-efficiency CMOS rectifier circuit for UHF RFIDs was developed. The rectifier has a cross-coupled bridge configuration and is driven by a differential RF input. A differential-drive active gate bias mechanism simultaneously enables both low ON-resistance and small reverse leakage of diode-connected MOS transistors, resulting in large power conversion efficiency (PCE), especially under small RF input power conditions. A test circuit of the proposed differential-drive rectifier was fabricated with 0.18 mu m CMOS technology, and the measured performance was compared with those of other types of rectifiers. Dependence of the PCE on the input RF signal frequency, output loading conditions and transistor sizing was also evaluated. At the single-stage configuration, 67.5% of PCE was achieved under conditions of 953 MHz, - 12.5 dBm RF input and 10 KOmega output load. This is twice as large as that of the state-of-the-art rectifier circuit. The peak PCE increases with a decrease in operation frequency and with an increase in output load resistance. In addition, experimental results show the existence of an optimum transistor size in accordance with the output loading conditions. The multi-stage configuration for larger output DC voltage is also presented.",
"title": ""
},
{
"docid": "21511302800cd18d21dbc410bec3cbb2",
"text": "We investigate theoretical and practical aspects of the design of far-field RF power extraction systems consisting of antennas, impedance matching networks and rectifiers. Fundamental physical relationships that link the operating bandwidth and range are related to technology dependent quantities like threshold voltage and parasitic capacitances. This allows us to design efficient planar antennas, coupled resonator impedance matching networks and low-power rectifiers in standard CMOS technologies (0.5-mum and 0.18-mum) and accurately predict their performance. Experimental results from a prototype power extraction system that operates around 950 MHz and integrates these components together are presented. Our measured RF power-up threshold (in 0.18-mum, at 1 muW load) was 6 muWplusmn10%, closely matching the predicted value of 5.2 muW.",
"title": ""
},
{
"docid": "4e8dbd3470028541cb53f70cefd54abd",
"text": "Design strategy and efficiency optimization of ultrahigh-frequency (UHF) micro-power rectifiers using diode-connected MOS transistors with very low threshold voltage is presented. The analysis takes into account the conduction angle, leakage current, and body effect in deriving the output voltage. Appropriate approximations allow analytical expressions for the output voltage, power consumption, and efficiency to be derived. A design procedure to maximize efficiency is presented. A superposition method is proposed to optimize the performance of multiple-output rectifiers. Constant-power scaling and area-efficient design are discussed. Using a 0.18-mum CMOS process with zero-threshold transistors, 900-MHz rectifiers with different conversion ratios were designed, and extensive HSPICE simulations show good agreement with the analysis. A 24-stage triple-output rectifier was designed and fabricated, and measurement results verified the validity of the analysis",
"title": ""
}
] |
[
{
"docid": "e730935b097cb4c4f36221d774d2e63a",
"text": "This paper outlines key design principles of Scilla—an intermediatelevel language for verified smart contracts. Scilla provides a clean separation between the communication aspect of smart contracts on a blockchain, allowing for the rich interaction patterns, and a programming component, which enjoys principled semantics and is amenable to formal verification. Scilla is not meant to be a high-level programming language, and we are going to use it as a translation target for high-level languages, such as Solidity, for performing program analysis and verification, before further compilation to an executable bytecode. We describe the automata-based model of Scilla, present its programming component and show how contract definitions in terms of automata streamline the process of mechanised verification of their safety and temporal properties.",
"title": ""
},
{
"docid": "901f94b231727cd3f17e9f0464337da2",
"text": "Vehicle dynamics is an essential topic in development of safety driving systems. These complex and integrated control units require precise information about vehicle dynamics, especially, tire/road contact forces. Nevertheless, it is lacking an effective and low-cost sensor to measure them directly. Therefore, this study presents a new method to estimate these parameters by using observer technologies and low-cost sensors which are available on the passenger cars in real environment. In our previous work, observers have been designed to estimate the vehicle tire/road contact forces and sideslip angles. However, the previous study just considered the situation of the vehicles running on a level road. In our recent study, vehicle mathematical models are reconstructed to suit banked road and inclined road. Then, Kalman Filter is used to improve the estimation of vehicle dynamics. Finally, the estimator is tested both on simulation CALLAS and on the experimental vehicle DYNA.",
"title": ""
},
{
"docid": "305efd1823009fe79c9f8ff52ddb5724",
"text": "We explore the problem of classifying images by the object categories they contain in the case of a large number of object categories. To this end we combine three ingredients: (i) shape and appearance representations that support spatial pyramid matching over a region of interest. This generalizes the representation of Lazebnik et al., (2006) from an image to a region of interest (ROI), and from appearance (visual words) alone to appearance and local shape (edge distributions); (ii) automatic selection of the regions of interest in training. This provides a method of inhibiting background clutter and adding invariance to the object instance 's position; and (iii) the use of random forests (and random ferns) as a multi-way classifier. The advantage of such classifiers (over multi-way SVM for example) is the ease of training and testing. Results are reported for classification of the Caltech-101 and Caltech-256 data sets. We compare the performance of the random forest/ferns classifier with a benchmark multi-way SVM classifier. It is shown that selecting the ROI adds about 5% to the performance and, together with the other improvements, the result is about a 10% improvement over the state of the art for Caltech-256.",
"title": ""
},
{
"docid": "98cb849504f344253bc879704c698f1e",
"text": "Serverless computing provides a small runtime container to execute lines of codes without infrastructure management which is similar to Platform as a Service (PaaS) but a functional level. Amazon started the event-driven compute named Lambda functions in 2014 with a 25 concurrent limitation, but it now supports at least a thousand of concurrent invocation to process event messages generated by resources like databases, storage and system logs. Other providers, i.e., Google, Microsoft, and IBM offer a dynamic scaling manager to handle parallel requests of stateless functions in which additional containers are provisioning on new compute nodes for distribution. However, while functions are often developed for microservices and lightweight workload, they are associated with distributed data processing using the concurrent invocations. We claim that the current serverless computing environments can support dynamic applications in parallel when a partitioned task is executable on a small function instance. We present results of throughput, network bandwidth, a file I/O and compute performance regarding the concurrent invocations. We deployed a series of functions for distributed data processing to address the elasticity and then demonstrated the differences between serverless computing and virtual machines for cost efficiency and resource utilization.",
"title": ""
},
{
"docid": "a1fe64aacbbe80a259feee2874645f09",
"text": "Database consolidation is gaining wide acceptance as a means to reduce the cost and complexity of managing database systems. However, this new trend poses many interesting challenges for understanding and predicting system performance. The consolidated databases in multi-tenant settings share resources and compete with each other for these resources. In this work we present an experimental study to highlight how these interactions can be fairly complex. We argue that individual database staging or workload profiling is not an adequate approach to understanding the performance of the consolidated system. Our initial investigations suggest that machine learning approaches that use monitored data to model the system can work well for important tasks.",
"title": ""
},
{
"docid": "d822157e1fd65e8ec6da4601deb65b06",
"text": "Bartholin's duct cysts and gland abscesses are common problems in women of reproductive age. Bartholin's glands are located bilaterally at the posterior introitus and drain through ducts that empty into the vestibule at approximately the 4 o'clock and 8 o'clock positions. These normally pea-sized glands are palpable only if the duct becomes cystic or a gland abscess develops. The differential diagnosis includes cystic and solid lesions of the vulva, such as epidermal inclusion cyst, Skene's duct cyst, hidradenoma papilliferum, and lipoma. The goal of management is to preserve the gland and its function if possible. Office-based procedures include insertion of a Word catheter for a duct cyst or gland abscess, and marsupialization of a cyst; marsupialization should not be used to treat a gland abscess. Broad-spectrum antibiotic therapy is warranted only when cellulitis is present. Excisional biopsy is reserved for use in ruling out adenocarcinoma in menopausal or perimenopausal women with an irregular, nodular Bartholin's gland mass.",
"title": ""
},
{
"docid": "0e68120ea21beb2fdaff6538aa342aa5",
"text": "The development of a truly non-invasive continuous glucose sensor is an elusive goal. We describe the rise and fall of the Pendra device. In 2000 the company Pendragon Medical introduced a truly non-invasive continuous glucose-monitoring device. This system was supposed to work through so-called impedance spectroscopy. Pendra was Conformité Européenne (CE) approved in May 2003. For a short time the Pendra was available on the Dutch direct-to-consumer market. A post-marketing reliability study was performed in six type 1 diabetes patients. Mean absolute difference between Pendra glucose values and values obtained through self-monitoring of blood glucose was 52%; the Pearson’s correlation coefficient was 35.1%; and a Clarke error grid showed 4.3% of the Pendra readings in the potentially dangerous zone E. We argue that the CE certification process for continuous glucose sensors should be made more transparent, and that a consensus on specific requirements for continuous glucose sensors is needed to prevent patient exposure to potentially dangerous situations.",
"title": ""
},
{
"docid": "72138b8acfb7c9e11cfd92c0b78a737c",
"text": "We study the task of entity linking for tweets, which tries to associate each mention in a tweet with a knowledge base entry. Two main challenges of this task are the dearth of information in a single tweet and the rich entity mention variations. To address these challenges, we propose a collective inference method that simultaneously resolves a set of mentions. Particularly, our model integrates three kinds of similarities, i.e., mention-entry similarity, entry-entry similarity, and mention-mention similarity, to enrich the context for entity linking, and to address irregular mentions that are not covered by the entity-variation dictionary. We evaluate our method on a publicly available data set and demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "6aebae4d8ed0af23a38a945b85c3b6ff",
"text": "Modern web applications are conglomerations of JavaScript written by multiple authors: application developers routinely incorporate code from third-party libraries, and mashup applications synthesize data and code hosted at different sites. In current browsers, a web application’s developer and user must trust third-party code in libraries not to leak the user’s sensitive information from within applications. Even worse, in the status quo, the only way to implement some mashups is for the user to give her login credentials for one site to the operator of another site. Fundamentally, today’s browser security model trades privacy for flexibility because it lacks a sufficient mechanism for confining untrusted code. We present COWL, a robust JavaScript confinement system for modern web browsers. COWL introduces label-based mandatory access control to browsing contexts in a way that is fully backwardcompatible with legacy web content. We use a series of case-study applications to motivate COWL’s design and demonstrate how COWL allows both the inclusion of untrusted scripts in applications and the building of mashups that combine sensitive information from multiple mutually distrusting origins, all while protecting users’ privacy. Measurements of two COWL implementations, one in Firefox and one in Chromium, demonstrate a virtually imperceptible increase in page-load latency.",
"title": ""
},
{
"docid": "638cd8a942b13e1fa80c56e1f1fa1318",
"text": "Realistic behavior of deformable objects is essential for many applications such as simulation for surgical training. Existing techniques of deformable modeling for real time simulation have either used approximate methods that are not physically accurate or linear methods that do not produce reasonable global behavior. Nonlinear finite element methods (FEM) are globally accurate, but conventional FEM is not real time. In this paper, we apply nonlinear FEM using mass lumping to produce a diagonal mass matrix that allows real time computation. Adaptive meshing is necessary to provide sufficient detail where required while minimizing unnecessary computation. We propose a scheme for mesh adaptation based on an extension of the progressive mesh concept, which we call dynamic progressive meshes.",
"title": ""
},
{
"docid": "70313633b2694adbaea3e82b30b1ca51",
"text": "The Global Assessment Scale (GAS) is a rating scale for evaluating the overall functioning of a subject during a specified time period on a continuum from psychological or psychiatric sickness to health. In five studies encompassing the range of population to which measures of overall severity of illness are likely to be applied, the GAS was found to have good reliability. GAS ratings were found to have a greater sensitivity to change over time than did other ratings of overall severity or specific symptom dimensions. Former inpatients in the community with a GAS rating below 40 had a higher probability of readmission to the hospital than did patients with higher GAS scores. The relative simplicity, reliability, and validity of the GAS suggests that it would be useful in a wide variety of clinical and research settings.",
"title": ""
},
{
"docid": "f3590467f740bc575e995389c9cc3684",
"text": "Action recognition has become a very important topic in computer vision, with many fundamental applications, in robotics, video surveillance, human–computer interaction, and multimedia retrieval among others and a large variety of approaches have been described. The purpose of this survey is to give an overview and categorization of the approaches used. We concentrate on approaches that aim on classification of full-body motions, such as kicking, punching, and waving, and we categorize them according to how they represent the spatial and temporal structure of actions; how they segment actions from an input stream of visual data; and how they learn a view-invariant representation of actions. 2010 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "05518ac3a07fdfb7bfede8df8a7a500b",
"text": "The prevalence of food allergy is rising for unclear reasons, with prevalence estimates in the developed world approaching 10%. Knowledge regarding the natural course of food allergies is important because it can aid the clinician in diagnosing food allergies and in determining when to consider evaluation for food allergy resolution. Many food allergies with onset in early childhood are outgrown later in childhood, although a minority of food allergy persists into adolescence and even adulthood. More research is needed to improve food allergy diagnosis, treatment, and prevention.",
"title": ""
},
{
"docid": "4282e931ced3f8776f6c4cffb5027f61",
"text": "OBJECTIVES\nTo provide an overview and tutorial of natural language processing (NLP) and modern NLP-system design.\n\n\nTARGET AUDIENCE\nThis tutorial targets the medical informatics generalist who has limited acquaintance with the principles behind NLP and/or limited knowledge of the current state of the art.\n\n\nSCOPE\nWe describe the historical evolution of NLP, and summarize common NLP sub-problems in this extensive field. We then provide a synopsis of selected highlights of medical NLP efforts. After providing a brief description of common machine-learning approaches that are being used for diverse NLP sub-problems, we discuss how modern NLP architectures are designed, with a summary of the Apache Foundation's Unstructured Information Management Architecture. We finally consider possible future directions for NLP, and reflect on the possible impact of IBM Watson on the medical field.",
"title": ""
},
{
"docid": "0e766418af18260be49c41050f571595",
"text": "In this article we present a survey on threats and vulnerability attacks on Bluetooth security mechanism. Bluetooth is the personal area network (PAN). It is the kind of wireless Ad hoc network. Low cost, low power, low complexity and robustness are the basic features of Bluetooth. It works on Radio frequency. Bluetooth Technology has many benefits like replacement of cable, easy file sharing, wireless synchronization and internet connectivity. As Bluetooth Technology becomes widespread, vulnerabilities in its security protocols are increasing which can be potentially dangerous to the privacy of a user’s personal information. Security in Bluetooth communication has been an active area of research for last few years. The article presents various security threats and vulnerability attacks on Bluetooth technology. Keywords— Bluetooth security; security protocol; vulnerability; security threats; bluejacking; eavesdropping; malicious attackers.",
"title": ""
},
{
"docid": "625a2f49cc032398be8514c5022956a7",
"text": "Product recommendation systems are important for major movie studios during the movie greenlight process and as part of machine learning personalization pipelines. Collaborative Filtering (CF) models have proved to be effective at powering recommender systems for online streaming services with explicit customer feedback data. CF models do not perform well in scenarios in which feedback data is not available, in ‘cold start’ situations like new product launches, and situations with markedly different customer tiers (e.g., high frequency customers vs. casual customers). Generative natural language models that create useful theme-based representations of an underlying corpus of documents can be used to represent new product descriptions, like new movie plots. When combined with CF, they have shown to increase the performance in ‘cold start’ situations. Outside of those cases though in which explicit customer feedback is available, recommender engines must rely on binary purchase data, which materially degrades performance. Fortunately, purchase data can be combined with product descriptions to generate meaningful representations of products and customer trajectories in a convenient product space in which proximity represents similarity (in the case of product-toproduct comparisons) and affinity (in the case of customer-toproduct comparisons). Learning to measure the distance between points in this space can be accomplished with a deep neural network that trains on customer histories and on dense vectorizations of product descriptions. We developed a system based on Collaborative (Deep) Metric Learning (CML) to predict the purchase probabilities of new theatrical releases. We trained and evaluated the model using a large dataset of customer histories spanning multiple years, and tested the model for a set of movies that were released outside of the training window. Initial experiments show gains relative to models that don’t train on collaborative preferences.",
"title": ""
},
{
"docid": "0a4749ecc23cb04f494a987268704f0f",
"text": "With the growing demand for digital information in health care, the electronic medical record (EMR) represents the foundation of health information technology. It is essential, however, in an industry still largely dominated by paper-based records, that such systems be accepted and used. This research evaluates registered nurses’, certified nurse practitioners and physician assistants’ acceptance of EMR’s as a means to predict, define and enhance use. The research utilizes the Unified Theory of Acceptance and Use of Technology (UTAUT) as the theoretical model, along with the Partial Least Square (PLS) analysis to estimate the variance. Overall, the findings indicate that UTAUT is able to provide a reasonable assessment of health care professionals’ acceptance of EMR’s with social influence a significant determinant of intention and use.",
"title": ""
},
{
"docid": "8ead9a0e083a65ef5cb5b3f7e9aea5be",
"text": "In this paper, a new resonant gate-drive circuit is proposed to recover a portion of the power-MOSFET-gate energy that is typically dissipated in high-frequency converters. The proposed circuit consists of four control switches and a small resonant inductance. The current through the resonant inductance is discontinuous in order to minimize circulating-current conduction loss that is present in other methods. The proposed circuit also achieves quick turn-on and turn-off transition times to reduce switching and conduction losses in power MOSFETs. An analysis, a design procedure, and experimental results are presented for the proposed circuit. Experimental results demonstrate that the proposed driver can recover 51% of the gate energy at 5-V gate-drive voltage.",
"title": ""
},
{
"docid": "79ff4bd891538a0d1b5a002d531257f2",
"text": "Reverse conducting IGBTs are fabricated in a large productive volume for soft switching applications, such as inductive heaters, microwave ovens or lamp ballast, since several years. To satisfy the requirements of hard switching applications, such as inverters in refrigerators, air conditioners or general purpose drives, the reverse recovery behavior of the integrated diode has to be optimized. Two promising concepts for such an optimization are based on a reduction of the charge- carrier lifetime or the anti-latch p+ implantation dose. It is shown that a combination of both concepts will lead to a device with a good reverse recovery behavior, low forward and reverse voltage drop and excellent over current turn- off capability of a trench field-stop IGBT.",
"title": ""
},
{
"docid": "3f103ae85438617e950bdc0cea72cd8b",
"text": "In this paper, we implement a novel parallelized approach of Local Binary Pattern (LBP) based face recognition algorithm on GPU. High performance rates have been achieved through maximizing the resource exploitation available in the GPU. The launch of GPU programming tools like Open source Computation Language (OpenCL) and (CUDA) have boosted the development of various applications on GPU. In this paper we implement a parallelized LBP algorithm on GPU using OpenCL programming tools. Programs developed under the OpenCL enable us to utilize GPU for general purpose computation with increased performance efficiency in terms of execution time. The experimental results based on the implementation on AMD 6500 GPU processor are observed to increase the computational performance of the system by to 30 folds in case of 1024×1024 images. The relative computational efficiency increases with increase in the size of the Image. This paper addresses several parallelization problems related to memory access and updating, divergent execution paths, understanding and realizing the OpenCL's concurrency and Execution models.",
"title": ""
}
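The entry above describes LBP-based face recognition parallelised with OpenCL. As a rough illustration of the per-pixel kernel such an implementation parallelises, here is a plain-NumPy sketch of 8-neighbour LBP coding and histogramming; the neighbour ordering, bin count and random test image are illustrative assumptions, and the GPU/OpenCL mapping and the recognition stage are not reproduced.

```python
# Minimal CPU sketch of 3x3 Local Binary Pattern feature extraction.
import numpy as np

def lbp_histogram(gray):
    """Return the normalised 256-bin histogram of 8-neighbour LBP codes."""
    g = np.asarray(gray, dtype=np.int32)
    center = g[1:-1, 1:-1]
    # Offsets of the 8 neighbours, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neighbour >= center).astype(np.int32) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # feature vector to feed a classifier

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(64, 64))  # stand-in for a face crop
    print(lbp_histogram(img)[:8])
```

Each pixel's code depends only on its 3x3 neighbourhood, which is what makes the computation embarrassingly parallel on a GPU.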
] |
scidocsrr
|
044965d98a98b3f69de5218a3629a2de
|
Can Natural Language Processing Become Natural Language Coaching?
|
[
{
"docid": "8788f14a2615f3065f4f0656a4a66592",
"text": "The ability to communicate in natural language has long been considered a defining characteristic of human intelligence. Furthermore, we hold our ability to express ideas in writing as a pinnacle of this uniquely human language facility—it defies formulaic or algorithmic specification. So it comes as no surprise that attempts to devise computer programs that evaluate writing are often met with resounding skepticism. Nevertheless, automated writing-evaluation systems might provide precisely the platforms we need to elucidate many of the features that characterize good and bad writing, and many of the linguistic, cognitive, and other skills that underlie the human capacity for both reading and writing. Using computers to increase our understanding of the textual features and cognitive skills involved in creating and comprehending written text will have clear benefits. It will help us develop more effective instructional materials for improving reading, writing, and other human communication abilities. It will also help us develop more effective technologies, such as search engines and questionanswering systems, for providing universal access to electronic information. A sketch of the brief history of automated writing-evaluation research and its future directions might lend some credence to this argument.",
"title": ""
},
{
"docid": "273153d0cf32162acb48ed989fa6d713",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
}
] |
[
{
"docid": "5ce4d44c4796a8fa506acf02074496f8",
"text": "Focus and scope The focus of the workshop was applications of logic programming, i.e., application problems, in whole or in part, that are solved by using logic programming languages and systems. A particular theme of interest was to explore the ease of development and maintenance, clarity, performance, and tradeoffs among these features, brought about by programming using a logic paradigm. The goal was to help provide directions for future research advances and application development. Real-world problems increasingly involve complex data and logic, making the use of logic programming more and more beneficial for such complex applications. Despite the diverse areas of application, their common underlying requirements are centered around ease of development and maintenance, clarity, performance, integration with other tools, and tradeoffs among these properties. Better understanding of these important principles will help advance logic programming research and lead to benefits for logic programming applications. The workshop was organized around four main areas of application: Enterprise Software, Control Systems, Intelligent Agents, and Deep Analysis. These general areas included topics such as business intelligence, ontology management, text processing, program analysis, model checking, access control, network programming, resource allocation, system optimization, decision making, and policy administration. The issues proposed for discussion included language features, implementation efficiency, tool support and integration, evaluation methods, as well as teaching and training.",
"title": ""
},
{
"docid": "0af670278702a8680401ceeb421a05f2",
"text": "We investigate semisupervised learning (SL) and pool-based active learning (AL) of a classifier for domains with label-scarce (LS) and unknown categories, i.e., defined categories for which there are initially no labeled examples. This scenario manifests, e.g., when a category is rare, or expensive to label. There are several learning issues when there are unknown categories: 1) it is a priori unknown which subset of (possibly many) measured features are needed to discriminate unknown from common classes and 2) label scarcity suggests that overtraining is a concern. Our classifier exploits the inductive bias that an unknown class consists of the subset of the unlabeled pool’s samples that are atypical (relative to the common classes) with respect to certain key (albeit a priori unknown) features and feature interactions. Accordingly, we treat negative log- $p$ -values on raw features as nonnegatively weighted derived feature inputs to our class posterior, with zero weights identifying irrelevant features. Through a hierarchical class posterior, our model accommodates multiple common classes, multiple LS classes, and unknown classes. For learning, we propose a novel semisupervised objective customized for the LS/unknown category scenarios. While several works minimize class decision uncertainty on unlabeled samples, we instead preserve this uncertainty [maximum entropy (maxEnt)] to avoid overtraining. Our experiments on a variety of UCI Machine learning (ML) domains show: 1) the use of $p$ -value features coupled with weight constraints leads to sparse solutions and gives significant improvement over the use of raw features and 2) for LS SL and AL, unlabeled samples are helpful, and should be used to preserve decision uncertainty (maxEnt), rather than to minimize it, especially during the early stages of AL. Our AL system, leveraging a novel sample-selection scheme, discovers unknown classes and discriminates LS classes from common ones, with sparing use of oracle labeling.",
"title": ""
},
{
"docid": "c3dd3dd59afe491fcc6b4cd1e32c88a3",
"text": "The Semantic Web drives towards the use of the Web for interacting with logically interconnected data. Through knowledge models such as Resource Description Framework (RDF), the Semantic Web provides a unifying representation of richly structured data. Adding logic to the Web implies the use of rules to make inferences, choose courses of action, and answer questions. This logic must be powerful enough to describe complex properties of objects but not so powerful that agents can be tricked by being asked to consider a paradox. The Web has several characteristics that can lead to problems when existing logics are used, in particular, the inconsistencies that inevitably arise due to the openness of the Web, where anyone can assert anything. N3Logic is a logic that allows rules to be expressed in a Web environment. It extends RDF with syntax for nested graphs and quantified variables and with predicates for implication and accessing resources on the Web, and functions including cryptographic, string, math. The main goal of N3Logic is to be a minimal extension to the RDF data model such that the same language can be used for logic and data. In this paper, we describe N3Logic and illustrate through examples why it is an appropriate logic for the Web.",
"title": ""
},
{
"docid": "8d0ccf63b21af19cb750eb571fc59ae6",
"text": "This paper presents a motor imagery based Brain Computer Interface (BCI) that uses single channel EEG signal from the C3 or C4 electrode placed in the motor area of the head. Time frequency analysis using Short Time Fourier Transform (STFT) is used to compute spectrogram from the EEG data. The STFT is scaled to have gray level values on which Grey Co-occurrence Matrix (GLCM) is computed. Texture descriptors such as correlation, energy, contrast, homogeneity and dissimilarity are calculated from the GLCM matrices. The texture descriptors are used to train a logistic regression classifier which is then used to classify the left and right motor imagery signals. The single-channel motor imagery classification system is tested offline with different subjects. The average offline accuracy is 87.6%. An online BCI system is implemented in openViBE with the single channel classification scheme. The stimuli presentations and feedback are implemented in Python and integrated with the openViBe BCI system.",
"title": ""
},
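As a sketch of the feature pipeline in the entry above (spectrogram from a single EEG channel, grey-level quantisation, GLCM, texture descriptors), the following Python fragment uses SciPy for the STFT and plain NumPy for the co-occurrence statistics. The sampling rate, window length, number of grey levels and the single horizontal neighbour offset are assumptions for illustration; the correlation descriptor and the logistic-regression classifier are omitted.

```python
import numpy as np
from scipy.signal import spectrogram

def glcm_features(eeg, fs=250, levels=16):
    # STFT spectrogram of the single-channel signal, in dB.
    _, _, sxx = spectrogram(eeg, fs=fs, nperseg=64)
    sxx = 10 * np.log10(sxx + 1e-12)
    # Quantise the spectrogram to `levels` grey values.
    edges = np.linspace(sxx.min(), sxx.max(), levels + 1)[1:-1]
    q = np.digitize(sxx, edges)
    # Grey-level co-occurrence matrix for horizontally adjacent cells.
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()
    ii, jj = np.indices((levels, levels))
    contrast = np.sum(glcm * (ii - jj) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(ii - jj)))
    dissimilarity = np.sum(glcm * np.abs(ii - jj))
    return np.array([contrast, energy, homogeneity, dissimilarity])

features = glcm_features(np.random.randn(2500))  # e.g. 10 s of C3 at 250 Hz
```

The resulting descriptor vector is what a simple classifier such as logistic regression would be trained on.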
{
"docid": "57ccd593f1be27463f9e609d700452dd",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Sustainable supply chain network design: An optimization-oriented review Majid Eskandarpour, Pierre Dejax, Joe Miemczyk, Olivier Péton",
"title": ""
},
{
"docid": "d2f36cc750703f5bbec2ea3ef4542902",
"text": "ixed reality (MR) is a kind of virtual reality (VR) but a broader concept than augmented reality (AR), which augments the real world with synthetic electronic data. On the opposite side, there is a term, augmented virtuality (AV), which enhances or augments the virtual environment (VE) with data from the real world. Mixed reality covers a continuum from AR to AV. This concept embraces the definition of MR stated by Paul Milgram. 1 We participated in the Key Technology Research Project on Mixed Reality Systems (MR Project) in Japan. The Japanese government and Canon funded the Mixed Reality Systems Laboratory (MR Lab) and launched it in January 1997. We completed this national project in March 2001. At the end of the MR Project, an event called MiRai-01 (mirai means future in Japanese) was held at Yokohama, Japan, to demonstrate this emerging technology all over the world. This event was held in conjunction with two international conferences, IEEE Virtual Reality 2001 and the Second International Symposium on Mixed Reality (ISMR) and aggregated about 3,000 visitors for two days. This project aimed to produce an innovative information technology that could be used in the first decade of the 21st century while expanding the limitations of traditional VR technology. The basic policy we maintained throughout this project was to emphasize a pragmatic system development rather than a theory and to make such a system always available to people. Since MR is an advanced form of VR, the MR system inherits a VR char-acteristic—users can experience the world of MR interactively. According to this policy, we tried to make the system work in real time. Then, we enhanced each of our systems in their response speed and image quality in real time to increase user satisfaction. We describe the aim and research themes of the MR Project in Tamura et al. 2 To develop MR systems along this policy, we studied the fundamental problems of AR and AV and developed several methods to solve them in addition to system development issues. For example, we created a new image-based rendering method for AV systems, hybrid registration methods, and new types of see-through head-mounted displays (ST-HMDs) for AR systems. Three universities in Japan—University of Tokyo (Michi-taka Hirose), University of Tsukuba (Yuichic Ohta), and Hokkaido University (Tohru Ifukube)—collaborated with us to study the broad research area of MR. The side-bar, \" Four Types of MR Visual Simulation, …",
"title": ""
},
{
"docid": "de6f4705f2d0f829c90e69c0f03a6b6f",
"text": "This paper investigates the opportunities and challenges in the use of dynamic radio transmit power control for prolonging the lifetime of body-wearable sensor devices used in continuous health monitoring. We first present extensive empirical evidence that the wireless link quality can change rapidly in body area networks, and a fixed transmit power results in either wasted energy (when the link is good) or low reliability (when the link is bad). We quantify the potential gains of dynamic power control in body-worn devices by benchmarking off-line the energy savings achievable for a given level of reliability.We then propose a class of schemes feasible for practical implementation that adapt transmit power in real-time based on feedback information from the receiver. We profile their performance against the offline benchmark, and provide guidelines on how the parameters can be tuned to achieve the desired trade-off between energy savings and reliability within the chosen operating environment. Finally, we implement and profile our scheme on a MicaZ mote based platform, and also report preliminary results from the ultra-low-power integrated healthcare monitoring platform we are developing at Toumaz Technology.",
"title": ""
},
{
"docid": "1350f4e274947881f4562ab6596da6fd",
"text": "Calls for widespread Computer Science (CS) education have been issued from the White House down and have been met with increased enrollment in CS undergraduate programs. Yet, these programs often suffer from high attrition rates. One successful approach to addressing the problem of low retention has been a focus on group work and collaboration. This paper details the design of a collaborative ITS (CIT) for foundational CS concepts including basic data structures and algorithms. We investigate the benefit of collaboration to student learning while using the CIT. We compare learning gains of our prior work in a non-collaborative system versus two methods of supporting collaboration in the collaborative-ITS. In our study of 60 students, we found significant learning gains for students using both versions. We also discovered notable differences related to student perception of tutor helpfulness which we will investigate in subsequent work.",
"title": ""
},
{
"docid": "a753be5a5f81ae77bfcb997a2748d723",
"text": "The design of electromagnetic (EM) interference filters for converter systems is usually based on measurements with a prototype during the final stages of the design process. Predicting the conducted EM noise spectrum of a converter by simulation in an early stage has the potential to save time/cost and to investigate different noise reduction methods, which could, for example, influence the layout or the design of the control integrated circuit. Therefore, the main sources of conducted differential-mode (DM) and common-mode (CM) noise of electronic ballasts for fluorescent lamps are identified in this paper. For each source, the noise spectrum is calculated and a noise propagation model is presented. The influence of the line impedance stabilizing network (LISN) and the test receiver is also included. Based on the presented models, noise spectrums are calculated and validated by measurements.",
"title": ""
},
{
"docid": "30941e0bc8575047d1adc8c20983823b",
"text": "The world has changed dramatically for wind farm operators and service providers in the last decade. Organizations whose turbine portfolios was counted in 10-100s ten years ago are now managing large scale operation and service programs for fleet sizes well above one thousand turbines. A big challenge such organizations now face is the question of how the massive amount of operational data that are generated by large fleets are effectively managed and how value is gained from the data. A particular hard challenge is the handling of data streams collected from advanced condition monitoring systems. These data are highly complex and typically require expert knowledge to interpret correctly resulting in poor scalability when moving to large Operation and Maintenance (O&M) platforms.",
"title": ""
},
{
"docid": "5dda89fbe7f5757588b5dff0e6c2565d",
"text": "Introductory psychology students (120 females and 120 males) rated attractiveness and fecundity of one of six computer-altered female gures representing three body-weight categories (underweight, normal weight and overweight) and two levels of waist-to-hip ratio (WHR), one in the ideal range (0.72) and one in the non-ideal range (0.86). Both females and males judged underweight gures to be more attractive than normal or overweight gures, regardless of WHR. The female gure with the high WHR (0.86) was judged to be more attractive than the gure with the low WHR (0.72) across all body-weight conditions. Analyses of fecundity ratings revealed an interaction between weight and WHR such that the models did not differ in the normal weight category, but did differ in the underweight (model with WHR of 0.72 was less fecund) and overweight (model with WHR of 0.86 was more fecund) categories. These ndings lend stronger support to sociocultural rather than evolutionary hypotheses.",
"title": ""
},
{
"docid": "ba7f157187fec26847c10fa772d71665",
"text": "We describe an implementation of the Hopcroft and Tarjan planarity test and em bedding algorithm The program tests the planarity of the input graph and either constructs a combinatorial embedding if the graph is planar or exhibits a Kuratowski subgraph if the graph is non planar",
"title": ""
},
{
"docid": "d8c4e6632f90c3dd864be93db881a382",
"text": "Document understanding techniques such as document clustering and multidocument summarization have been receiving much attention recently. Current document clustering methods usually represent the given collection of documents as a document-term matrix and then conduct the clustering process. Although many of these clustering methods can group the documents effectively, it is still hard for people to capture the meaning of the documents since there is no satisfactory interpretation for each document cluster. A straightforward solution is to first cluster the documents and then summarize each document cluster using summarization methods. However, most of the current summarization methods are solely based on the sentence-term matrix and ignore the context dependence of the sentences. As a result, the generated summaries lack guidance from the document clusters. In this article, we propose a new language model to simultaneously cluster and summarize documents by making use of both the document-term and sentence-term matrices. By utilizing the mutual influence of document clustering and summarization, our method makes; (1) a better document clustering method with more meaningful interpretation; and (2) an effective document summarization method with guidance from document clustering. Experimental results on various document datasets show the effectiveness of our proposed method and the high interpretability of the generated summaries.",
"title": ""
},
{
"docid": "a0b862a758c659b62da2114143bf7687",
"text": "The class imbalanced problem occurs in various disciplines when one of target classes has a tiny number of instances comparing to other classes. A typical classifier normally ignores or neglects to detect a minority class due to the small number of class instances. SMOTE is one of over-sampling techniques that remedies this situation. It generates minority instances within the overlapping regions. However, SMOTE randomly synthesizes the minority instances along a line joining a minority instance and its selected nearest neighbours, ignoring nearby majority instances. Our technique called SafeLevel-SMOTE carefully samples minority instances along the same line with different weight degree, called safe level. The safe level computes by using nearest neighbour minority instances. By synthesizing the minority instances more around larger safe level, we achieve a better accuracy performance than SMOTE and Borderline-SMOTE.",
"title": ""
},
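A minimal sketch of the safe-level idea from the entry above: synthetic minority samples are placed along the line between two minority instances, biased towards the endpoint with more minority neighbours. The gap rules below simplify the published algorithm and the parameter values are assumptions, so this is an illustration of the mechanism rather than a faithful reimplementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def safe_level_smote(X, y, minority_label, k=5, n_new=100, rng=None):
    """X, y are NumPy arrays; returns an array of synthetic minority samples."""
    rng = rng or np.random.default_rng(0)
    X_min = X[y == minority_label]
    nn_all = NearestNeighbors(n_neighbors=k + 1).fit(X)
    nn_min = NearestNeighbors(n_neighbors=min(k + 1, len(X_min))).fit(X_min)

    def safe_level(p):
        # Number of minority samples among the k nearest neighbours of p.
        idx = nn_all.kneighbors(p.reshape(1, -1), return_distance=False)[0][1:]
        return int(np.sum(y[idx] == minority_label))

    synthetic = []
    for _ in range(n_new):
        p = X_min[rng.integers(len(X_min))]
        q_idx = nn_min.kneighbors(p.reshape(1, -1), return_distance=False)[0]
        q = X_min[rng.choice(q_idx[1:])] if len(q_idx) > 1 else p
        sl_p, sl_q = safe_level(p), safe_level(q)
        if sl_p == 0 and sl_q == 0:
            continue                      # unsafe region: generate nothing
        if sl_q == 0:
            gap = 0.0                     # stay on the safer sample p
        else:
            ratio = sl_p / sl_q
            gap = rng.uniform(0, min(1.0, ratio))  # lean towards the safer side
        synthetic.append(p + gap * (q - p))
    return np.array(synthetic)
```

In contrast with plain SMOTE, the gap is not uniform on [0, 1]: it shrinks towards whichever endpoint is surrounded by more minority neighbours.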
{
"docid": "ce74305a30bd322a78b3827921ae7224",
"text": "While computerised tomography (CT) may have been the first imaging tool to study human brain, it has not yet been implemented into clinical decision making process for diagnosis of Alzheimer's disease (AD). On the other hand, with the nature of being prevalent, inexpensive and non-invasive, CT does present diagnostic features of AD to a great extent. This study explores the significance and impact on the application of the burgeoning deep learning techniques to the task of classification of CT brain images, in particular utilising convolutional neural network (CNN), aiming at providing supplementary information for the early diagnosis of Alzheimer's disease. Towards this end, three categories of CT images (N = 285) are clustered into three groups, which are AD, lesion (e.g. tumour) and normal ageing. In addition, considering the characteristics of this collection with larger thickness along the direction of depth (z) (~3-5 mm), an advanced CNN architecture is established integrating both 2D and 3D CNN networks. The fusion of the two CNN networks is subsequently coordinated based on the average of Softmax scores obtained from both networks consolidating 2D images along spatial axial directions and 3D segmented blocks respectively. As a result, the classification accuracy rates rendered by this elaborated CNN architecture are 85.2%, 80% and 95.3% for classes of AD, lesion and normal respectively with an average of 87.6%. Additionally, this improved CNN network appears to outperform the others when in comparison with 2D version only of CNN network as well as a number of state of the art hand-crafted approaches. As a result, these approaches deliver accuracy rates in percentage of 86.3, 85.6 ± 1.10, 86.3 ± 1.04, 85.2 ± 1.60, 83.1 ± 0.35 for 2D CNN, 2D SIFT, 2D KAZE, 3D SIFT and 3D KAZE respectively. The two major contributions of the paper constitute a new 3-D approach while applying deep learning technique to extract signature information rooted in both 2D slices and 3D blocks of CT images and an elaborated hand-crated approach of 3D KAZE.",
"title": ""
},
{
"docid": "22572c36ce1b816ee30ef422cb290dea",
"text": "Visual context is important in object recognition and it is still an open problem in computer vision. Along with the advent of deep convolutional neural networks (CNN), using contextual information with such systems starts to receive attention in the literature. At the same time, aerial imagery is gaining momentum. While advances in deep learning make good progress in aerial image analysis, this problem still poses many great challenges. Aerial images are often taken under poor lighting conditions and contain low resolution objects, many times occluded by trees or taller buildings. In this domain, in particular, visual context could be of great help, but there are still very few papers that consider context in aerial image understanding. Here we introduce context as a complementary way of recognizing objects. We propose a dual-stream deep neural network model that processes information along two independent pathways, one for local and another for global visual reasoning. The two are later combined in the final layers of processing. Our model learns to combine local object appearance as well as information from the larger scene at the same time and in a complementary way, such that together they form a powerful classifier. We test our dual-stream network on the task of segmentation of buildings and roads in aerial images and obtain state-of-the-art results on the Massachusetts Buildings Dataset. We also introduce two new datasets, for buildings and road segmentation, respectively, and study the relative importance of local appearance vs. the larger scene, as well as their performance in combination. While our local-global model could also be useful in general recognition tasks, we clearly demonstrate the effectiveness of visual context in conjunction with deep nets for aerial image",
"title": ""
},
{
"docid": "9b3db8c2632ad79dc8e20435a81ef2a1",
"text": "Social networks have changed the way information is delivered to the customers, shifting from traditional one-to-many to one-to-one communication. Opinion mining and sentiment analysis offer the possibility to understand the user-generated comments and explain how a certain product or a brand is perceived. Classification of different types of content is the first step towards understanding the conversation on the social media platforms. Our study analyses the content shared on Facebook in terms of topics, categories and shared sentiment for the domain of a sponsored Facebook brand page. Our results indicate that Product, Sales and Brand are the three most discussed topics, while Requests and Suggestions, Expressing Affect and Sharing are the most common intentions for participation. We discuss the implications of our findings for social media marketing and opinion mining.",
"title": ""
},
{
"docid": "5ccb3ab32054741928b8b93eea7a9ce2",
"text": "A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work list scheduling to performers, identification of time constraints and more. The goal of this paper is to address an important issue in workflows modelling and specification, which is data flow, its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, mainly focussing on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflows specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment may prevent the process from correct execution, execute process on inconsistent data or even lead to process suspension. A discussion on essential requirements of the workflow data model in order to support data validation is also given.",
"title": ""
},
{
"docid": "2fa356bb47bf482f8585c882ad5d9409",
"text": "As an important arithmetic module, the adder plays a key role in determining the speed and power consumption of a digital signal processing (DSP) system. The demands of high speed and power efficiency as well as the fault tolerance nature of some applications have promoted the development of approximate adders. This paper reviews current approximate adder designs and provides a comparative evaluation in terms of both error and circuit characteristics. Simulation results show that the equal segmentation adder (ESA) is the most hardware-efficient design, but it has the lowest accuracy in terms of error rate (ER) and mean relative error distance (MRED). The error-tolerant adder type II (ETAII), the speculative carry select adder (SCSA) and the accuracy-configurable approximate adder (ACAA) are equally accurate (provided that the same parameters are used), however ETATII incurs the lowest power-delay-product (PDP) among them. The almost correct adder (ACA) is the most power consuming scheme with a moderate accuracy. The lower-part-OR adder (LOA) is the slowest, but it is highly efficient in power dissipation.",
"title": ""
},
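To make one of the reviewed designs concrete, here is a small behavioural model of the lower-part-OR adder (LOA): the low k bits of the operands are combined with a bitwise OR while the upper bits are added exactly. The bit-width, the choice of k, and the omission of the carry-in that some LOA variants derive from the lower part are illustrative simplifications.

```python
def loa_add(a, b, width=16, k=4):
    """Approximate addition: OR the low k bits, add the remaining bits exactly."""
    mask_lo = (1 << k) - 1
    lower = (a & mask_lo) | (b & mask_lo)   # approximate lower part
    upper = ((a >> k) + (b >> k)) << k      # exact upper part (no carry-in from below)
    return (upper | lower) & ((1 << (width + 1)) - 1)

# Compare the approximate sum against the exact one for a few operand pairs.
for a, b in [(0x1234, 0x0FF7), (0x00FF, 0x0001)]:
    print(hex(a + b), hex(loa_add(a, b)))
```

The error is confined to the low k bits, which is why LOA trades a bounded error distance for a much shorter carry chain and lower power.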
{
"docid": "0eca851ca495916502788c9931d1c1f3",
"text": "Information in various applications is often expressed as character sequences over a finite alphabet (e.g., DNA or protein sequences). In Big Data era, the lengths and sizes of these sequences are growing explosively, leading to grand challenges for the classical NP-hard problem, namely searching for the Multiple Longest Common Subsequences (MLCS) from multiple sequences. In this paper, we first unveil the fact that the state-of-the-art MLCS algorithms are unable to be applied to long and large-scale sequences alignments. To overcome their defects and tackle the longer and large-scale or even big sequences alignments, based on the proposed novel problem-solving model and various strategies, e.g., parallel topological sorting, optimal calculating, reuse of intermediate results, subsection calculation and serialization, etc., we present a novel parallel MLCS algorithm. Exhaustive experiments on the datasets of both synthetic and real-world biological sequences demonstrate that both the time and space of the proposed algorithm are only linear in the number of dominants from aligned sequences, and the proposed algorithm significantly outperforms the state-of-the-art MLCS algorithms, being applicable to longer and large-scale sequences alignments.",
"title": ""
}
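For orientation only, the fragment below shows the classical dynamic programme for the two-sequence LCS, the base case of the NP-hard MLCS problem the entry above targets. The dominant-point representation and the parallel strategies of the proposed algorithm are not reproduced here; the example sequences are arbitrary.

```python
def lcs(a, b):
    """Return one longest common subsequence of strings a and b (O(n*m) DP)."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack one optimal subsequence.
    out, i, j = [], n, m
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ACCGGTCG", "GTCGTTCG"))  # a common subsequence of two DNA strings
```

The table grows multiplicatively with each additional sequence, which is exactly the blow-up the dominant-point, parallel approach is designed to avoid.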
] |
scidocsrr
|
7a37d5a06686520063f899ab51cbab9c
|
EMMA: A New Platform to Evaluate Hardware-based Mobile Malware Analyses
|
[
{
"docid": "2f2801e502492a648a0758b6e33fe19d",
"text": "Intel is developing the Intel® Software Guard Extensions (Intel® SGX) technology, an extension to Intel® Architecture for generating protected software containers. The container is referred to as an enclave. Inside the enclave, software’s code, data, and stack are protected by hardware enforced access control policies that prevent attacks against the enclave’s content. In an era where software and services are deployed over the Internet, it is critical to be able to securely provision enclaves remotely, over the wire or air, to know with confidence that the secrets are protected and to be able to save secrets in non-volatile memory for future use. This paper describes the technology components that allow provisioning of secrets to an enclave. These components include a method to generate a hardware based attestation of the software running inside an enclave and a means for enclave software to seal secrets and export them outside of the enclave (for example store them in non-volatile memory) such that only the same enclave software would be able un-seal them back to their original form.",
"title": ""
}
] |
[
{
"docid": "9f5b61ad41dceff67ab328791ed64630",
"text": "In this paper we present a resource-adaptive framework for real-time vision-aided inertial navigation. Specifically, we focus on the problem of visual-inertial odometry (VIO), in which the objective is to track the motion of a mobile platform in an unknown environment. Our primary interest is navigation using miniature devices with limited computational resources, similar for example to a mobile phone. Our proposed estimation framework consists of two main components: (i) a hybrid EKF estimator that integrates two algorithms with complementary computational characteristics, namely a sliding-window EKF and EKF-based SLAM, and (ii) an adaptive image-processing module that adjusts the number of detected image features based oadaptive image-processing module that adjusts the number of detected image features based on the availability of resources. By combining the hybrid EKF estimator, which optimally utilizes the feature measurements, with the adaptive image-processing algorithm, the proposed estimation architecture fully utilizes the system's computational resources. We present experimental results showing that the proposed estimation framework isn the availability of resources. By combining the hybrid EKF estimator, which optimally utilizes the feature measurements, with the adaptive image-processing algorithm, the proposed estimation architecture fully utilizes the system's computational resources. We present experimental results showing that the proposed estimation framework is capable of real-time processing of image and inertial data on the processor of a mobile phone.",
"title": ""
},
{
"docid": "de64aaa37e53beacb832d3686b293a9b",
"text": "By using a population-based cohort of the general Dutch population, the authors studied whether an excessively negative orientation toward pain (pain catastrophizing) and fear of movement/(re)injury (kinesiophobia) are important in the etiology of chronic low back pain and associated disability, as clinical studies have suggested. A total of 1,845 of the 2,338 inhabitants (without severe disease) aged 25-64 years who participated in a 1998 population-based questionnaire survey on musculoskeletal pain were sent a second questionnaire after 6 months; 1,571 (85 percent) participated. For subjects with low back pain at baseline, a high level of pain catastrophizing predicted low back pain at follow-up (odds ratio (OR) = 1.7, 95% confidence interval (CI): 1.0, 2.8) and chronic low back pain (OR = 1.7, 95% CI: 1.0, 2.3), in particular severe low back pain (OR = 3.0, 95% CI: 1.7, 5.2) and low back pain with disability (OR = 3.0, 95% CI: 1.7, 5.4). A high level of kinesiophobia showed similar associations. The significant associations remained after adjustment for pain duration, pain severity, or disability at baseline. For those without low back pain at baseline, a high level of pain catastrophizing or kinesiophobia predicted low back pain with disability during follow-up. These cognitive and emotional factors should be considered when prevention programs are developed for chronic low back pain and related disability.",
"title": ""
},
{
"docid": "7d285ca842be3d85d218dd70f851194a",
"text": "CONTEXT\nThe Atkins diet books have sold more than 45 million copies over 40 years, and in the obesity epidemic this diet and accompanying Atkins food products are popular. The diet claims to be effective at producing weight loss despite ad-libitum consumption of fatty meat, butter, and other high-fat dairy products, restricting only the intake of carbohydrates to under 30 g a day. Low-carbohydrate diets have been regarded as fad diets, but recent research questions this view.\n\n\nSTARTING POINT\nA systematic review of low-carbohydrate diets found that the weight loss achieved is associated with the duration of the diet and restriction of energy intake, but not with restriction of carbohydrates. Two groups have reported longer-term randomised studies that compared instruction in the low-carbohydrate diet with a low-fat calorie-reduced diet in obese patients (N Engl J Med 2003; 348: 2082-90; Ann Intern Med 2004; 140: 778-85). Both trials showed better weight loss on the low-carbohydrate diet after 6 months, but no difference after 12 months. WHERE NEXT?: The apparent paradox that ad-libitum intake of high-fat foods produces weight loss might be due to severe restriction of carbohydrate depleting glycogen stores, leading to excretion of bound water, the ketogenic nature of the diet being appetite suppressing, the high protein-content being highly satiating and reducing spontaneous food intake, or limited food choices leading to decreased energy intake. Long-term studies are needed to measure changes in nutritional status and body composition during the low-carbohydrate diet, and to assess fasting and postprandial cardiovascular risk factors and adverse effects. Without that information, low-carbohydrate diets cannot be recommended.",
"title": ""
},
{
"docid": "fa91331ef31de20ae63cc6c8ab33f062",
"text": "Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low-resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural, activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full body models that move naturally with detailed hand motions and a realism not seen before in full body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.",
"title": ""
},
{
"docid": "de70b208289bad1bc410bcb7a76e56df",
"text": "Instant Messaging chat sessions are realtime text-based conversations which can be analyzed using dialogue-act models. We describe a statistical approach for modelling and detecting dialogue acts in Instant Messaging dialogue. This involved the collection of a small set of task-based dialogues and annotating them with a revised tag set. We then dealt with segmentation and synchronisation issues which do not arise in spoken dialogue. The model we developed combines naive Bayes and dialogue-act n-grams to obtain better than 80% accuracy in our tagging experiment.",
"title": ""
},
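A hedged sketch of the combination described in the entry above: a naive Bayes word likelihood multiplied by a dialogue-act bigram prior, with add-one smoothing. The class structure, smoothing constant and input format are assumptions for illustration; segmentation and synchronisation of chat turns are not handled.

```python
import math
from collections import defaultdict

class DialogueActTagger:
    def __init__(self, alpha=1.0):
        self.alpha = alpha                                  # smoothing constant
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.bigram_counts = defaultdict(lambda: defaultdict(int))
        self.acts, self.vocab = set(), set()

    def train(self, dialogues):
        # dialogues: list of conversations, each a list of (tokens, act) turns.
        for turns in dialogues:
            prev = "<s>"
            for tokens, act in turns:
                self.acts.add(act)
                self.bigram_counts[prev][act] += 1
                for w in tokens:
                    self.word_counts[act][w] += 1
                    self.vocab.add(w)
                prev = act

    def tag(self, tokens, prev_act="<s>"):
        def score(act):
            bg = self.bigram_counts[prev_act]
            s = math.log((bg[act] + self.alpha) /
                         (sum(bg.values()) + self.alpha * len(self.acts)))
            wc = self.word_counts[act]
            total = sum(wc.values()) + self.alpha * len(self.vocab)
            for w in tokens:
                s += math.log((wc[w] + self.alpha) / total)
            return s
        return max(self.acts, key=score)
```

At tagging time the previous act can be the tagger's own prediction for the preceding turn, giving a greedy left-to-right pass over the chat session.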
{
"docid": "001b3155f0d67fd153173648cd483ac2",
"text": "A new approach to the problem of multimodality medical image registration is proposed, using a basic concept from information theory, mutual information (MI), or relative entropy, as a new matching criterion. The method presented in this paper applies MI to measure the statistical dependence or information redundancy between the image intensities of corresponding voxels in both images, which is assumed to be maximal if the images are geometrically aligned. Maximization of MI is a very general and powerful criterion, because no assumptions are made regarding the nature of this dependence and no limiting constraints are imposed on the image content of the modalities involved. The accuracy of the MI criterion is validated for rigid body registration of computed tomography (CT), magnetic resonance (MR), and photon emission tomography (PET) images by comparison with the stereotactic registration solution, while robustness is evaluated with respect to implementation issues, such as interpolation and optimization, and image content, including partial overlap and image degradation. Our results demonstrate that subvoxel accuracy with respect to the stereotactic reference solution can be achieved completely automatically and without any prior segmentation, feature extraction, or other preprocessing steps which makes this method very well suited for clinical applications.",
"title": ""
},
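The matching criterion itself is easy to sketch: mutual information estimated from the joint grey-level histogram of the overlapping voxels of the two images, as in the Python fragment below. The bin count and the random test arrays are assumptions, and the rigid-body parameter search, interpolation and optimisation discussed in the paper are outside this sketch.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Estimate MI between two equally shaped images from their joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                  # joint intensity distribution
    p_a = p_ab.sum(axis=1, keepdims=True)       # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)       # marginal of image B
    nz = p_ab > 0                               # avoid log(0) terms
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# Identical images give maximal dependence; an unrelated image lowers the score.
a = np.random.rand(128, 128)
print(mutual_information(a, a), mutual_information(a, np.random.rand(128, 128)))
```

A registration loop would re-sample one image under candidate rigid-body transforms and keep the transform that maximises this score.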
{
"docid": "13503c2cb633e162f094727df62092d3",
"text": "In this article, we investigate word sense distributions in noun compounds (NCs). Our primary goal is to disambiguate the word sense of component words in NCs, based on investigation of “semantic collocation” between them. We use sense collocation and lexical substitution to build supervised and unsupervised word sense disambiguation (WSD) classifiers, and show our unsupervised learner to be superior to a benchmark WSD system. Further, we develop a word sense-based approach to interpreting the semantic relations in NCs.",
"title": ""
},
{
"docid": "a278abfa0501077eb2f71cbb272689d6",
"text": "Among the many emerging non-volatile memory technologies, chalcogenide (i.e. GeSbTe/GST) based phase change random access memory (PRAM) has shown particular promise. While accurate simulations are required for reducing programming current and enabling higher integration density, many challenges remain for improved simulation of PRAM cell operation including nanoscale thermal conduction and phase change. This work simulates the fully coupled electrical and thermal transport and phase change in 2D PRAM geometries, with specific attention to the impact of thermal boundary resistance between the GST and surrounding materials. For GST layer thicknesses between 25 and 75nm, the interface resistance reduces the predicted programming current and power by 31% and 53%, respectively, for a typical reset transition. The calculations also show the large sensitivity of programming voltage to the GST thermal conductivity. These results show the importance of temperature-dependent thermal properties of materials and interfaces in PRAM cells",
"title": ""
},
{
"docid": "2e65ae613aa80aac27d5f8f6e00f5d71",
"text": "Industrial systems, e.g., wind turbines, generate big amounts of data from reliable sensors with high velocity. As it is unfeasible to store and query such big amounts of data, only simple aggregates are currently stored. However, aggregates remove fluctuations and outliers that can reveal underlying problems and limit the knowledge to be gained from historical data. As a remedy, we present the distributed Time Series Management System (TSMS) ModelarDB that uses models to store sensor data. We thus propose an online, adaptive multi-model compression algorithm that maintains data values within a user-defined error bound (possibly zero). We also propose (i) a database schema to store time series as models, (ii) methods to push-down predicates to a key-value store utilizing this schema, (iii) optimized methods to execute aggregate queries on models, (iv) a method to optimize execution of projections through static code-generation, and (v) dynamic extensibility that allows new models to be used without recompiling the TSMS. Further, we present a general modular distributed TSMS architecture and its implementation, ModelarDB, as a portable library, using Apache Spark for query processing and Apache Cassandra for storage. An experimental evaluation shows that, unlike current systems, ModelarDB hits a sweet spot and offers fast ingestion, good compression, and fast, scalable online aggregate query processing at the same time. This is achieved by dynamically adapting to data sets using multiple models. The system degrades gracefully as more outliers occur and the actual errors are much lower than the bounds. PVLDB Reference Format: Søren Kejser Jensen, Torben Bach Pedersen, Christian Thomsen. ModelarDB: Modular Model-Based Time Series Management with Spark and Cassandra. PVLDB, 11(11): 1688-1701, 2018. DOI: https://doi.org/10.14778/3236187.3236215",
"title": ""
},
{
"docid": "2fe1ed0f57e073372e4145121e87d7c6",
"text": "Information visualization (InfoVis), the study of transforming data, information, and knowledge into interactive visual representations, is very important to users because it provides mental models of information. The boom in big data analytics has triggered broad use of InfoVis in a variety of domains, ranging from finance to sports to politics. In this paper, we present a comprehensive survey and key insights into this fast-rising area. The research on InfoVis is organized into a taxonomy that contains four main categories, namely empirical methodologies, user interactions, visualization frameworks, and applications, which are each described in terms of their major goals, fundamental principles, recent trends, and state-of-the-art approaches. At the conclusion of this survey, we identify existing technical challenges and propose directions for future research.",
"title": ""
},
{
"docid": "20cc5c4aa870918f123e78490d5a5a73",
"text": "The interest and demand for female genital rejuvenation surgery are steadily increasing. This report presents a concept of genital beautification consisting of labia minora reduction, labia majora augmentation by autologous fat transplantation, labial brightening by laser, mons pubis reduction by liposuction, and vaginal tightening if desired. Genital beautification was performed for 124 patients between May 2009 and January 2012 and followed up for 1 year to obtain data about satisfaction with the surgery. Of the 124 female patients included in the study, 118 (95.2 %) were happy and 4 (3.2 %) were very happy with their postoperative appearance. In terms of postoperative functionality, 84 patients (67.7 %) were happy and 40 (32.3 %) were very happy. Only 2 patients (1.6 %) were not satisfied with the aesthetic result of their genital beautification procedures, and 10 patients (8.1 %) experienced wound dehiscence. The described technique of genital beautification combines different aesthetic female genital surgery techniques. Like other aesthetic surgeries, these procedures are designed for the subjective improvement of the appearance and feelings of the patients. The effects of the operation are functional and psychological. They offer the opportunity for sexual stimulation and satisfaction. The complication rate is low. Superior aesthetic results and patient satisfaction can be achieved by applying this technique. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .",
"title": ""
},
{
"docid": "6be67fd8fb351d779c355762e188809b",
"text": "Analysis and examination of data is performed in digital forensics. Nowadays computer is the major source of communication which can also be used by the investigators to gain forensically relevant information. Forensic analysis can be done in static and live modes. Traditional approach provides incomplete evidentiary data, while live analysis tools can provide the investigators a more accurate and consistent picture of the current and previously running processes. Many important system related information present in volatile memory cannot be effectively recovered by using static analysis techniques. In this paper, we present a critical review of static and live analysis approaches and we evaluate the reliability of different tools and techniques used in static and live digital forensic analysis.",
"title": ""
},
{
"docid": "3477975d58a4b30a636108e1c11f5e61",
"text": "In this paper, an output feedback nonlinear control is proposed for a hydraulic system with mismatched modeling uncertainties in which an extended state observer (ESO) and a nonlinear robust controller are synthesized via the backstepping method. The ESO is designed to estimate not only the unmeasured system states but also the modeling uncertainties. The nonlinear robust controller is designed to stabilize the closed-loop system. The proposed controller accounts for not only the nonlinearities (e.g., nonlinear flow features of servovalve), but also the modeling uncertainties (e.g., parameter derivations and unmodeled dynamics). Furthermore, the controller theoretically guarantees a prescribed tracking transient performance and final tracking accuracy, while achieving asymptotic tracking performance in the absence of time-varying uncertainties, which is very important for high-accuracy tracking control of hydraulic servo systems. Extensive comparative experimental results are obtained to verify the high-performance nature of the proposed control strategy.",
"title": ""
},
{
"docid": "03dc2c32044a41715991d900bb7ec783",
"text": "The analysis of large scale data logged from complex cyber-physical systems, such as microgrids, often entails the discovery of invariants capturing functional as well as operational relationships underlying such large systems. We describe a latent factor approach to infer invariants underlying system variables and how we can leverage these relationships to monitor a cyber-physical system. In particular we illustrate how this approach helps rapidly identify outliers during system operation.",
"title": ""
},
{
"docid": "768ed187f94163727afd011817a306c6",
"text": "Although interest regarding the role of dispositional affect in job behaviors has surged in recent years, the true magnitude of affectivity's influence remains unknown. To address this issue, the authors conducted a qualitative and quantitative review of the relationships between positive and negative affectivity (PA and NA, respectively) and various performance dimensions. A series of meta-analyses based on 57 primary studies indicated that PA and NA predicted task performance in the hypothesized directions and that the relationships were strongest for subjectively rated versus objectively rated performance. In addition, PA was related to organizational citizenship behaviors but not withdrawal behaviors, and NA was related to organizational citizenship behaviors, withdrawal behaviors, counterproductive work behaviors, and occupational injury. Mediational analyses revealed that affect operated through different mechanisms in influencing the various performance dimensions. Regression analyses documented that PA and NA uniquely predicted task performance but that extraversion and neuroticism did not, when the four were considered simultaneously. Discussion focuses on the theoretical and practical implications of these findings. (PsycINFO Database Record (c) 2009 APA, all rights reserved).",
"title": ""
},
{
"docid": "b80df19e67d2bbaabf4da18d7b5af4e2",
"text": "This paper presents a data-driven approach for automatically generating cartoon faces in different styles from a given portrait image. Our stylization pipeline consists of two steps: an offline analysis step to learn about how to select and compose facial components from the databases; a runtime synthesis step to generate the cartoon face by assembling parts from a database of stylized facial components. We propose an optimization framework that, for a given artistic style, simultaneously considers the desired image-cartoon relationships of the facial components and a proper adjustment of the image composition. We measure the similarity between facial components of the input image and our cartoon database via image feature matching, and introduce a probabilistic framework for modeling the relationships between cartoon facial components. We incorporate prior knowledge about image-cartoon relationships and the optimal composition of facial components extracted from a set of cartoon faces to maintain a natural, consistent, and attractive look of the results. We demonstrate generality and robustness of our approach by applying it to a variety of portrait images and compare our output with stylized results created by artists via a comprehensive user study.",
"title": ""
},
{
"docid": "8966f87b2441cc2c348e25e3503e766c",
"text": "Fuzzing is a simple yet effective approach to discover software bugs utilizing randomly generated inputs. However, it is limited by coverage and cannot find bugs hidden in deep execution paths of the program because the randomly generated inputs fail complex sanity checks, e.g., checks on magic values, checksums, or hashes. To improve coverage, existing approaches rely on imprecise heuristics or complex input mutation techniques (e.g., symbolic execution or taint analysis) to bypass sanity checks. Our novel method tackles coverage from a different angle: by removing sanity checks in the target program. T-Fuzz leverages a coverage-guided fuzzer to generate inputs. Whenever the fuzzer can no longer trigger new code paths, a light-weight, dynamic tracing based technique detects the input checks that the fuzzer-generated inputs fail. These checks are then removed from the target program. Fuzzing then continues on the transformed program, allowing the code protected by the removed checks to be triggered and potential bugs discovered. Fuzzing transformed programs to find bugs poses two challenges: (1) removal of checks leads to over-approximation and false positives, and (2) even for true bugs, the crashing input on the transformed program may not trigger the bug in the original program. As an auxiliary post-processing step, T-Fuzz leverages a symbolic execution-based approach to filter out false positives and reproduce true bugs in the original program. By transforming the program as well as mutating the input, T-Fuzz covers more code and finds more true bugs than any existing technique. We have evaluated T-Fuzz on the DARPA Cyber Grand Challenge dataset, LAVA-M dataset and 4 real-world programs (pngfix, tiffinfo, magick and pdftohtml). For the CGC dataset, T-Fuzz finds bugs in 166 binaries, Driller in 121, and AFL in 105. In addition, found 3 new bugs in previously-fuzzed programs and libraries.",
"title": ""
},
{
"docid": "ab8af5f48be6b0b7769b8875e528be84",
"text": "A feedback vertex set of a graph is a subset of vertices that contains at least one vertex from every cycle in the graph. The problem considered is that of finding a minimum feedback vertex set given a weighted and undirected graph. We present a simple and efficient approximation algorithm with performance ratio of at most 2, improving previous best bounds for either weighted or unweighted cases of the problem. Any further improvement on this bound, matching the best constant factor known for the vertex cover problem, is deemed challenging. The approximation principle, underlying the algorithm, is based on a generalized form of the classical local ratio theorem, originally developed for approximation of the vertex cover problem, and a more flexible style of its application.",
"title": ""
},
{
"docid": "e2de8284e14cb3abbd6e3fbcfb5bc091",
"text": "In this paper, novel 2 one-dimensional (1D) Haar-like filtering techniques are proposed as a new and low calculation cost feature extraction method suitable for 3D acceleration signals based human activity recognition. Proposed filtering method is a simple difference filter with variable filter parameters. Our method holds a strong adaptability to various classification problems which no previously studied features (mean, standard deviation, etc.) possessed. In our experiment on human activity recognition, the proposed method achieved both the highest recognition accuracy of 93.91% while reducing calculation cost to 21.22% compared to previous method.",
"title": ""
},
{
"docid": "ac5e7e88d965aa695b8ae169edce2426",
"text": "Randomness test suites constitute an essential component within the process of assessing random number generators in view of determining their suitability for a specific application. Evaluating the randomness quality of random numbers sequences produced by a given generator is not an easy task considering that no finite set of statistical tests can assure perfect randomness, instead each test attempts to rule out sequences that show deviation from perfect randomness by means of certain statistical properties. This is the reason why several batteries of statistical tests are applied to increase the confidence in the selected generator. Therefore, in the present context of constantly increasing volumes of random data that need to be tested, special importance has to be given to the performance of the statistical test suites. Our work enrolls in this direction and this paper presents the results on improving the well known NIST Statistical Test Suite (STS) by introducing parallelism and a paradigm shift towards byte processing delivering a design that is more suitable for today's multicore architectures. Experimental results show a very significant speedup of up to 103 times compared to the original version.",
"title": ""
}
] |
scidocsrr
|
dccdf5cb70bfa68ed24161044a913941
|
Automatic Keyphrase Extraction via Topic Decomposition
|
[
{
"docid": "1714f89263c0c455d3c8ae1a358de9ee",
"text": "In this paper, we introduce and compare between two novel approaches, supervised and unsupervised, for identifying the keywords to be used in extractive summarization of text documents. Both our approaches are based on the graph-based syntactic representation of text and web documents, which enhances the traditional vector-space model by taking into account some structural document features. In the supervised approach, we train classification algorithms on a summarized collection of documents with the purpose of inducing a keyword identification model. In the unsupervised approach, we run the HITS algorithm on document graphs under the assumption that the top-ranked nodes should represent the document keywords. Our experiments on a collection of benchmark summaries show that given a set of summarized training documents, the supervised classification provides the highest keyword identification accuracy, while the highest F-measure is reached with a simple degree-based ranking. In addition, it is sufficient to perform only the first iteration of HITS rather than running it to its convergence.",
"title": ""
},
{
"docid": "1af7a41e5cac72ed9245b435c463b366",
"text": "We present a novel method for key term extraction from text documents. In our method, document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. We introduce a criterion function to select groups that contain key terms discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia.\n Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages.\n Evaluations of the method show that it outperforms existing methods producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods.",
"title": ""
}
] |
[
{
"docid": "5e435e0bd1ebdd1f86b57e40fc047366",
"text": "Deep clustering is a recently introduced deep learning architecture that uses discriminatively trained embeddings as the basis for clustering. It was recently applied to spectrogram segmentation, resulting in impressive results on speaker-independent multi-speaker separation. In this paper we extend the baseline system with an end-to-end signal approximation objective that greatly improves performance on a challenging speech separation. We first significantly improve upon the baseline system performance by incorporating better regularization, larger temporal context, and a deeper architecture, culminating in an overall improvement in signal to distortion ratio (SDR) of 10.3 dB compared to the baseline of 6.0 dB for two-speaker separation, as well as a 7.1 dB SDR improvement for three-speaker separation. We then extend the model to incorporate an enhancement layer to refine the signal estimates, and perform end-to-end training through both the clustering and enhancement stages to maximize signal fidelity. We evaluate the results using automatic speech recognition. The new signal approximation objective, combined with end-to-end training, produces unprecedented performance, reducing the word error rate (WER) from 89.1% down to 30.8%. This represents a major advancement towards solving the cocktail party problem.",
"title": ""
},
{
"docid": "6d096dc86d240370bef7cc4e4cdd12e5",
"text": "Modern software systems are subject to uncertainties, such as dynamics in the availability of resources or changes of system goals. Self-adaptation enables a system to reason about runtime models to adapt itself and realises its goals under uncertainties. Our focus is on providing guarantees for adaption goals. A prominent approach to provide such guarantees is automated verification of a stochastic model that encodes up-to-date knowledge of the system and relevant qualities. The verification results allow selecting an adaption option that satisfies the goals. There are two issues with this state of the art approach: i) changing goals at runtime (a challenging type of uncertainty) is difficult, and ii) exhaustive verification suffers from the state space explosion problem. In this paper, we propose a novel modular approach for decision making in self-adaptive systems that combines distinct models for each relevant quality with runtime simulation of the models. Distinct models support on the fly changes of goals. Simulation enables efficient decision making to select an adaptation option that satisfies the system goals. The tradeoff is that simulation results can only provide guarantees with a certain level of accuracy. We demonstrate the benefits and tradeoffs of the approach for a service-based telecare system.",
"title": ""
},
{
"docid": "5f528e90763ef96cd812f2b9c2c42de6",
"text": "Many blind motion deblur methods model the motion blur as a spatially invariant convolution process. However, motion blur caused by the camera movement in 3D space during shutter time often leads to spatially varying blurring effect over the image. In this paper, we proposed an efficient two-stage approach to remove spatially-varying motion blurring from a single photo. There are three main components in our approach: (i) a minimization method of estimating region-wise blur kernels by using both image information and correlations among neighboring kernels, (ii) an interpolation scheme of constructing pixel-wise blur matrix from region-wise blur kernels, and (iii) a non-blind deblurring method robust to kernel errors. The experiments showed that the proposed method outperformed the existing software based approaches on tested real images.",
"title": ""
},
{
"docid": "5f2c53865316c1eb47fc734f53e10b00",
"text": "In recent years we have witnessed a proliferation of data structure and algorithm proposals for efficient deep packet inspection on memory based architectures. In parallel, we have observed an increasing interest in network processors as target architectures for high performance networking applications.\n In this paper we explore design alternatives in the implementation of regular expression matching architectures on network processors (NPs) and general purpose processors (GPPs). Specifically, we present a performance evaluation on an Intel IXP2800 NP, on an Intel Xeon GPP and on a multiprocessor system consisting of four AMD Opteron 850 cores. Our study shows how to exploit the Intel IXP2800 architectural features in order to maximize system throughput, identifies and evaluates algorithmic and architectural trade-offs and limitations, and highlights how the presence of caches affects the overall performances. We provide an implementation of our NP designs within the Open Network Laboratory (http://www.onl.wustl.edu).",
"title": ""
},
{
"docid": "869e01855c8cfb9dc3e64f7f3e73cd60",
"text": "Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.",
"title": ""
},
{
"docid": "7c525afc11c41e0a8ca6e8c48bdec97c",
"text": "AT commands, originally designed in the early 80s for controlling modems, are still in use in most modern smartphones to support telephony functions. The role of AT commands in these devices has vastly expanded through vendor-specific customizations, yet the extent of their functionality is unclear and poorly documented. In this paper, we systematically retrieve and extract 3,500 AT commands from over 2,000 Android smartphone firmware images across 11 vendors. We methodically test our corpus of AT commands against eight Android devices from four different vendors through their USB interface and characterize the powerful functionality exposed, including the ability to rewrite device firmware, bypass Android security mechanisms, exfiltrate sensitive device information, perform screen unlocks, and inject touch events solely through the use of AT commands. We demonstrate that the AT command interface contains an alarming amount of unconstrained functionality and represents a broad attack surface on Android devices.",
"title": ""
},
{
"docid": "3e850a45249f45e95d1a7413e7b142f1",
"text": "In our increasingly “data-abundant” society, remote sensing big data perform massive, high dimension and heterogeneity features, which could result in “dimension disaster” to various extent. It is worth mentioning that the past two decades have witnessed a number of dimensional reductions to weak the spatiotemporal redundancy and simplify the calculation in remote sensing information extraction, such as the linear learning methods or the manifold learning methods. However, the “crowding” and mixing when reducing dimensions of remote sensing categories could degrade the performance of existing techniques. Then in this paper, by analyzing probability distribution of pairwise distances among remote sensing datapoints, we use the 2-mixed Gaussian model(GMM) to improve the effectiveness of the theory of t-Distributed Stochastic Neighbor Embedding (t-SNE). A basic reducing dimensional model is given to test our proposed methods. The experiments show that the new probability distribution capable retains the local structure and significantly reveals differences between categories in a global structure.",
"title": ""
},
{
"docid": "1a59bf4467e73a6cae050e5670dbf4fa",
"text": "BACKGROUND\nNivolumab combined with ipilimumab resulted in longer progression-free survival and a higher objective response rate than ipilimumab alone in a phase 3 trial involving patients with advanced melanoma. We now report 3-year overall survival outcomes in this trial.\n\n\nMETHODS\nWe randomly assigned, in a 1:1:1 ratio, patients with previously untreated advanced melanoma to receive nivolumab at a dose of 1 mg per kilogram of body weight plus ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses, followed by nivolumab at a dose of 3 mg per kilogram every 2 weeks; nivolumab at a dose of 3 mg per kilogram every 2 weeks plus placebo; or ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses plus placebo, until progression, the occurrence of unacceptable toxic effects, or withdrawal of consent. Randomization was stratified according to programmed death ligand 1 (PD-L1) status, BRAF mutation status, and metastasis stage. The two primary end points were progression-free survival and overall survival in the nivolumab-plus-ipilimumab group and in the nivolumab group versus the ipilimumab group.\n\n\nRESULTS\nAt a minimum follow-up of 36 months, the median overall survival had not been reached in the nivolumab-plus-ipilimumab group and was 37.6 months in the nivolumab group, as compared with 19.9 months in the ipilimumab group (hazard ratio for death with nivolumab plus ipilimumab vs. ipilimumab, 0.55 [P<0.001]; hazard ratio for death with nivolumab vs. ipilimumab, 0.65 [P<0.001]). The overall survival rate at 3 years was 58% in the nivolumab-plus-ipilimumab group and 52% in the nivolumab group, as compared with 34% in the ipilimumab group. The safety profile was unchanged from the initial report. Treatment-related adverse events of grade 3 or 4 occurred in 59% of the patients in the nivolumab-plus-ipilimumab group, in 21% of those in the nivolumab group, and in 28% of those in the ipilimumab group.\n\n\nCONCLUSIONS\nAmong patients with advanced melanoma, significantly longer overall survival occurred with combination therapy with nivolumab plus ipilimumab or with nivolumab alone than with ipilimumab alone. (Funded by Bristol-Myers Squibb and others; CheckMate 067 ClinicalTrials.gov number, NCT01844505 .).",
"title": ""
},
{
"docid": "ecabfcbb40fc59f1d1daa02502164b12",
"text": "We present a generalized line histogram technique to compute global rib-orientation for detecting rotated lungs in chest radiographs. We use linear structuring elements, such as line seed filters, as kernels to convolve with edge images, and extract a set of lines from the posterior rib-cage. After convolving kernels in all possible orientations in the range [0, π], we measure the angle for which the line histogram has maximum magnitude. This measure provides a good approximation of the global chest rib-orientation for each lung. A chest radiograph is said to be upright if the difference between the orientation angles of both lungs with respect to the horizontal axis, is negligible. We validate our method on sets of normal and abnormal images and argue that rib orientation can be used for rotation detection in chest radiographs as aid in quality control during image acquisition, and to discard images from training and testing data sets. In our test, we achieve a maximum accuracy of 90%.",
"title": ""
},
{
"docid": "ab57df7702fa8589f7d462c80d9a2598",
"text": "The Internet of Things (IoT) allows machines and devices in the world to connect with each other and generate a huge amount of data, which has a great potential to provide useful knowledge across service domains. Combining the context of IoT with semantic technologies, we can build integrated semantic systems to support semantic interoperability. In this paper, we propose an integrated semantic service platform (ISSP) to support ontological models in various IoT-based service domains of a smart city. In particular, we address three main problems for providing integrated semantic services together with IoT systems: semantic discovery, dynamic semantic representation, and semantic data repository for IoT resources. To show the feasibility of the ISSP, we develop a prototype service for a smart office using the ISSP, which can provide a preset, personalized office environment by interpreting user text input via a smartphone. We also discuss a scenario to show how the ISSP-based method would help build a smart city, where services in each service domain can discover and exploit IoT resources that are wanted across domains. We expect that our method could eventually contribute to providing people in a smart city with more integrated, comprehensive services based on semantic interoperability.",
"title": ""
},
{
"docid": "dd270ffa800d633a7a354180eb3d426c",
"text": "I have taken an experimental approach to this question. Freely voluntary acts are pre ceded by a specific electrical change in the brain (the ‘readiness potential’, RP) that begins 550 ms before the act. Human subjects became aware of intention to act 350–400 ms after RP starts, but 200 ms. before the motor act. The volitional process is therefore initiated unconsciously. But the conscious function could still control the outcome; it can veto the act. Free will is therefore not excluded. These findings put constraints on views of how free will may operate; it would not initiate a voluntary act but it could control performance of the act. The findings also affect views of guilt and responsibility. But the deeper question still remains: Are freely voluntary acts subject to macro deterministic laws or can they appear without such constraints, non-determined by natural laws and ‘truly free’? I shall present an experimentalist view about these fundamental philosophical opposites.",
"title": ""
},
{
"docid": "082630a33c0cc0de0e60a549fc57d8e8",
"text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.",
"title": ""
},
{
"docid": "afac9140d183eac56785b26069953342",
"text": "Big Data means extremely huge large data sets that can be analyzed to find patterns, trends. One technique that can be used for data analysis so that able to help us find abstract patterns in Big Data is Deep Learning. If we apply Deep Learning to Big Data, we can find unknown and useful patterns that were impossible so far. With the help of Deep Learning, AI is getting smart. There is a hypothesis in this regard, the more data, the more abstract knowledge. So a handy survey of Big Data, Deep Learning and its application in Big Data is necessary. In this paper, we provide a comprehensive survey on what is Big Data, comparing methods, its research problems, and trends. Then a survey of Deep Learning, its methods, comparison of frameworks, and algorithms is presented. And at last, application of Deep Learning in Big Data, its challenges, open research problems and future trends are presented.",
"title": ""
},
{
"docid": "cc93f5a421ad0e5510d027b01582e5ae",
"text": "This paper assesses the impact of financial reforms in Zimbabwe on savings and credit availability to small and medium scale enterprises (SMEs) and the poor. We established that the reforms improved domestic savings mobilization due to high deposit rates, the emergence of new financial institutions and products and the general increase in real incomes after the 1990 economic reforms. The study uncovered that inflation and real income were the major determinants of savings during the sample period. High lending rates and the use of conventional lending methodologies by banks restricted access to credit by the SMEs and the poor. JEL Classification Numbers: E21, O16.",
"title": ""
},
{
"docid": "c526e32c9c8b62877cb86bc5b097e2cf",
"text": "This paper proposes a new field of user interfaces called multi-computer direct manipulation and presents a penbased direct manipulation technique that can be used for data transfer between different computers as well as within the same computer. The proposed Pick-andDrop allows a user to pick up an object on a display and drop it on another display as if he/she were manipulating a physical object. Even though the pen itself does not have storage capabilities, a combination of Pen-ID and the pen manager on the network provides the illusion that the pen can physically pick up and move a computer object. Based on this concept, we have built several experimental applications using palm-sized, desk-top, and wall-sized pen computers. We also considered the importance of physical artifacts in designing user interfaces in a future computing environment.",
"title": ""
},
{
"docid": "129a85f7e611459cf98dc7635b44fc56",
"text": "Pain in the oral and craniofacial system represents a major medical and social problem. Indeed, a U.S. Surgeon General’s report on orofacial health concludes that, ‘‘. . .oral health means much more than healthy teeth. It means being free of chronic oral-facial pain conditions. . .’’ [172]. Community-based surveys indicate that many subjects commonly report pain in the orofacial region, with estimates of >39 million, or 22% of Americans older than 18 years of age, in the United States alone [108]. Other population-based surveys conducted in the United Kingdom [111,112], Germany [91], or regional pain care centers in the United States [54] report similar occurrence rates [135]. Importantly, chronic widespread body pain, patient sex and age, and psychosocial factors appear to serve as risk factors for chronic orofacial pain [1,2,92,99,138]. In addition to its high degree of prevalence, the reported intensities of various orofacial pain conditions are similar to that observed with many spinal pain disorders (Fig. 1). Moreover, orofacial pain is derived from many unique target tissues, such as the meninges, cornea, tooth pulp, oral/ nasal mucosa, and temporomandibular joint (Fig. 2), and thus has several unique physiologic characteristics compared with the spinal nociceptive system [23]. Given these considerations, it is not surprising that accurate diagnosis and effective management of orofacial pain conditions represents a significant health care problem. Publications in the field of orofacial pain demonstrate a steady increase over the last several decades (Fig. 3). This is a complex literature; a recent bibliometric analysis of orofacial pain articles published in 2004–2005 indicated that 975 articles on orofacial pain were published in 275 journals from authors representing 54 countries [142]. Thus, orofacial pain disorders represent a complex constellation of conditions with an equally diverse literature base. Accordingly, this review will focus on a summary of major research foci on orofacial pain without attempting to provide a comprehensive review of the entire literature.",
"title": ""
},
{
"docid": "0b25f2989e18d04a9262b4a6fe107c9f",
"text": "Delay Tolerant Networking has been a hot topic of interest in networking since the start of the century, and has sparked a significant amount of research in the area, particularly in an age where the ultimate goal is to provide ubiquitous connectivity, even in regions previously considered inaccessible. Protocols and applications in popular use on the Internet are not readily applicable to such networks, that are characterized by long delays and inconsistent connectivity. In this paper, we summarize the wealth of literature in this field in the form of a concise, but comprehensive tutorial. The paper is designed to bring researchers new to the field with a general picture of the state of the art in this area, and motivate them to begin exploring problems in the field quickly.",
"title": ""
},
{
"docid": "3d390bed1ca485abd79073add7e781ba",
"text": "Predicting the future to anticipate the outcome of events and actions is a critical attribute of autonomous agents; particularly for agents which must rely heavily on real time visual data for decision making. Working towards this capability, we address the task of predicting future frame segmentation from a stream of monocular video by leveraging the 3D structure of the scene. Our framework is based on learnable sub-modules capable of predicting pixel-wise scene semantic labels, depth, and camera ego-motion of adjacent frames. We further propose a recurrent neural network based model capable of predicting future ego-motion trajectory as a function of a series of past ego-motion steps. Ultimately, we observe that leveraging 3D structure in the model facilitates successful prediction, achieving state of the art accuracy in future semantic segmentation.",
"title": ""
},
{
"docid": "adeebdc680819ca992f9d53e4866122a",
"text": "Large numbers of black kites (Milvus migrans govinda) forage with house crows (Corvus splendens) at garbage dumps in many Indian cities. Such aggregation of many individuals results in aggressiveness where adoption of a suitable behavioral approach is crucial. We studied foraging behavior of black kites in dumping sites adjoining two major corporation markets of Kolkata, India. Black kites used four different foraging tactics which varied and significantly influenced foraging attempts and their success rates. Kleptoparasitism was significantly higher than autonomous foraging events; interspecific kleptoparasitism was highest in occurrence with a low success rate, while ‘autonomous-ground’ was least adopted but had the highest success rate.",
"title": ""
},
{
"docid": "427970a79aa36ec6b1c9db08d093c6d0",
"text": "Recommendation system provides the facility to understand a person's taste and find new, desirable content for them automatically based on the pattern between their likes and rating of different items. In this paper, we have proposed a recommendation system for the large amount of data available on the web in the form of ratings, reviews, opinions, complaints, remarks, feedback, and comments about any item (product, event, individual and services) using Hadoop Framework. We have implemented Mahout Interfaces for analyzing the data provided by review and rating site for movies.",
"title": ""
}
] |
scidocsrr
|
8a519abac6a583ebb89fc1ac8d42a377
|
Midface: Clinical Anatomy and Regional Approaches with Injectable Fillers.
|
[
{
"docid": "f2e13ac41fc61bfc1b8e9c7171608518",
"text": "BACKGROUND\nThe exact anatomical cause of the tear trough remains undefined. This study was performed to identify the anatomical basis for the tear trough deformity.\n\n\nMETHODS\nForty-eight cadaveric hemifaces were dissected. With the skin over the midcheek intact, the tear trough area was approached through the preseptal space above and prezygomatic space below. The origins of the palpebral and orbital parts of the orbicularis oculi (which sandwich the ligament) were released meticulously from the maxilla, and the tear trough ligament was isolated intact and in continuity with the orbicularis retaining ligament. The ligaments were submitted for histologic analysis.\n\n\nRESULTS\nA true osteocutaneous ligament called the tear trough ligament was consistently found on the maxilla, between the palpebral and orbital parts of the orbicularis oculi, cephalad and caudal to the ligament, respectively. It commences medially, at the level of the insertion of the medial canthal tendon, just inferior to the anterior lacrimal crest, to approximately the medial-pupil line, where it continues laterally as the bilayered orbicularis retaining ligament. Histologic evaluation confirmed the ligamentous nature of the tear trough ligament, with features identical to those of the zygomatic ligament.\n\n\nCONCLUSIONS\nThis study clearly demonstrated that the prominence of the tear trough has its anatomical origin in the tear trough ligament. This ligament has not been isolated previously using standard dissection, but using the approach described, the tear trough ligament is clearly seen. The description of this ligament sheds new light on considerations when designing procedures to address the tear trough and the midcheek.",
"title": ""
}
] |
[
{
"docid": "098b9b80d27fddd6407ada74a8fd4590",
"text": "We have developed a 1.55-μm 40 Gbps electro-absorption modulator laser (EML)-based transmitter optical subassembly (TOSA) using a novel flexible printed circuit (FPC). The return loss at the junctions of the printed circuit board and the FPC, and of the FPC and the ceramic feedthrough connection was held better than 20 dB at up to 40 GHz by a newly developed three-layer FPC. The TOSA was fabricated and demonstrated a mask margin of >16% and a path penalty of <;0.63 dB for a 43 Gbps signal after 2.4-km SMF transmission over the entire case temperature range from -5° to 80 °C, demonstrating compliance with ITU-T G.693. These results are comparable to coaxial connector type EML modules. This TOSA is expected to be a strong candidate for 40 Gbps EML modules with excellent operating characteristics, economy, and a small footprint.",
"title": ""
},
{
"docid": "93efc06a282a12fb65038381cf390e19",
"text": "Linked Open Data (LOD) comprises an unprecedented volume of structured data on the Web. However, these datasets are of varying quality ranging from extensively curated datasets to crowdsourced or extracted data of often relatively low quality. We present a methodology for test-driven quality assessment of Linked Data, which is inspired by test-driven software development. We argue that vocabularies, ontologies and knowledge bases should be accompanied by a number of test cases, which help to ensure a basic level of quality. We present a methodology for assessing the quality of linked data resources, based on a formalization of bad smells and data quality problems. Our formalization employs SPARQL query templates, which are instantiated into concrete quality test case queries. Based on an extensive survey, we compile a comprehensive library of data quality test case patterns. We perform automatic test case instantiation based on schema constraints or semi-automatically enriched schemata and allow the user to generate specific test case instantiations that are applicable to a schema or dataset. We provide an extensive evaluation of five LOD datasets, manual test case instantiation for five schemas and automatic test case instantiations for all available schemata registered with Linked Open Vocabularies (LOV). One of the main advantages of our approach is that domain specific semantics can be encoded in the data quality test cases, thus being able to discover data quality problems beyond conventional quality heuristics.",
"title": ""
},
{
"docid": "da02328df767c4046a352e999914bc20",
"text": "We propose a multi-input multi-output fully convolutional neural network model for MRI synthesis. The model is robust to missing data, as it benefits from, but does not require, additional input modalities. The model is trained end-to-end, and learns to embed all input modalities into a shared modality-invariant latent space. These latent representations are then combined into a single fused representation, which is transformed into the target output modality with a learnt decoder. We avoid the need for curriculum learning by exploiting the fact that the various input modalities are highly correlated. We also show that by incorporating information from segmentation masks the model can both decrease its error and generate data with synthetic lesions. We evaluate our model on the ISLES and BRATS data sets and demonstrate statistically significant improvements over state-of-the-art methods for single input tasks. This improvement increases further when multiple input modalities are used, demonstrating the benefits of learning a common latent space, again resulting in a statistically significant improvement over the current best method. Finally, we demonstrate our approach on non skull-stripped brain images, producing a statistically significant improvement over the previous best method. Code is made publicly available at https://github.com/agis85/multimodal_brain_synthesis.",
"title": ""
},
{
"docid": "96b1688b19bf71e8f1981d9abe52fc2c",
"text": "Biological processes are complex phenomena involving a series of events that are related to one another through various relationships. Systems that can understand and reason over biological processes would dramatically improve the performance of semantic applications involving inference such as question answering (QA) – specifically “How?” and “Why?” questions. In this paper, we present the task of process extraction, in which events within a process and the relations between the events are automatically extracted from text. We represent processes by graphs whose edges describe a set of temporal, causal and co-reference event-event relations, and characterize the structural properties of these graphs (e.g., the graphs are connected). Then, we present a method for extracting relations between the events, which exploits these structural properties by performing joint inference over the set of extracted relations. On a novel dataset containing 148 descriptions of biological processes (released with this paper), we show significant improvement comparing to baselines that disregard process structure.",
"title": ""
},
{
"docid": "3a92c6ba669ae5002979e4347434a120",
"text": "This paper highlights a 14nm Analog and RF technology based on a logic FinFET platform for the first time. An optimized RF device layout shows excellent Ft/Fmax of (314GHz/180GHz) and (285GHz/140GHz) for NFET and PFET respectively. A higher PFET RF performance compared to 28nm technology is due to a source/drain stressor mobility improvement. A benefit of better FinFET channel electrostatics can be seen in the self-gain (Gm/Gds), which shows a significant increase to 40 and 34 for NFET and PFET respectively. Superior 1/f noise of 17/35 f(V∗μm)2/Hz @ 1KHz for N/PFET respectively is also achieved. To extend further low voltage operation and power saving, ultra-low Vt devices are also developed. Furthermore, a deep N-well (triple well) process is introduced to improve the ultra-low signal immunity from substrate noise, while offering useful devices like VNPN and high breakdown voltage deep N-well diodes. A superior Ft/Fmax, high self-gain, low 1/f noise and substrate isolation characteristics truly extend the capability of the 14nm FinFETs for analog and RF applications.",
"title": ""
},
{
"docid": "072a203514eb53db7aa9aaa55c6745d8",
"text": "The possibility to estimate accurately the subsurface electric properties from ground-penetrating radar (GPR) signals using inverse modeling is obstructed by the appropriateness of the forward model describing the GPR subsurface system. In this paper, we improved the recently developed approach of Lambot et al. whose success relies on a stepped-frequency continuous-wave (SFCW) radar combined with an off-ground monostatic transverse electromagnetic horn antenna. This radar configuration enables realistic and efficient forward modeling. We included in the initial model: 1) the multiple reflections occurring between the antenna and the soil surface using a positive feedback loop in the antenna block diagram and 2) the frequency dependence of the electric properties using a local linear approximation of the Debye model. The model was validated in laboratory conditions on a tank filled with a two-layered sand subject to different water contents. Results showed remarkable agreement between the measured and modeled Green's functions. Model inversion for the dielectric permittivity further demonstrated the accuracy of the method. Inversion for the electric conductivity led to less satisfactory results. However, a sensitivity analysis demonstrated the good stability properties of the inverse solution and put forward the necessity to reduce the remaining clutter by a factor 10. This may partly be achieved through a better characterization of the antenna transfer functions and by performing measurements in an environment without close extraneous scatterers.",
"title": ""
},
{
"docid": "04c029380ae73b75388ab02f901fda7d",
"text": "We present a novel method to solve image analogy problems [3]: it allows to learn the relation between paired images present in training data, and then generalize and generate images that correspond to the relation, but were never seen in the training set. Therefore, we call the method Conditional Analogy Generative Adversarial Network (CAGAN), as it is based on adversarial training and employs deep convolutional neural networks. An especially interesting application of that technique is automatic swapping of clothing on fashion model photos. Our work has the following contributions. First, the definition of the end-to-end trainable CAGAN architecture, which implicitly learns segmentation masks without expensive supervised labeling data. Second, experimental results show plausible segmentation masks and often convincing swapped images, given the target article. Finally, we discuss the next steps for that technique: neural network architecture improvements and more advanced applications.",
"title": ""
},
{
"docid": "35c18e570a6ab44090c1997e7fe9f1b4",
"text": "Online information maintenance through cloud applications allows users to store, manage, control and share their information with other users as well as Cloud service providers. There have been serious privacy concerns about outsourcing user information to cloud servers. But also due to an increasing number of cloud data security incidents happened in recent years. Proposed system is a privacy-preserving system using Attribute based Multifactor Authentication. Proposed system provides privacy to users data with efficient authentication and store them on cloud servers such that servers do not have access to sensitive user information. Meanwhile users can maintain full control over access to their uploaded ?les and data, by assigning ?ne-grained, attribute-based access privileges to selected files and data, while di?erent users can have access to di?erent parts of the System. This application allows clients to set privileges to different users to access their data.",
"title": ""
},
{
"docid": "ba755cab267998a3ea813c0f46c8c99c",
"text": "In this paper, we developed a deep neural network (DNN) that learns to solve simultaneously the three tasks of the cQA challenge proposed by the SemEval-2016 Task 3, i.e., question-comment similarity, question-question similarity and new question-comment similarity. The latter is the main task, which can exploit the previous two for achieving better results. Our DNN is trained jointly on all the three cQA tasks and learns to encode questions and comments into a single vector representation shared across the multiple tasks. The results on the official challenge test set show that our approach produces higher accuracy and faster convergence rates than the individual neural networks. Additionally, our method, which does not use any manual feature engineering, approaches the state of the art established with methods that make heavy use of it.",
"title": ""
},
{
"docid": "754fb355da63d024e3464b4656ea5e8d",
"text": "Improvements in implant designs have helped advance successful immediate anterior implant placement into fresh extraction sockets. Clinical techniques described in this case enable practitioners to achieve predictable esthetic success using a method that limits the amount of buccal contour change of the extraction site ridge and potentially enhances the thickness of the peri-implant soft tissues coronal to the implant-abutment interface. This approach involves atraumatic tooth removal without flap elevation, and placing a bone graft into the residual gap around an immediate fresh-socket anterior implant with a screw-retained provisional restoration acting as a prosthetic socket seal device.",
"title": ""
},
{
"docid": "c4144108562408238992d7529cf77ad7",
"text": "This article presents an approach to context-aware services for “smart buildings” based on Web and Semantic Web techniques. The services striven for are first described, then their realization using Web and Semantic Web services is explained. Finally, advantages of the approach are stressed.",
"title": ""
},
{
"docid": "3f157067ce2d5d6b6b4c9d9faaca267b",
"text": "The rise of network forms of organization is a key consequence of the ongoing information revolution. Business organizations are being newly energized by networking, and many professional militaries are experimenting with flatter forms of organization. In this chapter, we explore the impact of networks on terrorist capabilities, and consider how this development may be associated with a move away from emphasis on traditional, episodic efforts at coercion to a new view of terror as a form of protracted warfare. Seen in this light, the recent bombings of U.S. embassies in East Africa, along with the retaliatory American missile strikes, may prove to be the opening shots of a war between a leading state and a terror network. We consider both the likely context and the conduct of such a war, and offer some insights that might inform policies aimed at defending against and countering terrorism.",
"title": ""
},
{
"docid": "cd2fcc3e8ba9fce3db77c4f1e04ad287",
"text": "Technological advances are being made to assist humans in performing ordinary tasks in everyday settings. A key issue is the interaction with objects of varying size, shape, and degree of mobility. Autonomous assistive robots must be provided with the ability to process visual data in real time so that they can react adequately for quickly adapting to changes in the environment. Reliable object detection and recognition is usually a necessary early step to achieve this goal. In spite of significant research achievements, this issue still remains a challenge when real-life scenarios are considered. In this article, we present a vision system for assistive robots that is able to detect and recognize objects from a visual input in ordinary environments in real time. The system computes color, motion, and shape cues, combining them in a probabilistic manner to accurately achieve object detection and recognition, taking some inspiration from vision science. In addition, with the purpose of processing the input visual data in real time, a graphical processing unit (GPU) has been employed. The presented approach has been implemented and evaluated on a humanoid robot torso located at realistic scenarios. For further experimental validation, a public image repository for object recognition has been used, allowing a quantitative comparison with respect to other state-of-the-art techniques when realworld scenes are considered. Finally, a temporal analysis of the performance is provided with respect to image resolution and the number of target objects in the scene.",
"title": ""
},
{
"docid": "fd256fe226d32fab1fca93be1d08ed32",
"text": "Data security in the cloud is a big concern that blocks the widespread use of the cloud for relational data management. First, to ensure data security, data confidentiality needs to be provided when data resides in storage as well as when data is dynamically accessed by queries. Prior works on query processing on encrypted data did not provide data confidentiality guarantees in both aspects. Tradeoff between secrecy and efficiency needs to be made when satisfying both aspects of data confidentiality while being suitable for practical use. Second, to support common relational data management functions, various types of queries such as exact queries, range queries, data updates, insertion and deletion should be supported. To address these issues, this paper proposes a comprehensive framework for secure and efficient query processing of relational data in the cloud. Our framework ensures data confidentiality using a salted IDA encoding scheme and column-access-via-proxy query processing primitives, and ensures query efficiency using matrix column accesses and a secure B+-tree index. In addition, our framework provides data availability and integrity. We establish the security of our proposal by a detailed security analysis and demonstrate the query efficiency of our proposal through an experimental evaluation.",
"title": ""
},
{
"docid": "595cb7698c38b9f5b189ded9d270fe69",
"text": "Sentiment Analysis can help to extract knowledge related to opinions and emotions from user generated text information. It can be applied in medical field for patients monitoring purposes. With the availability of large datasets, deep learning algorithms have become a state of the art also for sentiment analysis. However, deep models have the drawback of not being non human-interpretable, raising various problems related to model’s interpretability. Very few work have been proposed to build models that explain their decision making process and actions. In this work, we review the current sentiment analysis approaches and existing explainable systems. Moreover, we present a critical review of explainable sentiment analysis models and discussed the insight of applying explainable sentiment analysis in the medical field.",
"title": ""
},
{
"docid": "b3f5d9335cccf62797c86b76fa2c9e7e",
"text": "For most families with elderly relatives, care within their own home is by far the most preferred option both for the elderly and their carers. However, frequently these carers are the partners of the person with long-term care needs, and themselves are elderly and in need of support to cope with the burdens and stress associated with these duties. When it becomes too much for them, they may have to rely on professional care services, or even use residential care for a respite. In order to support the carers as well as the elderly person, an ambient assisted living platform has been developed. The system records information about the activities of daily living using unobtrusive sensors within the home, and allows the carers to record their own wellbeing state. By providing facilities to schedule and monitor the activities of daily care, and providing orientation and advice to improve the care given and their own wellbeing, the system helps to reduce the burden on the informal carers. Received on 30 August 2016; accepted on 03 February 2017; published on 21 March 2017",
"title": ""
},
{
"docid": "5551eb3819e33d5eeaadf3ebb636d961",
"text": "There are many problems in security of Internet of Things (IOT) crying out for solutions, such as RFID tag security, wireless security, network transmission security, privacy protection, information processing security. This article is based on the existing researches of network security technology. And it provides a new approach for researchers in certain IOT application and design, through analyzing and summarizing the security of ITO from various angles.",
"title": ""
},
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
},
{
"docid": "4ecacdd24a9e615f21535a026d9c2ab5",
"text": "Text superimposed on the video frames provides supplemental but important information for video indexing and retrieval. Many efforts have been made for videotext detection and recognition (Video OCR). The main difficulties of video OCR are the low resolution and the background complexity. In this paper, we present efficient schemes to deal with the second difficulty by sufficiently utilizing multiple frames that contain the same text to get every clear word from these frames. Firstly, we use multiple frame verification to reduce text detection false alarms. And then choose those frames where the text is most likely clear, thus it is more possible to be correctly recognized. We then detect and joint every clear text block from those frames to form a clearer “man-made” frame. Later we apply a block-based adaptive thresholding procedure on these “man-made” frames. Finally, the binarized frames are sent to OCR engine for recognition. Experiments show that the word recognition rate has been increased over 28% by these methods.",
"title": ""
},
{
"docid": "49cafb7a5a42b7a8f8260a398c390504",
"text": "With the availability of vast collection of research articles on internet, textual analysis is an increasingly important technique in scientometric analysis. While the context in which it is used and the specific algorithms implemented may vary, typically any textual analysis exercise involves intensive pre-processing of input text which includes removing topically uninteresting terms (stop words). In this paper we argue that corpus specific stop words, which take into account the specificities of a collection of texts, improve textual analysis in scientometrics. We describe two relatively simple techniques to generate corpus-specific stop words; stop words lists following a Poisson distribution and keyword adjacency stop words lists. In a case study to extract keywords from scientific abstracts of research project funded by the European Research Council in the domain of Life sciences, we show that a combination of those techniques gives better recall values than standard stop words or any of the two techniques alone. The method we propose can be implemented to obtain stop words lists in an automatic way by using author provided keywords for a set of abstracts. The stop words lists generated can be updated easily by adding new texts to the training corpus. Conference Topic Methods and techniques",
"title": ""
}
] |
scidocsrr
|
05a1475b53fac94da31c5be1309f4285
|
Machine Learning for Dialog State Tracking: A Review
|
[
{
"docid": "771b1e44b26f749f6ecd9fe515159d9c",
"text": "In spoken dialog systems, dialog state tracking refers to the task of correctly inferring the user's goal at a given turn, given all of the dialog history up to that turn. This task is challenging because of speech recognition and language understanding errors, yet good dialog state tracking is crucial to the performance of spoken dialog systems. This paper presents results from the third Dialog State Tracking Challenge, a research community challenge task based on a corpus of annotated logs of human-computer dialogs, with a blind test set evaluation. The main new feature of this challenge is that it studied the ability of trackers to generalize to new entities - i.e. new slots and values not present in the training data. This challenge received 28 entries from 7 research teams. About half the teams substantially exceeded the performance of a competitive rule-based baseline, illustrating not only the merits of statistical methods for dialog state tracking but also the difficulty of the problem.",
"title": ""
},
{
"docid": "1738a8ccb1860e5b85e2364f437d4058",
"text": "We describe a new algorithm for finding the hypothesis in a recognition lattice that is expected to minimize the word er ror rate (WER). Our approach thus overcomes the mismatch between the word-based performance metric and the standard MAP scoring paradigm that is sentence-based, and that can le ad to sub-optimal recognition results. To this end we first find a complete alignment of all words in the recognition lattice, identifying mutually supporting and competing word hypotheses . Finally, a new sentence hypothesis is formed by concatenating the words with maximal posterior probabilities. Experimental ly, this approach leads to a significant WER reduction in a large vocab ulary recognition task.",
"title": ""
},
{
"docid": "3d22f5be70237ae0ee1a0a1b52330bfa",
"text": "Tracking the user's intention throughout the course of a dialog, called dialog state tracking, is an important component of any dialog system. Most existing spoken dialog systems are designed to work in a static, well-defined domain, and are not well suited to tasks in which the domain may change or be extended over time. This paper shows how recurrent neural networks can be effectively applied to tracking in an extended domain with new slots and values not present in training data. The method is evaluated in the third Dialog State Tracking Challenge, where it significantly outperforms other approaches in the task of tracking the user's goal. A method for online unsupervised adaptation to new domains is also presented. Unsupervised adaptation is shown to be helpful in improving word-based recurrent neural networks, which work directly from the speech recognition results. Word-based dialog state tracking is attractive as it does not require engineering a spoken language understanding system for use in the new domain and it avoids the need for a general purpose intermediate semantic representation.",
"title": ""
}
] |
[
{
"docid": "56525ce9536c3c8ea03ab6852b854e95",
"text": "The Distributed Denial of Service (DDoS) attacks are a serious threat in today's Internet where packets from large number of compromised hosts block the path to the victim nodes and overload the victim servers. In the newly proposed future Internet Architecture, Named Data Networking (NDN), the architecture itself has prevention measures to reduce the overload to the servers. This on the other hand increases the work and security threats to the intermediate routers. Our project aims at identifying the DDoS attack in NDN which is known as Interest flooding attack, mitigate the consequence of it and provide service to the legitimate users. We have developed a game model for the DDoS attacks and provide possible countermeasures to stop the flooding of interests. Through this game theory model, we either forward or redirect or drop the incoming interest packets thereby reducing the PIT table consumption. This helps in identifying the nodes that send malicious interest packets and eradicate their actions of sending malicious interests further. The main highlight of this work is that we have implemented the Game Theory model in the NDN architecture. It was primarily imposed for the IP internet architecture.",
"title": ""
},
{
"docid": "2545ce26d8727fc9d0e855ebff1a7171",
"text": "The quality and speed of most texture synthesis algorithms depend on a 2D input sample that is small and contains enough texture variations. However, little research exists on how to acquire such sample. For homogeneous patterns this can be achieved via manual cropping, but no adequate solution exists for inhomogeneous or globally varying textures, i.e. patterns that are local but not stationary, such as rusting over an iron statue with appearance conditioned on varying moisture levels.\n We present inverse texture synthesis to address this issue. Our inverse synthesis runs in the opposite direction with respect to traditional forward synthesis: given a large globally varying texture, our algorithm automatically produces a small texture compaction that best summarizes the original. This small compaction can be used to reconstruct the original texture or to re-synthesize new textures under user-supplied controls. More important, our technique allows real-time synthesis of globally varying textures on a GPU, where the texture memory is usually too small for large textures. We propose an optimization framework for inverse texture synthesis, ensuring that each input region is properly encoded in the output compaction. Our optimization process also automatically computes orientation fields for anisotropic textures containing both low- and high-frequency regions, a situation difficult to handle via existing techniques.",
"title": ""
},
{
"docid": "30059bf751594a9b913057cabb69ca00",
"text": "This paper proposes a new algorithm for automatic crack detection from 2D pavement images. It strongly relies on the localization of minimal paths within each image, a path being a series of neighboring pixels and its score being the sum of their intensities. The originality of the approach stems from the proposed way to select a set of minimal paths and the two postprocessing steps introduced to improve the quality of the detection. Such an approach is a natural way to take account of both the photometric and geometric characteristics of pavement images. An intensive validation is performed on both synthetic and real images (from five different acquisition systems), with comparisons to five existing methods. The proposed algorithm provides very robust and precise results in a wide range of situations, in a fully unsupervised manner, which is beyond the current state of the art.",
"title": ""
},
{
"docid": "0fa223f3e555cbea206640de7f699cf8",
"text": "Transforming unstructured text into structured form is important for fashion e-commerce platforms that ingest tens of thousands of fashion products every day. While most of the e-commerce product extraction research focuses on extracting a single product from the product title using known keywords, little attention has been paid to discovering potentially multiple products present in the listing along with their respective relevant attributes, and leveraging the entire title and description text for this purpose. We fill this gap and propose a novel composition of sequence labeling and multi-task learning as an end-to-end trainable deep neural architecture. We systematically evaluate our approach on one of the largest tagged datasets in fashion e-commerce consisting of 25K listings labeled at word-level. Given 23 labels, we discover label-values with F1 score of 92.2%. When applied to 2M listings, we discovered 2.6M fashion items and 9.5M attribute values.",
"title": ""
},
{
"docid": "999c7d8d16817d4b991e5b794be3b074",
"text": "Smile detection from facial images is a specialized task in facial expression analysis with many potential applications such as smiling payment, patient monitoring and photo selection. The current methods on this study are to represent face with low-level features, followed by a strong classifier. However, these manual features cannot well discover information implied in facial images for smile detection. In this paper, we propose to extract high-level features by a well-designed deep convolutional networks (CNN). A key contribution of this work is that we use both recognition and verification signals as supervision to learn expression features, which is helpful to reduce same-expression variations and enlarge different-expression differences. Our method is end-to-end, without complex pre-processing often used in traditional methods. High-level features are taken from the last hidden layer neuron activations of deep CNN, and fed into a soft-max classifier to estimate. Experimental results show that our proposed method is very effective, which outperforms the state-of-the-art methods. On the GENKI smile detection dataset, our method reduces the error rate by 21% compared with the previous best method.",
"title": ""
},
{
"docid": "51b201422fdf2a9666070abadc6849cf",
"text": "Losing a parent prior to age 18 years can have life-long implications. The challenges of emerging adulthood may be even more difficult for parentally bereaved college students, and studying their coping responses is crucial for designing campus services and therapy interventions. This study examined the relationships between bereavement-related distress, experiential avoidance (EA), values, and resilience. Findings indicated that EA and low importance of values were correlated with bereavement difficulties, with EA accounting for 26% of the variance in the bereavement distress measure. In addition, reports of behaving consistently with values accounted for 20% of the variance in the resiliency measure. Contrary to hypotheses and previous literature, there were no significant relationships between the measures of EA and values. The results, limitations, and directions for future research are discussed.",
"title": ""
},
{
"docid": "1aa3d2456e34c8ab59a340fd32825703",
"text": "It is well known that guided soft tissue healing with a provisional restoration is essential to obtain optimal anterior esthetics in the implant prosthesis. What is not well known is how to transfer a record of beautiful anatomically healed tissue to the laboratory. With the advent of emergence profile healing abutments and corresponding impression copings, there has been a dramatic improvement over the original 4.0-mm diameter design. This is a great improvement, however, it still does not accurately transfer a record of anatomically healed tissue, which is often triangularly shaped, to the laboratory, because the impression coping is a round cylinder. This article explains how to fabricate a \"custom impression coping\" that is an exact record of anatomically healed tissue for accurate duplication. This technique is significant because it allows an even closer replication of the natural dentition.",
"title": ""
},
{
"docid": "f60f75d03c06842efcb2454536ec8226",
"text": "The Internet of Things (IoT) relies on physical objects interconnected between each others, creating a mesh of devices producing information. In this context, sensors are surrounding our environment (e.g., cars, buildings, smartphones) and continuously collect data about our living environment. Thus, the IoT is a prototypical example of Big Data. The contribution of this paper is to define a software architecture supporting the collection of sensor-based data in the context of the IoT. The architecture goes from the physical dimension of sensors to the storage of data in a cloud-based system. It supports Big Data research effort as its instantiation supports a user while collecting data from the IoT for experimental or production purposes. The results are instantiated and validated on a project named SMARTCAMPUS, which aims to equip the SophiaTech campus with sensors to build innovative applications that supports end-users.",
"title": ""
},
{
"docid": "8c4e02333f466c074ad332d904f655b9",
"text": "Context. The global communication system is in a tremendous growth, leading to wide range of data generation. The Telecom operators in various Telecom Industries, that generate large amount of data has a need to manage these data efficiently. As the technology involved in the database management systems is increasing, there is a remarkable growth of NoSQL databases in the 20 century. Apache Cassandra is an advanced NoSQL database system, which is popular for handling semi-structured and unstructured format of Big Data. Cassandra has an effective way of compressing data by using different compaction strategies. This research is focused on analyzing the performances of different compaction strategies in different use cases for default Cassandra stress model. The analysis can suggest better usage of compaction strategies in Cassandra, for a write heavy workload. Objectives. In this study, we investigate the appropriate performance metrics to evaluate the performance of compaction strategies. We provide the detailed analysis of Size Tiered Compaction Strategy, Date Tiered Compaction Strategy, and Leveled Compaction Strategy for a write heavy (90/10) work load, using default cassandra stress tool. Methods. A detailed literature research has been conducted to study the NoSQL databases, and the working of different compaction strategies in Apache Cassandra. The performances metrics are considered by the understanding of the literature research conducted, and considering the opinions of supervisors and Ericsson’s Apache Cassandra team. Two different tools were developed for collecting the performances of the considered metrics. The first tool was developed using Jython scripting language to collect the cassandra metrics, and the second tool was developed using python scripting language to collect the Operating System metrics. The graphs have been generated in Microsoft Excel, using the values obtained from the scripts. Results. Date Tiered Compaction Strategy and Size Tiered Compaction strategy showed more or less similar behaviour during the stress tests conducted. Level Tiered Compaction strategy has showed some remarkable results that effected the system performance, as compared to date tiered compaction and size tiered compaction strategies. Date tiered compaction strategy does not perform well for default cassandra stress model. Size tiered compaction can be preferred for default cassandra stress model, but not considerable for big data. Conclusions. With a detailed analysis and logical comparison of metrics, we finally conclude that Level Tiered Compaction Strategy performs better for a write heavy (90/10) workload while using default cassandra stress model, as compared to size tiered compaction and date tiered compaction strategies.",
"title": ""
},
{
"docid": "12b8dac3e97181eb8ca9c0406f2fa456",
"text": "INTRODUCTION\nThis paper discusses some of the issues and challenges of implementing appropriate and coordinated District Health Management Information System (DHMIS) in environments dependent on external support especially when insufficient attention has been given to the sustainability of systems. It also discusses fundamental issues which affect the usability of DHMIS to support District Health System (DHS), including meeting user needs and user education in the use of information for management; and the need for integration of data from all health-providing and related organizations in the district.\n\n\nMETHODS\nThis descriptive cross-sectional study was carried out in three DHSs in Kenya. Data was collected through use of questionnaires, focus group discussions and review of relevant literature, reports and operational manuals of the studied DHMISs.\n\n\nRESULTS\nKey personnel at the DHS level were not involved in the development and implementation of the established systems. The DHMISs were fragmented to the extent that their information products were bypassing the very levels they were created to serve. None of the DHMISs was computerized. Key resources for DHMIS operation were inadequate. The adequacy of personnel was 47%, working space 40%, storage space 34%, stationery 20%, 73% of DHMIS staff were not trained, management support was 13%. Information produced was 30% accurate, 19% complete, 26% timely, 72% relevant; the level of confidentiality and use of information at the point of collection stood at 32% and 22% respectively and information security at 48%. Basic DHMIS equipment for information processing was not available. This inhibited effective and efficient provision of information services.\n\n\nCONCLUSIONS\nAn effective DHMIS is essential for DHS planning, implementation, monitoring and evaluation activities. Without accurate, timely, relevant and complete information the existing information systems are not capable of facilitating the DHS managers in their day-today operational management. The existing DHMISs were found not supportive of the DHS managers' strategic and operational management functions. Consequently DHMISs were found to be plagued by numerous designs, operational, resources and managerial problems. There is an urgent need to explore the possibilities of computerizing the existing manual systems to take advantage of the potential uses of microcomputers for DHMIS operations within the DHS. Information system designers must also address issues of cooperative partnership in information activities, systems compatibility and sustainability.",
"title": ""
},
{
"docid": "37af8daa32affcdedb0b4820651a0b62",
"text": "Bag of words (BoW) model, which was originally used for document processing field, has been introduced to computer vision field recently and used in object recognition successfully. However, in face recognition, the order less collection of local patches in BoW model cannot provide strong distinctive information since the objects (face images) belong to the same category. A new framework for extracting facial features based on BoW model is proposed in this paper, which can maintain holistic spatial information. Experimental results show that the improved method can obtain better face recognition performance on face images of AR database with extreme expressions, variant illuminations, and partial occlusions.",
"title": ""
},
{
"docid": "20c57c17bd2db03d017b0f3fa8e2eb23",
"text": "Recent research shows that the i-vector framework for speaker recognition can significantly benefit from phonetic information. A common approach is to use a deep neural network (DNN) trained for automatic speech recognition to generate a universal background model (UBM). Studies in this area have been done in relatively clean conditions. However, strong background noise is known to severely reduce speaker recognition performance. This study investigates a phonetically-aware i-vector system in noisy conditions. We propose a front-end to tackle the noise problem by performing speech separation and examine its performance for both verification and identification tasks. The proposed separation system trains a DNN to estimate the ideal ratio mask of the noisy speech. The separated speech is then used to extract enhanced features for the i-vector framework. We compare the proposed system against a multi-condition trained baseline and a traditional GMM-UBM i-vector system. Our proposed system provides an absolute average improvement of 8% in identification accuracy and 1.2% in equal error rate.",
"title": ""
},
{
"docid": "53595cdb8e7a9e8ee2debf4e0dda6d45",
"text": "Botnets have become one of the major attacks in the internet today due to their illicit profitable financial gain. Meanwhile, honeypots have been successfully deployed in many computer security defence systems. Since honeypots set up by security defenders can attract botnet compromises and become spies in exposing botnet membership and botnet attacker behaviours, they are widely used by security defenders in botnet defence. Therefore, attackers constructing and maintaining botnets will be forced to find ways to avoid honeypot traps. In this paper, we present a hardware and software independent honeypot detection methodology based on the following assumption: security professionals deploying honeypots have a liability constraint such that they cannot allow their honeypots to participate in real attacks that could cause damage to others, while attackers do not need to follow this constraint. Attackers could detect honeypots in their botnets by checking whether compromised machines in a botnet can successfully send out unmodified malicious traffic. Based on this basic detection principle, we present honeypot detection techniques to be used in both centralised botnets and Peer-to-Peer (P2P) structured botnets. Experiments show that current standard honeypots and honeynet programs are vulnerable to the proposed honeypot detection techniques. At the end, we discuss some guidelines for defending against general honeypot-aware attacks.",
"title": ""
},
{
"docid": "e81a1fd47bd1ec7f4ffbd646f9873836",
"text": "Due to the increasing complexity of the processor architecture and the time-consuming software simulation, efficient design space exploration (DSE) has become a critical challenge in processor design. To address this challenge, recently machine learning techniques have been widely explored for predicting the performance of various configurations through conducting only a small number of simulations as the training samples. However, most existing methods randomly select some samples for simulation from the entire configuration space as training samples to build program-specific predictors. When a new program is considered, a large number of new program-specific simulations are needed for building a new predictor. Thus considerable simulation cost is required for each program. In this paper, we propose an efficient cross-program DSE framework TrEE by combining a flexible statistical sampling strategy and ensemble transfer learning technique. Specifically, TrEE includes the following two phases which also form our major contributions: 1) proposing an orthogonal array based foldover design for flexibly sampling the representative configurations for simulation, and 2) proposing an ensemble transfer learning algorithm that can effectively transfer knowledge among different types of programs for improving the prediction performance for the new program. We evaluate the proposed TrEE on the benchmarks of SPEC CPU 2006 suite. The results demonstrate that TrEE is much more efficient and robust than state-of-art DSE techniques.",
"title": ""
},
{
"docid": "205ed1eba187918ac6b4a98da863a6f2",
"text": "Since the first papers on asymptotic waveform evaluation (AWE), Pade-based reduced order models have become standard for improving coupled circuit-interconnect simulation efficiency. Such models can be accurately computed using bi-orthogonalization algorithms like Pade via Lanczos (PVL), but the resulting Pade approximates can still be unstable even when generated from stable RLC circuits. For certain classes of RC circuits it has been shown that congruence transforms, like the Arnoldi algorithm, can generate guaranteed stable and passive reduced-order models. In this paper we present a computationally efficient model-order reduction technique, the coordinate-transformed Arnoldi algorithm, and show that this method generates arbitrarily accurate and guaranteed stable reduced-order models for RLC circuits. Examples are presented which demonstrates the enhanced stability and efficiency of the new method.",
"title": ""
},
{
"docid": "49568236b0e221053c32b73b896d3dde",
"text": "The continuous growth in the size and use of the Internet is creating difficulties in the search for information. A sophisticated method to organize the layout of the information and assist user navigation is therefore particularly important. In this paper, we evaluate the feasibility of using a self-organizing map (SOM) to mine web log data and provide a visual tool to assist user navigation. We have developed LOGSOM, a system that utilizes Kohonen’s self-organizing map to organize web pages into a two-dimensional map. The organization of the web pages is based solely on the users’ navigation behavior, rather than the content of the web pages. The resulting map not only provides a meaningful navigation tool (for web users) that is easily incorporated with web browsers, but also serves as a visual analysis tool for webmasters to better understand the characteristics and navigation behaviors of web users visiting their pages. D 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "11ae42bedc18dedd0c29004000a4ec00",
"text": "A hand injury can have great impact on a person's daily life. However, the current manual evaluations of hand functions are imprecise and inconvenient. In this research, a data glove embedded with 6-axis inertial sensors is proposed. With the proposed angle calculating algorithm, accurate bending angles are measured to estimate the real-time movements of hands. This proposed system can provide physicians with an efficient tool to evaluate the recovery of patients and improve the quality of hand rehabilitation.",
"title": ""
},
{
"docid": "a903f9eb225a79ebe963d1905af6d3c8",
"text": "We have developed a multithreaded implementation of breadth-first search (BFS) of a sparse graph using the Cilk++ extensions to C++. Our PBFS program on a single processor runs as quickly as a standar. C++ breadth-first search implementation. PBFS achieves high work-efficiency by using a novel implementation of a multiset data structure, called a \"bag,\" in place of the FIFO queue usually employed in serial breadth-first search algorithms. For a variety of benchmark input graphs whose diameters are significantly smaller than the number of vertices -- a condition met by many real-world graphs -- PBFS demonstrates good speedup with the number of processing cores.\n Since PBFS employs a nonconstant-time \"reducer\" -- \"hyperobject\" feature of Cilk++ -- the work inherent in a PBFS execution depends nondeterministically on how the underlying work-stealing scheduler load-balances the computation. We provide a general method for analyzing nondeterministic programs that use reducers. PBFS also is nondeterministic in that it contains benign races which affect its performance but not its correctness. Fixing these races with mutual-exclusion locks slows down PBFS empirically, but it makes the algorithm amenable to analysis. In particular, we show that for a graph G=(V,E) with diameter D and bounded out-degree, this data-race-free version of PBFS algorithm runs it time O((V+E)/P + Dlg3(V/D)) on P processors, which means that it attains near-perfect linear speedup if P << (V+E)/Dlg3(V/D).",
"title": ""
},
{
"docid": "5f8a8117ff153528518713d66c876228",
"text": "Certain human talents, such as musical ability, have been associated with left-right differences in brain structure and function. In vivo magnetic resonance morphometry of the brain in musicians was used to measure the anatomical asymmetry of the planum temporale, a brain area containing auditory association cortex and previously shown to be a marker of structural and functional asymmetry. Musicians with perfect pitch revealed stronger leftward planum temporale asymmetry than nonmusicians or musicians without perfect pitch. The results indicate that outstanding musical ability is associated with increased leftward asymmetry of cortex subserving music-related functions.",
"title": ""
},
{
"docid": "8f876cfb665a4a6a0fc08c8d28584a14",
"text": "Personalisation is an important area in the field of IR that attempts to adapt ranking algorithms so that the results returned are tuned towards the searcher's interests. In this work we use query logs to build personalised ranking models in which user profiles are constructed based on the representation of clicked documents over a topic space. Instead of employing a human-generated ontology, we use novel latent topic models to determine these topics. Our experiments show that by subtly introducing user profiles as part of the ranking algorithm, rather than by re-ranking an existing list, we can provide personalised ranked lists of documents which improve significantly over a non-personalised baseline. Further examination shows that the performance of the personalised system is particularly good in cases where prior knowledge of the search query is limited.",
"title": ""
}
] |
scidocsrr
|
6cffefb378f6439dba7c1228059ef497
|
A Comparison of Sequence-to-Sequence Models for Speech Recognition
|
[
{
"docid": "afee419227629f8044b5eb0addd65ce3",
"text": "Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) have shown improvements over Deep Neural Networks (DNNs) across a wide variety of speech recognition tasks. CNNs, LSTMs and DNNs are complementary in their modeling capabilities, as CNNs are good at reducing frequency variations, LSTMs are good at temporal modeling, and DNNs are appropriate for mapping features to a more separable space. In this paper, we take advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture. We explore the proposed architecture, which we call CLDNN, on a variety of large vocabulary tasks, varying from 200 to 2,000 hours. We find that the CLDNN provides a 4-6% relative improvement in WER over an LSTM, the strongest of the three individual models.",
"title": ""
},
{
"docid": "e77c7b9c486f895167c54b6724e9e3c8",
"text": "Many machine learning tasks can be expressed as the transformation—or transduction—of input sequences into output sequences: speech recognition, machine translation, protein secondary structure prediction and text-to-speech to name but a few. One of the key challenges in sequence transduction is learning to represent both the input and output sequences in a way that is invariant to sequential distortions such as shrinking, stretching and translating. Recurrent neural networks (RNNs) are a powerful sequence learning architecture that has proven capable of learning such representations. However RNNs traditionally require a pre-defined alignment between the input and output sequences to perform transduction. This is a severe limitation since finding the alignment is the most difficult aspect of many sequence transduction problems. Indeed, even determining the length of the output sequence is often challenging. This paper introduces an end-to-end, probabilistic sequence transduction system, based entirely on RNNs, that returns a distribution over output sequences of all possible lengths and alignments for any input sequence. Experimental results are provided on the TIMIT speech corpus.",
"title": ""
},
{
"docid": "e73060d189e9a4f4fd7b93e1cab22955",
"text": "We have recently shown that deep Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform feed forward deep neural networks (DNNs) as acoustic models for speech recognition. More recently, we have shown that the performance of sequence trained context dependent (CD) hidden Markov model (HMM) acoustic models using such LSTM RNNs can be equaled by sequence trained phone models initialized with connectionist temporal classification (CTC). In this paper, we present techniques that further improve performance of LSTM RNN acoustic models for large vocabulary speech recognition. We show that frame stacking and reduced frame rate lead to more accurate models and faster decoding. CD phone modeling leads to further improvements. We also present initial results for LSTM RNN models outputting words directly.",
"title": ""
}
] |
[
{
"docid": "a5e23ca50545378ef32ed866b97fd418",
"text": "In the framework of computer assisted diagnosis of diabetic retinopathy, a new algorithm for detection of exudates is presented and discussed. The presence of exudates within the macular region is a main hallmark of diabetic macular edema and allows its detection with a high sensitivity. Hence, detection of exudates is an important diagnostic task, in which computer assistance may play a major role. Exudates are found using their high grey level variation, and their contours are determined by means of morphological reconstruction techniques. The detection of the optic disc is indispensable for this approach. We detect the optic disc by means of morphological filtering techniques and the watershed transformation. The algorithm has been tested on a small image data base and compared with the performance of a human grader. As a result, we obtain a mean sensitivity of 92.8% and a mean predictive value of 92.4%. Robustness with respect to changes of the parameters of the algorithm has been evaluated.",
"title": ""
},
{
"docid": "477e5be6b2727a5d6f0a976c4c64c960",
"text": "Glaucoma is the second leading cause of blindness all over the world, with approximately 60 million cases reported worldwide in 2010. If undiagnosed in time, glaucoma causes irreversible damage to the optic nerve leading to blindness. The optic nerve head examination, which involves measurement of cup-todisc ratio, is considered one of the most valuable methods of structural diagnosis of the disease. Estimation of cup-to-disc ratio requires segmentation of optic disc and optic cup on eye fundus images and can be performed by modern computer vision algorithms. This work presents universal approach for automatic optic disc and cup segmentation, which is based on deep learning, namely, modification of U-Net convolutional neural network. Our experiments include comparison with the best known methods on publicly available databases DRIONS-DB, RIM-ONE v.3, DRISHTI-GS. For both optic disc and cup segmentation, our method achieves quality comparable to current state-of-the-art methods, outperforming them in terms of the prediction time.",
"title": ""
},
{
"docid": "f35a1201362e22bae2ff377da9f2c122",
"text": "We examined the impact of repeated testing and repeated studying on long-term learning. In Experiment 1, we replicated Karpicke and Roediger's (2008) influential results showing that once information can be recalled, repeated testing on that information enhances learning, whereas restudying that information does not. We then examined whether the apparent ineffectiveness of restudying might be attributable to the spacing differences between items that were inherent in the between-subjects design employed by Karpicke and Roediger. When we controlled for these spacing differences by manipulating the various learning conditions within subjects in Experiment 2, we found that both repeated testing and restudying improved learning, and that learners' awareness of the relative mnemonic benefits of these strategies was enhanced. These findings contribute to understanding how two important factors in learning-test-induced retrieval processes and spacing-can interact, and they illustrate that such interactions can play out differently in between-subjects and within-subjects experimental designs.",
"title": ""
},
{
"docid": "07f7a4fe69f6c4a1180cc3ca444a363a",
"text": "With the popularization of IoT (Internet of Things) devices and the continuous development of machine learning algorithms, learning-based IoT malicious traffic detection technologies have gradually matured. However, learning-based IoT traffic detection models are usually very vulnerable to adversarial samples. There is a great need for an automated testing framework to help security analysts to detect errors in learning-based IoT traffic detection systems. At present, most methods for generating adversarial samples require training parameters of known models and are only applicable to image data. To address the challenge, we propose a testing framework for learning-based IoT traffic detection systems, TLTD. By introducing genetic algorithms and some technical improvements, TLTD can generate adversarial samples for IoT traffic detection systems and can perform a black-box test on the systems.",
"title": ""
},
{
"docid": "12819e1ad6ca9b546e39ed286fe54d23",
"text": "This paper describes an efficient method to make individual faces for animation from several possible inputs. We present a method to reconstruct 3D facial model for animation from two orthogonal pictures taken from front and side views or from range data obtained from any available resources. It is based on extracting features on a face in a semiautomatic way and modifying a generic model with detected feature points. Then the fine modifications follow if range data is available. Automatic texture mapping is employed using a composed image from the two images. The reconstructed 3Dface can be animated immediately with given expression parameters. Several faces by one methodology applied to different input data to get a final animatable face are illustrated.",
"title": ""
},
{
"docid": "58d64f5c8c9d953b3c2df0a029eab864",
"text": "We adapt the greedy Stack-LSTM dependency parser of Dyer et al. (2015) to support a training-with-exploration procedure using dynamic oracles (Goldberg and Nivre, 2013) instead of cross-entropy minimization. This form of training, which accounts for model predictions at training time rather than assuming an error-free action history, improves parsing accuracies for both English and Chinese, obtaining very strong results for both languages. We discuss some modifications needed in order to get training with exploration to work well for a probabilistic neural-network dependency parser.",
"title": ""
},
{
"docid": "9152c55c35305bcaf56bc586e87f1575",
"text": "Information practices that use personal, financial, and health-related information are governed by US laws and regulations to prevent unauthorized use and disclosure. To ensure compliance under the law, the security and privacy requirements of relevant software systems must properly be aligned with these regulations. However, these regulations describe stakeholder rules, called rights and obligations, in complex and sometimes ambiguous legal language. These \"rules\" are often precursors to software requirements that must undergo considerable refinement and analysis before they become implementable. To support the software engineering effort to derive security requirements from regulations, we present a methodology for directly extracting access rights and obligations from regulation texts. The methodology provides statement-level coverage for an entire regulatory document to consistently identify and infer six types of data access constraints, handle complex cross references, resolve ambiguities, and assign required priorities between access rights and obligations to avoid unlawful information disclosures. We present results from applying this methodology to the entire regulation text of the US Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule.",
"title": ""
},
{
"docid": "ca5f251364ddf21e4cecf25cda5b575d",
"text": "This paper discusses \"bioink\", bioprintable materials used in three dimensional (3D) bioprinting processes, where cells and other biologics are deposited in a spatially controlled pattern to fabricate living tissues and organs. It presents the first comprehensive review of existing bioink types including hydrogels, cell aggregates, microcarriers and decellularized matrix components used in extrusion-, droplet- and laser-based bioprinting processes. A detailed comparison of these bioink materials is conducted in terms of supporting bioprinting modalities and bioprintability, cell viability and proliferation, biomimicry, resolution, affordability, scalability, practicality, mechanical and structural integrity, bioprinting and post-bioprinting maturation times, tissue fusion and formation post-implantation, degradation characteristics, commercial availability, immune-compatibility, and application areas. The paper then discusses current limitations of bioink materials and presents the future prospects to the reader.",
"title": ""
},
{
"docid": "dd45f296e623857262bd65e5d3843f33",
"text": "In their original versions, nature-inspired search algorithms such as evolutionary algorithms and those based on swarm intelligence, lack a mechanism to deal with the constraints of a numerical optimization problem. Nowadays, however, there exists a considerable amount of research devoted to design techniques for handling constraints within a nature-inspired algorithm. This paper presents an analysis of the most relevant types of constraint-handling techniques that have been adopted with nature-inspired algorithms. From them, the most popular approaches are analyzed in more detail. For each of them, some representative instantiations are further discussed. In the last part of the paper, some of the future trends in the area, which have been only scarcely explored, are briefly discussed and then the conclusions of this paper are presented. © 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5e4660c0f9e5144a496de13b0f7c35b3",
"text": "Deep learning techniques have achieved success in aspect-based sentiment analysis in recent years. However, there are two important issues that still remain to be further studied, i.e., 1) how to efficiently represent the target especially when the target contains multiple words; 2) how to utilize the interaction between target and left/right contexts to capture the most important words in them. In this paper, we propose an approach, called left-centerright separated neural network with rotatory attention (LCR-Rot), to better address the two problems. Our approach has two characteristics: 1) it has three separated LSTMs, i.e., left, center and right LSTMs, corresponding to three parts of a review (left context, target phrase and right context); 2) it has a rotatory attention mechanism which models the relation between target and left/right contexts. The target2context attention is used to capture the most indicative sentiment words in left/right contexts. Subsequently, the context2target attention is used to capture the most important word in the target. This leads to a two-side representation of the target: left-aware target and right-aware target. We compare our approach on three benchmark datasets with ten related methods proposed recently. The results show that our approach significantly outperforms the state-of-the-art techniques.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "9f04f8b2adc1c3afe23f8c2202528734",
"text": "Fluorodeoxyglucose positron emission tomography (FDG-PET) imaging based 3D topographic brain glucose metabolism patterns from normal controls (NC) and individuals with dementia of Alzheimer's type (DAT) are used to train a novel multi-scale ensemble classification model. This ensemble model outputs a FDG-PET DAT score (FPDS) between 0 and 1 denoting the probability of a subject to be clinically diagnosed with DAT based on their metabolism profile. A novel 7 group image stratification scheme is devised that groups images not only based on their associated clinical diagnosis but also on past and future trajectories of the clinical diagnoses, yielding a more continuous representation of the different stages of DAT spectrum that mimics a real-world clinical setting. The potential for using FPDS as a DAT biomarker was validated on a large number of FDG-PET images (N=2984) obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database taken across the proposed stratification, and a good classification AUC (area under the curve) of 0.78 was achieved in distinguishing between images belonging to subjects on a DAT trajectory and those images taken from subjects not progressing to a DAT diagnosis. Further, the FPDS biomarker achieved state-of-the-art performance on the mild cognitive impairment (MCI) to DAT conversion prediction task with an AUC of 0.81, 0.80, 0.77 for the 2, 3, 5 years to conversion windows respectively.",
"title": ""
},
{
"docid": "228a777c356591c4d1944e645c04a106",
"text": "Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, there is a lack of practical solutions for more complex deformations such as affine transformations because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework where dense affine transformation fields are inferred through a discrete label optimization in which the labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms the state-of-the-art methods for dense semantic correspondence on various benchmarks.",
"title": ""
},
{
"docid": "bda2c57a02275e0533f83da1ad46b573",
"text": "In this thesis, we propose a new, scalable probabilistic logic called ProPPR to combine the best of the symbolic and statistical worlds. ProPPR has the rich semantic representation of Prolog, but we associate a feature vector to each clause, such that each clause has a weight vector that can be learned from the training data. Instead of searching over the entire graph for solutions, ProPPR uses a provably-correct approximate personalized PageRank to construct a subgraph for local grounding: the inference time is now independent of the size of the KB. We show that ProPPR can be viewed as a recursive extension to the path ranking algorithm (PRA), and outperforms PRA in the inference task with one million facts from NELL.",
"title": ""
},
{
"docid": "08d59866cf8496573707d46a6cb520d4",
"text": "Healthcare is an integral component in people's lives, especially for the rising elderly population. Medicare is one such healthcare program that provides for the needs of the elderly. It is imperative that these healthcare programs are affordable, but this is not always the case. Out of the many possible factors for the rising cost of healthcare, claims fraud is a major contributor, but its impact can be lessened through effective fraud detection. We propose a general outlier detection model, based on Bayesian inference, using probabilistic programming. Our model provides probability distributions rather than just point values, as with most common outlier detection methods. Credible intervals are also generated to further enhance confidence that the detected outliers should in fact be considered outliers. Two case studies are presented demonstrating our model's effectiveness in detecting outliers. The first case study uses temperature data in order to provide a clear comparison of several outlier detection techniques. The second case study uses a Medicare dataset to showcase our proposed outlier detection model. Our results show that the successful detection of outliers, which indicate possible fraudulent activities, can provide effective and meaningful results for further investigation within medical specialties or by using real-world, medical provider fraud investigation cases.",
"title": ""
},
{
"docid": "14b06c786127363d5bdaee4602b15a42",
"text": "Instant messaging applications continue to grow in popularity as a means of communicating and sharing multimedia files. The information contained within these applications can prove invaluable to law enforcement in the investigation of crimes. Kik messenger is a recently introduced instant messaging application that has become very popular in a short period of time, especially among young users. The novelty of Kik means that there has been little forensic examination conducted on this application. This study addresses this issue by investigating Kik messenger on Apple iOS devices. The goal was to locate and document artefacts created or modified by Kik messenger on devices installed with the latest version of iOS, as well as in iTunes backup files. Once achieved, the secondary goal was to analyse the artefacts to decode and interpret their meaning and by doing so, be able to answer the typical questions faced by forensic investigators. A detailed description of artefacts created or modified by Kik messenger is provided. Results from experiments showed that deleted images are not only recoverable from the device, but can also be located and downloaded from Kik servers. A process to link data from multiple database tables producing accurate chat histories is explained. These outcomes can be used by law enforcement to investigate crimes and by software developers to create tools to recover evidence. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d6959f0cd5ad7a534e99e3df5fa86135",
"text": "In the course of the project Virtual Try-On new VR technologies have been developed, which form the basis for a realistic, three dimensional, (real-time) simulation and visualization of individualized garments put on by virtual counterparts of real customers. To provide this cloning and dressing of people in VR, a complete process chain is being build up starting with the touchless 3-dimensional scanning of the human body up to a photo-realistic 3-dimensional presentation of the virtual customer dressed in the chosen pieces of clothing. The emerging platform for interactive selection and configuration of virtual garments, the „virtual shop“, will be accessible in real fashion boutiques as well as over the internet, thereby supplementing the conventional distribution channels.",
"title": ""
},
{
"docid": "56a8e1384f363adbf116bbb09b01f6f6",
"text": "IMPORTANCE\nMany valuable classification schemes for saddle nose have been suggested that integrate clinical deformity and treatment; however, there is no consensus regarding the most suitable classification and surgical method for saddle nose correction.\n\n\nOBJECTIVES\nTo present clinical characteristics and treatment outcome of saddle nose deformity and to propose a modified classification system to better characterize the variety of different saddle nose deformities.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nThe retrospective study included 91 patients who underwent rhinoplasty for correction of saddle nose from April 1, 2003, through December 31, 2011, with a minimum follow-up of 8 months. Saddle nose was classified into 4 types according to a modified classification.\n\n\nMAIN OUTCOME AND MEASURE\nAesthetic outcomes were classified as excellent, good, fair, or poor.\n\n\nRESULTS\nPatients underwent minor cosmetic concealment by dorsal augmentation (n = 8) or major septal reconstruction combined with dorsal augmentation (n = 83). Autologous costal cartilages were used in 40 patients (44%), and homologous costal cartilages were used in 5 patients (6%). According to postoperative assessment, 29 patients had excellent, 42 patients had good, 18 patients had fair, and 2 patients had poor aesthetic outcomes. No statistical difference in surgical outcome according to saddle nose classification was observed. Eight patients underwent revision rhinoplasty, owing to recurrence of saddle, wound infection, or warping of the costal cartilage for dorsal augmentation.\n\n\nCONCLUSIONS\nWe introduce a modified saddle nose classification scheme that is simpler and better able to characterize different deformities. Among 91 patients with saddle nose, 20 (22%) had unsuccessful outcomes (fair or poor) and 8 (9%) underwent subsequent revision rhinoplasty. Thus, management of saddle nose deformities remains challenging.\n\n\nLEVEL OF EVIDENCE\n4.",
"title": ""
},
{
"docid": "0baf2c97da07f954a76b81f840ccca9e",
"text": "3 Chapter 1 Introduction 1.1 Background: Identification is an action of recognizing or being recognized, in particular, identification of a thing or person from previous exposures or information. Identification these days is quite necessary as for security purposes. It can be done using biometric parameters such as finger prints, I.D scan, face recognition etc. Most probably the first well known example of a facial recognition system is because of Kohonen, who signified that an uncomplicated neural network could execute face recognition for aligned and normalized face images. The sort of network he recruited was by computing a face illustration by estimating the eigenvectors of the face image's autocorrelation pattern; these eigenvectors are currently called as`Eigen faces. But Kohonen's approach was not a real time triumph due to the need for accurate alignment and normalization. In successive years a great number of researchers attempted facial recognition systems based on edges, inter-feature spaces, and various neural network techniques. While many were victorious using small scale databases of aligned samples, but no one significantly directed the alternative practical problem of vast databases where the position and scale of the face was not known. An image is supposed to be outcome of two real variables, defined in the \" real world \" , for example, a(x, y) where 'a' is the amplitude in terms of brightness of the image at the real coordinate position (x, y). It is now practicable to operate multi-dimensional signals with systems that vary from simple digital circuits to complicated circuits, due to modern technology. Image Analysis (input image->computation out) Image Understanding (input image-> high-level interpretation out) 4 In this age of science and technology, images also attain wider opportunity due to the rapidly increasing significance of scientific visualization, for example microarray data in genetic research. To process the image firstly it is transformed into a digital form. Digitization comprises of sampling of image and quantization of sampled values. After transformed into a digital form, processing is performed. It introduces focal attention on image, or improvement of image features such as boundaries, or variation that make a graphic display more effective for representation & study. This technique does not enlarge the intrinsic information content in data. This technique is used to remove the unwanted observed image to reduce the effect of mortifications. Scope and precision of the knowledge of mortifications process and filter design are the basis of …",
"title": ""
},
{
"docid": "49cafb7a5a42b7a8f8260a398c390504",
"text": "With the availability of vast collection of research articles on internet, textual analysis is an increasingly important technique in scientometric analysis. While the context in which it is used and the specific algorithms implemented may vary, typically any textual analysis exercise involves intensive pre-processing of input text which includes removing topically uninteresting terms (stop words). In this paper we argue that corpus specific stop words, which take into account the specificities of a collection of texts, improve textual analysis in scientometrics. We describe two relatively simple techniques to generate corpus-specific stop words; stop words lists following a Poisson distribution and keyword adjacency stop words lists. In a case study to extract keywords from scientific abstracts of research project funded by the European Research Council in the domain of Life sciences, we show that a combination of those techniques gives better recall values than standard stop words or any of the two techniques alone. The method we propose can be implemented to obtain stop words lists in an automatic way by using author provided keywords for a set of abstracts. The stop words lists generated can be updated easily by adding new texts to the training corpus. Conference Topic Methods and techniques",
"title": ""
}
] |
scidocsrr
|
2e9f4e0a2346ce8a3dfdd07b2eee5cc2
|
EncFS goes multi-user: Adding access control to an encrypted file system
|
[
{
"docid": "21d84bd9ea7896892a3e69a707b03a6a",
"text": "Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source.",
"title": ""
}
] |
[
{
"docid": "9cb7f19b08aefb98a412a2737b73707a",
"text": "Usually device compact models do not include breakdown mechanisms which are fundamental for ESD protection devices. This work proposes a novel spice-compatible modeling of breakdown phenomena for ESD diodes. The developed physics based approach includes minority carriers propagation and can be embedded in the simulation of parasitic substrate noise of power devices. The model implemented in VerilogA has been validated with device simulations for a simple structure at different temperatures showing good agreement and robust convergence.",
"title": ""
},
{
"docid": "da6a74341c8b12658aea2a267b7a0389",
"text": "An experiment demonstrated that false incriminating evidence can lead people to accept guilt for a crime they did not commit. Subjects in a fastor slow-paced reaction time task were accused of damaging a computer by pressing the wrong key. All were truly innocent and initially denied the charge. A confederate then said she saw the subject hit the key or did not see the subject hit the key. Compared with subjects in the slowpacelno-witness group, those in the fast-pace/witness group were more likely to sign a confession, internalize guilt for the event, and confabulate details in memory consistent with that belief Both legal and conceptual implications are discussed. In criminal law, confession evidence is a potent weapon for the prosecution and a recurring source of controversy. Whether a suspect's self-incriminating statement was voluntary or coerced and whether a suspect was of sound mind are just two of the issues that trial judges and juries consider on a routine basis. To guard citizens against violations of due process and to minimize the risk that the innocent would confess to crimes they did not commit, the courts have erected guidelines for the admissibility of confession evidence. Although there is no simple litmus test, confessions are typically excluded from triai if elicited by physical violence, a threat of harm or punishment, or a promise of immunity or leniency, or without the suspect being notified of his or her Miranda rights. To understand the psychology of criminal confessions, three questions need to be addressed: First, how do police interrogators elicit self-incriminating statements (i.e., what means of social influence do they use)? Second, what effects do these methods have (i.e., do innocent suspects ever confess to crimes they did not commit)? Third, when a coerced confession is retracted and later presented at trial, do juries sufficiently discount the evidence in accordance with the law? General reviews of relevant case law and research are available elsewhere (Gudjonsson, 1992; Wrightsman & Kassin, 1993). The present research addresses the first two questions. Informed by developments in case law, the police use various methods of interrogation—including the presentation of false evidence (e.g., fake polygraph, fingerprints, or other forensic test results; staged eyewitness identifications), appeals to God and religion, feigned friendship, and the use of prison informants. A number of manuals are available to advise detectives on how to extract confessions from reluctant crime suspects (Aubry & Caputo, 1965; O'Hara & O'Hara, 1981). The most popular manual is Inbau, Reid, and Buckley's (1986) Criminal Interrogation and Confessions, originally published in 1%2, and now in its third edition. Address correspondence to Saul Kassin, Department of Psychology, Williams College, WllUamstown, MA 01267. After advising interrogators to set aside a bare, soundproof room absent of social support and distraction, Inbau et al, (1986) describe in detail a nine-step procedure consisting of various specific ploys. In general, two types of approaches can be distinguished. One is minimization, a technique in which the detective lulls Che suspect into a false sense of security by providing face-saving excuses, citing mitigating circumstances, blaming the victim, and underplaying the charges. 
The second approach is one of maximization, in which the interrogator uses scare tactics by exaggerating or falsifying the characterization of evidence, the seriousness of the offense, and the magnitude of the charges. In a recent study (Kassin & McNall, 1991), subjects read interrogation transcripts in which these ploys were used and estimated the severity of the sentence likely to be received. The results indicated that minimization communicated an implicit offer of leniency, comparable to that estimated in an explicit-promise condition, whereas maximization implied a threat of harsh punishment, comparable to that found in an explicit-threat condition. Yet although American courts routinely exclude confessions elicited by explicit threats and promises, they admit those produced by contingencies that are pragmatically implied. Although police often use coercive methods of interrogation, research suggests that juries are prone to convict defendants who confess in these situations. In the case of Arizona v. Fulminante (1991), the U.S. Supreme Court ruled that under certain conditions, an improperly admitted coerced confession may be considered upon appeal to have been nonprejudicial, or \"harmless error.\" Yet mock-jury research shows that people find it hard to believe that anyone would confess to a crime that he or she did not commit (Kassin & Wrightsman, 1980, 1981; Sukel & Kassin, 1994). Still, it happens. One cannot estimate the prevalence of the problem, which has never been systematically examined, but there are numerous documented instances on record (Bedau & Radelet, 1987; Borchard, 1932; Rattner, 1988). Indeed, one can distinguish three types of false confession (Kassin & Wrightsman, 1985): voluntary (in which a subject confesses in the absence of external pressure), coerced-compliant (in which a suspect confesses only to escape an aversive interrogation, secure a promised benefit, or avoid a threatened harm), and coerced-internalized (in which a suspect actually comes to believe that he or she is guilty of the crime). This last type of false confession seems most unlikely, but a number of recent cases have come to light in which the police had seized a suspect who was vulnerable (by virtue of his or her youth, intelligence, personality, stress, or mental state) and used false evidence to convince the beleaguered suspect that he or she was guilty. In one case that received a great deal of attention, for example, Paul Ingram was charged with rape and a host of Satanic cult crimes that included the slaughter of newborn babies. During 6 months of interrogation, he was hypnotized.",
"title": ""
},
{
"docid": "b670c8908aa2c8281b3164d7726b35d0",
"text": "We present a sketching interface for quickly and easily designing freeform models such as stuffed animals and other rotund objects. The user draws several 2D freeform strokes interactively on the screen and the system automatically constructs plausible 3D polygonal surfaces. Our system supports several modeling operations, including the operation to construct a 3D polygonal surface from a 2D silhouette drawn by the user: it inflates the region surrounded by the silhouette making wide areas fat, and narrow areas thin. Teddy, our prototype system, is implemented as a Java#8482; program, and the mesh construction is done in real-time on a standard PC. Our informal user study showed that a first-time user typically masters the operations within 10 minutes, and can construct interesting 3D models within minutes.",
"title": ""
},
{
"docid": "1bb54da28e139390c2176ae244066575",
"text": "A novel non-parametric, multi-variate quickest detection method is proposed for cognitive radios (CRs) using both energy and cyclostationary features. The proposed approach can be used to track state dynamics of communication channels. This capability can be useful for both dynamic spectrum sharing (DSS) and future CRs, as in practice, centralized channel synchronization is unrealistic and the prior information of the statistics of channel usage is, in general, hard to obtain. The proposed multi-variate non-parametric average sample power and cyclostationarity-based quickest detection scheme is shown to achieve better performance compared to traditional energy-based schemes. We also develop a parallel on-line quickest detection/off-line change-point detection algorithm to achieve self-awareness of detection delays and false alarms for future automation. Compared to traditional energy-based quickest detection schemes, the proposed multi-variate non-parametric quickest detection scheme has comparable computational complexity. The simulated performance shows improvements in terms of small detection delays and significantly higher percentage of spectrum utilization.",
"title": ""
},
{
"docid": "ee8b20f685d4c025e1d113a676728359",
"text": "Two experiments were conducted to evaluate the effects of increasing concentrations of glycerol in concentrate diets on total tract digestibility, methane (CH4) emissions, growth, fatty acid profiles, and carcass traits of lambs. In both experiments, the control diet contained 57% barley grain, 14.5% wheat dried distillers grain with solubles (WDDGS), 13% sunflower hulls, 6.5% beet pulp, 6.3% alfalfa, and 3% mineral-vitamin mix. Increasing concentrations (7, 14, and 21% dietary DM) of glycerol in the dietary DM were replaced for barley grain. As glycerol was added, alfalfa meal and WDDGS were increased to maintain similar concentrations of CP and NDF among diets. In Exp.1, nutrient digestibility and CH4 emissions from 12 ram lambs were measured in a replicated 4 × 4 Latin square experiment. In Exp. 2, lamb performance was evaluated in 60 weaned lambs that were blocked by BW and randomly assigned to 1 of the 4 dietary treatments and fed to slaughter weight. In Exp. 1, nutrient digestibility and CH4 emissions were not altered (P = 0.15) by inclusion of glycerol in the diets. In Exp.2, increasing glycerol in the diet linearly decreased DMI (P < 0.01) and tended (P = 0.06) to reduce ADG, resulting in a linearly decreased final BW. Feed efficiency was not affected by glycerol inclusion in the diets. Carcass traits and total SFA or total MUFA proportions of subcutaneous fat were not affected (P = 0.77) by inclusion of glycerol, but PUFA were linearly decreased (P < 0.01). Proportions of 16:0, 10t-18:1, linoleic acid (18:2 n-6) and the n-6/n-3 ratio were linearly reduced (P < 0.01) and those of 18:0 (stearic acid), 9c-18:1 (oleic acid), linearly increased (P < 0.01) by glycerol. When included up to 21% of diet DM, glycerol did not affect nutrient digestibility or CH4 emissions of lambs fed barley based finishing diets. Glycerol may improve backfat fatty acid profiles by increasing 18:0 and 9c-18:1 and reducing 10t-18:1 and the n-6/n-3 ratio.",
"title": ""
},
{
"docid": "ce6e5532c49b02988588f2ac39724558",
"text": "hlany modern computing environments involve dynamic peer groups. Distributed Simdation, mtiti-user games, conferencing and replicated servers are just a few examples. Given the openness of today’s networks, communication among group members must be secure and, at the same time, efficient. This paper studies the problem of authenticated key agreement. in dynamic peer groups with the emphasis on efficient and provably secure key authentication, key confirmation and integrity. It begins by considering 2-party authenticateed key agreement and extends the restits to Group Dfi*Hehart key agreement. In the process, some new security properties (unique to groups) are discussed.",
"title": ""
},
{
"docid": "46ae0bea85996747e06c1c18bd606340",
"text": "In this paper, two accurate models for interdigital capacitors and shunt inductive stubs in coplanar-waveguide structures are presented and validated over the entire W-band frequency range. Using these models, a novel bandpass filter (BPF) and a miniaturized high-pass filter are designed and fabricated. By inserting interdigital capacitors in BPF resonators, an out-of-band transmission null is introduced, which improves rejection level up to 17 dB over standard designs of similar filters. A high-pass filter is also designed, using semilumped-element models in order to miniaturize the filter structure. It is shown that a fifth-order high-pass filter can be built with a maximum dimension of less than /spl lambda//sub g//3. Great agreement between simulated and measured responses of these filters is demonstrated.",
"title": ""
},
{
"docid": "8c0e5e48c8827a943f4586b8e75f4f9d",
"text": "Predicting the results of football matches poses an interesting challenge due to the fact that the sport is so popular and widespread. However, predicting the outcomes is also a difficult problem because of the number of factors which must be taken into account that cannot be quantitatively valued or modeled. As part of this work, a software solution has been developed in order to try and solve this problem. During the development of the system, a number of tests have been carried out in order to determine the optimal combination of features and classifiers. The results of the presented system show a satisfactory capability of prediction which is superior to the one of the reference method (most likely a priori outcome).",
"title": ""
},
{
"docid": "3a066a35f064d275d64770e28672b277",
"text": "This paper describes how to add first-class generic types---including mixins---to strongly-typed OO languages with nominal subtyping such as Java and C#. A generic type system is \"first-class\" if generic types can appear in any context where conventional types can appear. In this context, a mixin is simply a generic class that extends one of its type parameters, e.g., a class C<T> that extends T. Although mixins of this form are widely used in Cpp (via templates), they are clumsy and error-prone because Cpp treats mixins as macros, forcing each mixin instantiation to be separately compiled and type-checked. The abstraction embodied in a mixin is never separately analyzed.Our formulation of mixins using first-class genericity accommodates sound local (class-by-class) type checking. A mixin can be fully type-checked given symbol tables for each of the classes that it directly references---the same context in which Java performs incremental class compilation. To our knowledge, no previous formal analysis of first-class genericity in languages with nominal type systems has been conducted, which is surprising because nominal subtyping has become predominant in mainstream object-oriented programming languages.What makes our treatment of first-class genericity particularly interesting and important is the fact that it can be added to the existing Java language without any change to the underlying Java Virtual Machine. Moreover, the extension is backward compatible with legacy Java source and class files. Although our discussion of a practical implementation strategy focuses on Java, the same implementation techniques could be applied to other object-oriented languages such as C# or Eiffel that support incremental compilation, dynamic class loading, and nominal subtyping.",
"title": ""
},
{
"docid": "8c7af6b1aa36c5369c7e023dd84dabfd",
"text": "This paper compares various methodologies for the design of Sobel Edge Detection Algorithm on Field Programmable Gate Arrays (FPGAs). We show some characteristics to design a computer vision algorithm to suitable hardware platforms. We evaluate hardware resources and power consumption of Sobel Edge Detection on two studies: Xilinx system generator (XSG) and Vivado_HLS tools which both are very useful tools for developing computer vision algorithms. The comparison the hardware resources and power consumption among FPGA platforms (Zynq-7000 AP SoC, Spartan 3A DSP) are analyzed. The hardware resources by using Vivado_HLS on both platforms are used less 9 times with BRAM_18K, 7 times with DSP48E, 2 times with FFs, and approximately with LUTs comparing with XSG. In addition, the power consumption on Zynq-7000 AP SoC spends more 30% by using Vivado_HLS than by using XSG tool and for Spartan 3A DSP consumes a half of power comparing with by using XSG tool. In the study by using Vivado_HLS shows that power consumption depends on frequency.",
"title": ""
},
{
"docid": "510652008e21c97cb3a75fc921bf6cfc",
"text": "This study aims at extending our understanding regarding the adoption of mobile banking through integrating Technology Acceptance Model (TAM) and Theory of Planned Behavior (TPB). Analyzing survey data from 119 respondents yielded important findings that partially support research hypotheses. The results indicated a significant positive impact of attitude toward mobile banking and subjective norm on mobile banking adoption. Surprisingly, the effects of behavioral control and usefulness on mobile banking adoption were insignificant. Furthermore, the regression results indicated a significant impact of perceived usefulness on attitude toward mobile banking while the effect of perceived ease of use on attitude toward mobile banking was not supported. The paper concludes with a discussion of research results and draws several implications for future research.",
"title": ""
},
{
"docid": "511486e1b6e87efc1aeec646bb5af52b",
"text": "The present study examined the associations between pathological forms of narcissism and responses to scenarios describing private or public negative events. This was accomplished using a randomized twowave experimental design with 600 community participants. The grandiose form of pathological narcissism was associated with increased negative affect and less forgiveness for public offenses, whereas the vulnerable form of pathological narcissism was associated with increased negative affect following private negative events. Concerns about humiliation mediated the association of pathological narcissism with increased negative affect but not the association between grandiose narcissism and lack of forgiveness for public offenses. These findings suggest that pathological narcissism may promote maladaptive responses to negative events that occur in private (vulnerable narcissism) or public (gran-",
"title": ""
},
{
"docid": "55d92c6a46c491a5cc8d727536077c3c",
"text": "Given a collection of objects and an associated similarity measure, the all-pairs similarity search problem asks us to find all pairs of objects with similarity greater than a certain user-specified threshold. Locality-sensitive hashing (LSH) based methods have become a very popular approach for this problem. However, most such methods only use LSH for the first phase of similarity search i.e. efficient indexing for candidate generation. In this paper, we presentBayesLSH, a principled Bayesian algorithm for the subsequent phase of similarity search performing candidate pruning and similarity estimation using LSH. A simpler variant, BayesLSHLite, which calculates similarities exactly, is also presented. BayesLSH is able to quickly prune away a large majority of the false positive candidate pairs, leading to significant speedups over baseline approaches. For BayesLSH, we also provide probabilistic guarantees on the quality of the output, both in terms of accuracy and recall. Finally, the quality of BayesLSH’s output can be easily tuned and does not require any manual setting of the number of hashes to use for similarity estimation, unlike standard approaches. For two state-of-the-art candidate generation algorithms, AllPairs [3] and LSH, BayesLSH enables significant speedups, typically in the range 2x-20x for a wide variety of datasets.",
"title": ""
},
{
"docid": "01c267fbce494fcfabeabd38f18c19a3",
"text": "New insights in the programming physics of silicided polysilicon fuses integrated in 90 nm CMOS have led to a programming time of 100 ns, while achieving a resistance increase of 107. This is an order of magnitude better than any previously published result for the programming time and resistance increase individually. Simple calculations and TEM-analyses substantiate the proposed programming mechanism. The advantage of a rectangular fuse head over a tapered fuse head is shown and explained",
"title": ""
},
{
"docid": "3f5461231e7120be4fbddfd53c533a53",
"text": "OBJECTIVE\nTo develop and validate a general method (called regression risk analysis) to estimate adjusted risk measures from logistic and other nonlinear multiple regression models. We show how to estimate standard errors for these estimates. These measures could supplant various approximations (e.g., adjusted odds ratio [AOR]) that may diverge, especially when outcomes are common.\n\n\nSTUDY DESIGN\nRegression risk analysis estimates were compared with internal standards as well as with Mantel-Haenszel estimates, Poisson and log-binomial regressions, and a widely used (but flawed) equation to calculate adjusted risk ratios (ARR) from AOR.\n\n\nDATA COLLECTION\nData sets produced using Monte Carlo simulations.\n\n\nPRINCIPAL FINDINGS\nRegression risk analysis accurately estimates ARR and differences directly from multiple regression models, even when confounders are continuous, distributions are skewed, outcomes are common, and effect size is large. It is statistically sound and intuitive, and has properties favoring it over other methods in many cases.\n\n\nCONCLUSIONS\nRegression risk analysis should be the new standard for presenting findings from multiple regression analysis of dichotomous outcomes for cross-sectional, cohort, and population-based case-control studies, particularly when outcomes are common or effect size is large.",
"title": ""
},
{
"docid": "92fcc4d21872dca232c624a11eb3988c",
"text": "Most automobile manufacturers maintain many vehicle types to keep a successful position on the market. Through the further development all vehicle types gain a diverse amount of new functionality. Additional features have to be supported by the car’s software. For time efficient accomplishment, usually the existing electronic control unit (ECU) code is extended. In the majority of cases this evolutionary development process is accompanied by a constant decay of the software architecture. This effect known as software erosion leads to an increasing deviation from the requirements specifications. To counteract the erosion it is necessary to continuously restore the architecture in respect of the specification. Automobile manufacturers cope with the erosion of their ECU software with varying degree of success. Successfully we applied a methodical and structured approach of architecture restoration in the specific case of the brake servo unit (BSU). Software product lines from existing BSU variants were extracted by explicit projection of the architecture variability and decomposition of the original architecture. After initial application, this approach was capable to restore the BSU architecture recurrently.",
"title": ""
},
{
"docid": "e14801b902bad321870677c4a723ae2c",
"text": "We propose a framework to incorporate unlabeled data in kernel classifier, based on the idea that two points in the same cluster are more likely to have the same label. This is achieved by modifying the eigenspectrum of the kernel matrix. Experimental results assess the validity of this approach.",
"title": ""
},
{
"docid": "77731bed6cf76970e851f3b2ce467c1b",
"text": "We introduce SparkGalaxy, a big data processing toolkit that is able to encode complex data science experiments as a set of high-level workflows. SparkGalaxy combines the Spark big data processing platform and the Galaxy workflow management system to o↵er a set of tools for graph processing and machine learning using a novel interaction model for creating and using complex workflows. SparkGalaxy contributes an easy-to-use interface and scalable algorithms for data science. We demonstrate SparkGalaxy use in large social network analysis and other case stud-",
"title": ""
},
{
"docid": "67ca2df3c7d660600298e517020fe974",
"text": "The recent trend to design more efficient and versatile ships has increased the variety in hybrid propulsion and power supply architectures. In order to improve performance with these architectures, intelligent control strategies are required, while mostly conventional control strategies are applied currently. First, this paper classifies ship propulsion topologies into mechanical, electrical and hybrid propulsion, and power supply topologies into combustion, electrochemical, stored and hybrid power supply. Then, we review developments in propulsion and power supply systems and their control strategies, to subsequently discuss opportunities and challenges for these systems and the associated control. We conclude that hybrid architectures with advanced control strategies can reduce fuel consumption and emissions up to 10–35%, while improving noise, maintainability, manoeuvrability and comfort. Subsequently, the paper summarises the benefits and drawbacks, and trends in application of propulsion and power supply technologies, and it reviews the applicability and benefits of promising advanced control strategies. Finally, the paper analyses which control strategies can improve performance of hybrid systems for future smart and autonomous ships and concludes that a combination of torque, angle of attack, and Model Predictive Control with dynamic settings could improve performance of future smart and more",
"title": ""
},
{
"docid": "091a37c8e07520154e3305bb79427f76",
"text": "Document classification presents difficult challenges due to the sparsity and the high dimensionality of text data, and to the complex semantics of the natural language. The traditional document representation is a word-based vector (Bag of Words, or BOW), where each dimension is associated with a term of the dictionary containing all the words that appear in the corpus. Although simple and commonly used, this representation has several limitations. It is essential to embed semantic information and conceptual patterns in order to enhance the prediction capabilities of classification algorithms. In this paper, we overcome the shortages of the BOW approach by embedding background knowledge derived from Wikipedia into a semantic kernel, which is then used to enrich the representation of documents. Our empirical evaluation with real data sets demonstrates that our approach successfully achieves improved classification accuracy with respect to the BOW technique, and to other recently developed methods.",
"title": ""
}
] |
scidocsrr
|
c1e1190b69745661acab613b09a58e77
|
The Gridfit algorithm: an efficient and effective approach to visualizing large amounts of spatial data
|
[
{
"docid": "6073601ab6d6e1dbba7a42c346a29436",
"text": "We present a new focus+Context (fisheye) technique for visualizing and manipulating large hierarchies. Our technique assigns more display space to a portion of the hierarchy while still embedding it in the context of the entire hierarchy. The essence of this scheme is to layout the hierarchy in a uniform way on a hyperbolic plane and map this plane onto a circular display region. This supports a smooth blending between focus and context, as well as continuous redirection of the focus. We have developed effective procedures for manipulating the focus using pointer clicks as well as interactive dragging, and for smoothly animating transitions across such manipulation. A laboratory experiment comparing the hyperbolic browser with a conventional hierarchy browser was conducted.",
"title": ""
}
] |
[
{
"docid": "743825cd8bf6df1f77049b827b004616",
"text": "The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.",
"title": ""
},
{
"docid": "20c38b308892442744628905cc5f6bd2",
"text": "In this paper, we report the results for the experiments we carried out to automatically extract \"problem solved concepts\" from a patent document. We introduce two approaches for finding important information in a patent document. The main focus of our work is to devise methods that can efficiently find the problems an invention solves, as this can help in searching for the prior art and can be used as a mechanism for relevance feedback. We have used software and business process patents to carry out our studies.",
"title": ""
},
{
"docid": "98e8a120c393ac669f03f86944c81068",
"text": "In this paper, we investigate deep neural networks for blind motion deblurring. Instead of regressing for the motion blur kernel and performing non-blind deblurring outside of the network (as most methods do), we propose a compact and elegant end-to-end deblurring network. Inspired by the data-driven sparse-coding approaches that are capable of capturing linear dependencies in data, we generalize this notion by embedding non-linearities into the learning process. We propose a new architecture for blind motion deblurring that consists of an autoencoder that learns the data prior, and an adversarial network that attempts to generate and discriminate between clean and blurred features. Once the network is trained, the generator learns a blur-invariant data representation which when fed through the decoder results in the final deblurred output.",
"title": ""
},
{
"docid": "3bc998aa2dd0a531cf2c449b7fe66996",
"text": "Peer-to-peer and other decentralized,distributed systems are known to be particularly vulnerable to sybil attacks. In a sybil attack,a malicious user obtains multiple fake identities and pretends to be multiple, distinct nodes in the system. By controlling a large fraction of the nodes in the system,the malicious user is able to \"out vote\" the honest users in collaborative tasks such as Byzantine failure defenses. This paper presents SybilGuard, a novel protocol for limiting the corruptive influences of sybil attacks.Our protocol is based on the \"social network \"among user identities, where an edge between two identities indicates a human-established trust relationship. Malicious users can create many identities but few trust relationships. Thus, there is a disproportionately-small \"cut\" in the graph between the sybil nodes and the honest nodes. SybilGuard exploits this property to bound the number of identities a malicious user can create.We show the effectiveness of SybilGuard both analytically and experimentally.",
"title": ""
},
{
"docid": "184c15d2c68ae91c372e74a6aec29582",
"text": "BACKGROUND\nSkilled attendance at childbirth is crucial for decreasing maternal and neonatal mortality, yet many women in low- and middle-income countries deliver outside of health facilities, without skilled help. The main conceptual framework in this field implicitly looks at home births with complications. We expand this to include \"preventive\" facility delivery for uncomplicated childbirth, and review the kinds of determinants studied in the literature, their hypothesized mechanisms of action and the typical findings, as well as methodological difficulties encountered.\n\n\nMETHODS\nWe searched PubMed and Ovid databases for reviews and ascertained relevant articles from these and other sources. Twenty determinants identified were grouped under four themes: (1) sociocultural factors, (2) perceived benefit/need of skilled attendance, (3) economic accessibility and (4) physical accessibility.\n\n\nRESULTS\nThere is ample evidence that higher maternal age, education and household wealth and lower parity increase use, as does urban residence. Facility use in the previous delivery and antenatal care use are also highly predictive of health facility use for the index delivery, though this may be due to confounding by service availability and other factors. Obstetric complications also increase use but are rarely studied. Quality of care is judged to be essential in qualitative studies but is not easily measured in surveys, or without linking facility records with women. Distance to health facilities decreases use, but is also difficult to determine. Challenges in comparing results between studies include differences in methods, context-specificity and the substantial overlap between complex variables.\n\n\nCONCLUSION\nStudies of the determinants of skilled attendance concentrate on sociocultural and economic accessibility variables and neglect variables of perceived benefit/need and physical accessibility. To draw valid conclusions, it is important to consider as many influential factors as possible in any analysis of delivery service use. The increasing availability of georeferenced data provides the opportunity to link health facility data with large-scale household data, enabling researchers to explore the influences of distance and service quality.",
"title": ""
},
{
"docid": "bf57a5fcf6db7a9b26090bd9a4b65784",
"text": "Plate osteosynthesis is still recognized as the treatment of choice for most articular fractures, many metaphyseal fractures, and certain diaphyseal fractures such as in the forearm. Since the 1960s, both the techniques and implants used for internal fixation with plates have evolved to provide for improved healing. Most recently, plating methods have focused on the principles of 'biological fixation'. These methods attempt to preserve the blood supply to improve the rate of fracture healing, decrease the need for bone grafting, and decrease the incidence of infection and re-fracture. The purpose of this article is to provide a brief overview of the history of plate osteosynthesis as it relates to the development of the latest minimally invasive surgical techniques.",
"title": ""
},
{
"docid": "337356e428bbe1c0275a87cd0290de82",
"text": "Finding a parking space in San Francisco City Area is really a headache issue. We try to find a reliable way to give parking information by prediction. We reveals the effect of aggregation on prediction for parking occupancy in San Francisco. Different empirical aggregation levels are tested with several prediction models. Moreover it proposes a sufficient condition leading to prediction error decreasing. Due to the aggregation effect, we would like to explore patterns inside parking. Thus daily occupancy profiles are also investigated to understand travelers behavior in the city.",
"title": ""
},
{
"docid": "ab66d7e267072432d1015e36260c9866",
"text": "Deep Neural Networks (DNNs) are the current state of the art for various tasks such as object detection, natural language processing and semantic segmentation. These networks are massively parallel, hierarchical models with each level of hierarchy performing millions of operations on a single input. The enormous amount of parallel computation makes these DNNs suitable for custom acceleration. Custom accelerators can provide real time inference of DNNs at low power thus enabling widespread embedded deployment. In this paper, we present Snowflake, a high efficiency, low power accelerator for DNNs. Snowflake was designed to achieve optimum occupancy at low bandwidths and it is agnostic to the network architecture. Snowflake was implemented on the Xilinx Zynq XC7Z045 APSoC and achieves a peak performance of 128 G-ops/s. Snowflake is able to maintain a throughput of 98 FPS on AlexNet while averaging 1.2 GB/s of memory bandwidth.",
"title": ""
},
{
"docid": "a7ca3ffcae09ad267281eb494532dc54",
"text": "A substrate integrated metamaterial-based leaky-wave antenna is proposed to improve its boresight radiation bandwidth. The proposed leaky-wave antenna based on a composite right/left-handed substrate integrated waveguide consists of two leaky-wave radiator elements which are with different unit cells. The dual-element antenna prototype features boresight gain of 12.0 dBi with variation of 1.0 dB over the frequency range of 8.775-9.15 GHz or 4.2%. In addition, the antenna is able to offer a beam scanning from to with frequency from 8.25 GHz to 13.0 GHz.",
"title": ""
},
{
"docid": "ae151d8ed9b8f99cfe22e593f381dd3b",
"text": "A common assumption in studies of interruptions is that one is focused in an activity and then distracted by other stimuli. We take the reverse perspective and examine whether one might first be in an attentional state that makes one susceptible to communications typically associated with distraction. We explore the confluence of multitasking and workplace communications from three temporal perspectives -- prior to an interaction, when tasks and communications are interleaved, and at the end of the day. Using logging techniques and experience sampling, we observed 32 employees in situ for five days. We found that certain attentional states lead people to be more susceptible to particular types of interaction. Rote work is followed by more Facebook or face-to-face interaction. Focused and aroused states are followed by more email. The more time in email and face-fo-face interaction, and the more total screen switches, the less productive people feel at the day's end. We present the notion of emotional homeostasis along with new directions for multitasking research.",
"title": ""
},
{
"docid": "6eb2c0e22ecc0816cb5f83292902d799",
"text": "In this paper, we demonstrate that Android malware can bypass all automated analysis systems, including AV solutions, mobile sandboxes, and the Google Bouncer. We propose a tool called Sand-Finger for the fingerprinting of Android-based analysis systems. By analyzing the fingerprints of ten unique analysis environments from different vendors, we were able to find characteristics in which all tested environments differ from actual hardware. Depending on the availability of an analysis system, malware can either behave benignly or load malicious code at runtime. We classify this group of malware as Divide-and-Conquer attacks that are efficiently obfuscated by a combination of fingerprinting and dynamic code loading. In this group, we aggregate attacks that work against dynamic as well as static analysis. To demonstrate our approach, we create proof-of-concept malware that surpasses up-to-date malware scanners for Android. We also prove that known malware samples can enter the Google Play Store by modifying them only slightly. Due to Android's lack of an API for malware scanning at runtime, it is impossible for AV solutions to secure Android devices against these attacks.",
"title": ""
},
{
"docid": "e1ab544e1a00cc6b2f7797f65e084378",
"text": "This research investigates how to introduce synchronous interactive peer learning into an online setting appropriate both for crowdworkers (learning new tasks) and students in massive online courses (learning course material). We present an interaction framework in which groups of learners are formed on demand and then proceed through a sequence of activities that include synchronous group discussion about learner-generated responses. Via controlled experiments with crowdworkers, we show that discussing challenging problems leads to better outcomes than working individually, and incentivizing people to help one another yields still better results. We then show that providing a mini-lesson in which workers consider the principles underlying the tested concept and justify their answers leads to further improvements. Combining the mini-lesson with the discussion of the multiple-choice question leads to significant improvements on that question. We also find positive subjective responses to the peer interactions, suggesting that discussions can improve morale in remote work or learning settings.",
"title": ""
},
{
"docid": "9961f44d4ab7d0a344811186c9234f2c",
"text": "This paper discusses the trust related issues and arguments (evidence) Internet stores need to provide in order to increase consumer trust. Based on a model of trust from academic literature, in addition to a model of the customer service life cycle, the paper develops a framework that identifies key trust-related issues and organizes them into four categories: personal information, product quality and price, customer service, and store presence. It is further validated by comparing the issues it raises to issues identified in a review of academic studies, and to issues of concern identified in two consumer surveys. The framework is also applied to ten well-known web sites to demonstrate its applicability. The proposed framework will benefit both practitioners and researchers by identifying important issues regarding trust, which need to be accounted for in Internet stores. For practitioners, it provides a guide to the issues Internet stores need to address in their use of arguments. For researchers, it can be used as a foundation for future empirical studies investigating the effects of trust-related arguments on consumers’ trust in Internet stores.",
"title": ""
},
{
"docid": "0fb41d794c68c513f81d5396d3f05bf4",
"text": "Previous work on question-answering systems has mainly focused on answering individual questions, assuming they are independent and devoid of context. Instead, we investigate sequential question answering, in which multiple related questions are asked sequentially. We introduce a new dataset of fully humanauthored questions. We extend existing strong question answering frameworks to include information about previous asked questions to improve the overall question-answering accuracy in open-domain question answering. The dataset is publicly available at http:// sequential.qanta.org.",
"title": ""
},
{
"docid": "f4d9190ad9123ddcf809f47c71225162",
"text": "Please cite this article in press as: Tseng, M Industrial Engineering (2009), doi:10.1016/ Selection of appropriate suppliers in supply chain management strategy (SCMS) is a challenging issue because it requires battery of evaluation criteria/attributes, which are characterized with complexity, elusiveness, and uncertainty in nature. This paper proposes a novel hierarchical evaluation framework to assist the expert group to select the optimal supplier in SCMS. The rationales for the evaluation framework are based upon (i) multi-criteria decision making (MCDM) analysis that can select the most appropriate alternative from a finite set of alternatives with reference to multiple conflicting criteria, (ii) analytic network process (ANP) technique that can simultaneously take into account the relationships of feedback and dependence of criteria, and (iii) choquet integral—a non-additive fuzzy integral that can eliminate the interactivity of expert subjective judgment problems. A case PCB manufacturing firm is studied and the results indicated that the proposed evaluation framework is simple and reasonable to identify the primary criteria influencing the SCMS, and it is effective to determine the optimal supplier even with the interactive and interdependent criteria/attributes. This hierarchical evaluation framework provides a complete picture in SCMS contexts to both researchers and practitioners. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "52fc069497d79f97e3470f6a9f322151",
"text": "We show how eye-tracking corpora can be used to improve sentence compression models, presenting a novel multi-task learning algorithm based on multi-layer LSTMs. We obtain performance competitive with or better than state-of-the-art approaches.",
"title": ""
},
{
"docid": "65bc99201599ec17347d3fe0857cd39a",
"text": "Many children strive to attain excellence in sport. However, although talent identification and development programmes have gained popularity in recent decades, there remains a lack of consensus in relation to how talent should be defined or identified and there is no uniformly accepted theoretical framework to guide current practice. The success rates of talent identification and development programmes have rarely been assessed and the validity of the models applied remains highly debated. This article provides an overview of current knowledge in this area with special focus on problems associated with the identification of gifted adolescents. There is a growing agreement that traditional cross-sectional talent identification models are likely to exclude many, especially late maturing, 'promising' children from development programmes due to the dynamic and multidimensional nature of sport talent. A conceptual framework that acknowledges both genetic and environmental influences and considers the dynamic and multidimensional nature of sport talent is presented. The relevance of this model is highlighted and recommendations for future work provided. It is advocated that talent identification and development programmes should be dynamic and interconnected taking into consideration maturity status and the potential to develop rather than to exclude children at an early age. Finally, more representative real-world tasks should be developed and employed in a multidimensional design to increase the efficacy of talent identification and development programmes.",
"title": ""
},
{
"docid": "b00ec93bf47aab14aa8ced69612fc39a",
"text": "In today’s increasingly rich material life, people are shifting their focus from the physical world to the spiritual world. In order to identify and care for people’s emotions, human-machine interaction systems have been created. The currently available human-machine interaction systems often support the interaction between human and robot under the line-of-sight (LOS) propagation environment, while most communications in terms of human-to-human and human-to-machine are non-LOS (NLOS). In order to break the limitation of the traditional human–machine interaction system, we propose the emotion communication system based on NLOS mode. Specifically, we first define the emotion as a kind of multimedia which is similar to voice and video. The information of emotion can not only be recognized, but can also be transmitted over a long distance. Then, considering the real-time requirement of the communications between the involved parties, we propose an emotion communication protocol, which provides a reliable support for the realization of emotion communications. We design a pillow robot speech emotion communication system, where the pillow robot acts as a medium for user emotion mapping. Finally, we analyze the real-time performance of the whole communication process in the scene of a long distance communication between a mother-child users’ pair, to evaluate the feasibility and effectiveness of emotion communications.",
"title": ""
},
{
"docid": "509fe613e25c9633df2520e4c3a62b74",
"text": "This study, in an attempt to rise above the intricacy of 'being informed on the verge of globalization,' is founded on the premise that Machine Translation (MT) applications searching for an ideal key to find a universal foundation for all natural languages have a restricted say over the translation process at various discourse levels. Our paper favors not judging against the superiority of human translation vs. machine translation or automated translation in non-English speaking settings, but rather referring to the inadequacies and adequacies of MT at certain pragmatic levels, lacking the right sense and dynamic equivalence, but producing syntactically well-formed or meaning-extractable outputs in restricted settings. Reasoning in this way, the present study supports MT before, during, and after translation. It aims at making translators understand that they could cooperate with the software to obtain a synergistic effect. In other words, they could have a say and have an essential part to play in a semi-automated translation process (Rodrigo, 2001). In this respect, semi-automated translation or MT courses should be included in the curricula of translation departments worldwide to keep track of the state of the art as well as make potential translators aware of future trends.",
"title": ""
}
] |
scidocsrr
|
5309ba756583c03f3b0442c4e5836714
|
Learning, Attentional Control, and Action Video Games
|
[
{
"docid": "b1151d3588dc4abff883bef8c60005d1",
"text": "Here, we demonstrate that action video game play enhances subjects' ability in two tasks thought to indicate the number of items that can be apprehended. Using an enumeration task, in which participants have to determine the number of quickly flashed squares, accuracy measures showed a near ceiling performance for low numerosities and a sharp drop in performance once a critical number of squares was reached. Importantly, this critical number was higher by about two items in video game players (VGPs) than in non-video game players (NVGPs). A following control study indicated that this improvement was not due to an enhanced ability to instantly apprehend the numerosity of the display, a process known as subitizing, but rather due to an enhancement in the slower more serial process of counting. To confirm that video game play facilitates the processing of multiple objects at once, we compared VGPs and NVGPs on the multiple object tracking task (MOT), which requires the allocation of attention to several items over time. VGPs were able to successfully track approximately two more items than NVGPs. Furthermore, NVGPs trained on an action video game established the causal effect of game playing in the enhanced performance on the two tasks. Together, these studies confirm the view that playing action video games enhances the number of objects that can be apprehended and suggest that this enhancement is mediated by changes in visual short-term memory skills.",
"title": ""
},
{
"docid": "48d1f79cd3b887cced3d3a2913a25db3",
"text": "Children's use of electronic media, including Internet and video gaming, has increased dramatically to an average in the general population of roughly 3 h per day. Some children cannot control their Internet use leading to increasing research on \"internet addiction.\" The objective of this article is to review the research on ADHD as a risk factor for Internet addiction and gaming, its complications, and what research and methodological questions remain to be addressed. The literature search was done in PubMed and Psychinfo, as well as by hand. Previous research has demonstrated rates of Internet addiction as high as 25% in the population and that it is addiction more than time of use that is best correlated with psychopathology. Various studies confirm that psychiatric disorders, and ADHD in particular, are associated with overuse, with severity of ADHD specifically correlated with the amount of use. ADHD children may be vulnerable since these games operate in brief segments that are not attention demanding. In addition, they offer immediate rewards with a strong incentive to increase the reward by trying the next level. The time spent on these games may also exacerbate ADHD symptoms, if not directly then through the loss of time spent on more developmentally challenging tasks. While this is a major issue for many parents, there is no empirical research on effective treatment. Internet and off-line gaming overuse and addiction are serious concerns for ADHD youth. Research is limited by the lack of measures for youth or parents, studies of children at risk, and studies of impact and treatment.",
"title": ""
},
{
"docid": "040e5e800895e4c6f10434af973bec0f",
"text": "The authors investigated the effect of action gaming on the spatial distribution of attention. The authors used the flanker compatibility effect to separately assess center and peripheral attentional resources in gamers versus nongamers. Gamers exhibited an enhancement in attentional resources compared with nongamers, not only in the periphery but also in central vision. The authors then used a target localization task to unambiguously establish that gaming enhances the spatial distribution of visual attention over a wide field of view. Gamers were more accurate than nongamers at all eccentricities tested, and the advantage held even when a concurrent center task was added, ruling out a trade-off between central and peripheral attention. By establishing the causal role of gaming through training studies, the authors demonstrate that action gaming enhances visuospatial attention throughout the visual field.",
"title": ""
}
] |
[
{
"docid": "fc935bf600e49db18c0a89f0945bac59",
"text": "Psychological positive health and health complaints have long been ignored scientifically. Sleep plays a critical role in children and adolescents development. We aimed at studying the association of sleep duration and quality with psychological positive health and health complaints in children and adolescents from southern Spain. A randomly selected two-phase sample of 380 healthy Caucasian children (6–11.9 years) and 304 adolescents (12–17.9 years) participated in the study. Sleep duration (total sleep time), perceived sleep quality (morning tiredness and sleep latency), psychological positive health and health complaints were assessed using the Health Behaviour in School-aged Children questionnaire. The mean (standard deviation [SD]) reported sleep time for children and adolescents was 9.6 (0.6) and 8.8 (0.6) h/day, respectively. Sleep time ≥10 h was significantly associated with an increased likelihood of reporting no health complaints (OR 2.3; P = 0.005) in children, whereas sleep time ≥9 h was significantly associated with an increased likelihood of overall psychological positive health and no health complaints indicators (OR ~ 2; all P < 0.05) in adolescents. Reporting better sleep quality was associated with an increased likelihood of reporting excellent psychological positive health (ORs between 1.5 and 2.6; all P < 0.05). Furthermore, children and adolescents with no difficulty falling asleep were more likely to report no health complaints (OR ~ 3.5; all P < 0.001). Insufficient sleep duration and poor perceived quality of sleep might directly impact quality of life in children, decreasing general levels of psychological positive health and increasing the frequency of having health complaints.",
"title": ""
},
{
"docid": "40e06996a22e1de4220a09e65ac1a04d",
"text": "Obtaining a compact and discriminative representation of facial and body expressions is a difficult problem in emotion recognition. Part of the difficulty is capturing microexpressions, i.e., short, involuntary expressions that last for only a fraction of a second: at a micro-temporal scale, there are so many other subtle face and body movements that do not convey semantically meaningful information. We present a novel approach to this problem by exploiting the sparsity of the frequent micro-temporal motion patterns. Local space-time features are extracted over the face and body region for a very short time period, e.g., few milliseconds. A codebook of microexpressions is learned from the data and used to encode the features in a sparse manner. This allows us to obtain a representation that captures the most salient motion patterns of the face and body at a micro-temporal scale. Experiments performed on the AVEC 2012 dataset show our approach achieving the best published performance on the arousal dimension based solely on visual features. We also report experimental results on audio-visual emotion recognition, comparing early and late data fusion techniques.",
"title": ""
},
{
"docid": "9d7c8b52e6ca73d31f1e71f8e77023c3",
"text": "NMDA receptors mediate excitatory synaptic transmission and regulate synaptic plasticity in the central nervous system, but their dysregulation is also implicated in numerous brain disorders. Here, we describe GluN2A-selective negative allosteric modulators (NAMs) that inhibit NMDA receptors by stabilizing the apo state of the GluN1 ligand-binding domain (LBD), which is incapable of triggering channel gating. We describe structural determinants of NAM binding in crystal structures of the GluN1/2A LBD heterodimer, and analyses of NAM-bound LBD structures corresponding to active and inhibited receptor states reveal a molecular switch in the modulatory binding site that mediate the allosteric inhibition. NAM binding causes displacement of a valine in GluN2A and the resulting steric effects can be mitigated by the transition from glycine bound to apo state of the GluN1 LBD. This work provides mechanistic insight to allosteric NMDA receptor inhibition, thereby facilitating the development of novel classes NMDA receptor modulators as therapeutic agents.",
"title": ""
},
{
"docid": "27a20bc4614e9ff012813a71b37ee168",
"text": "Pushover analysis was performed on a nineteen story, slender concrete tower building located in San Francisco with a gross area of 430,000 square feet. Lateral system of the building consists of concrete shear walls. The building is newly designed conforming to 1997 Uniform Building Code, and pushover analysis was performed to verify code's underlying intent of Life Safety performance under design earthquake. Procedure followed for carrying out the analysis and results are presented in this paper.",
"title": ""
},
{
"docid": "6e98c2362b504d9f4ab590d4acdc8b8f",
"text": "App marketplaces are distribution platforms for mobile applications that serve as a communication channel between users and developers. These platforms allow users to write reviews about downloaded apps. Recent studies found that such reviews include information that is useful for software evolution. However, the manual analysis of a large amount of user reviews is a tedious and time consuming task. In this work we propose a taxonomy for classifying app reviews into categories relevant for software evolution. Additionally, we describe an experiment that investigates the performance of individual machine learning algorithms and its ensembles for automatically classifying the app reviews. We evaluated the performance of the machine learning techniques on 4550 reviews that were systematically labeled using content analysis methods. Overall, the ensembles had a better performance than the individual classifiers, with an average precision of 0.74 and 0.59 recall.",
"title": ""
},
{
"docid": "df889d8492c4edfd86bbd7937d4695d1",
"text": "We live in a world where there are countless interactions with computer systems in every-day situations. In the most ideal case, this interaction feels as familiar and as natural as the communication we experience with other humans. To this end, an ideal means of communication between a user and a computer system consists of audiovisual speech signals. Audiovisual text-to-speech technology allows the computer system to utter any spoken message towards its users. Over the last decades, a wide range of techniques for performing audiovisual speech synthesis has been developed. This paper gives a comprehensive overview on these approaches using a categorization of the systems based on multiple important aspects that determine the properties of the synthesized speech signals. The paper makes a clear distinction between the techniques that are used to model the virtual speaker and the techniques that are used to generate the appropriate speech gestures. In addition, the paper discusses the evaluation of audiovisual speech synthesizers, it elaborates on the hardware requirements for performing visual speech synthesis and it describes some important future directions that should stimulate the use of audiovisual speech synthesis technology in real-life applications.",
"title": ""
},
{
"docid": "5fd33c0b5b305c9011760f91c75297ca",
"text": "This paper analyzes the root causes of zero-rate output (ZRO) in microelectromechanical system (MEMS) vibratory gyroscopes. ZRO is one of the major challenges for high-performance gyroscopes. The knowledge of its causes is important to minimize ZRO and achieve a robust sensor design. In this paper, a new method to describe an MEMS gyroscope with a parametric state space model is introduced. The model is used to theoretically describe the behavioral influences. A new, more detailed and general gyroscope approximation is used to vary influence parameters, and to verify the method with simulations. The focus is on varying stiffness terms and an extension of the model to other gyroscope approximations is also discussed.",
"title": ""
},
{
"docid": "15e4cfb84801e86211709a8d24979eaa",
"text": "The English Lexicon Project is a multiuniversity effort to provide a standardized behavioral and descriptive data set for 40,481 words and 40,481 nonwords. It is available via the Internet at elexicon.wustl.edu. Data from 816 participants across six universities were collected in a lexical decision task (approximately 3400 responses per participant), and data from 444 participants were collected in a speeded naming task (approximately 2500 responses per participant). The present paper describes the motivation for this project, the methods used to collect the data, and the search engine that affords access to the behavioral measures and descriptive lexical statistics for these stimuli.",
"title": ""
},
{
"docid": "7b5b9990bfef9d2baf28030123359923",
"text": "a r t i c l e i n f o a b s t r a c t This review takes an evolutionary and chronological perspective on the development of strategic human resource management (SHRM) literature. We divide this body of work into seven themes that reflect the directions and trends researchers have taken over approximately thirty years of research. During this time the field took shape, developed rich conceptual foundations, and matured into a domain that has substantial influence on research activities in HR and related management disciplines. We trace how the field has evolved to its current state, articulate many of the major findings and contributions, and discuss how we believe it will evolve in the future. This approach contributes to the field of SHRM by synthesizing work in this domain and by highlighting areas of research focus that have received perhaps enough attention, as well as areas of research focus that, while promising, have remained largely unexamined. 1. Introduction Boxall, Purcell, and Wright (2007) distinguish among three major subfields of human resource management (HRM): micro HRM (MHRM), strategic HRM (SHRM), and international HRM (IHRM). Micro HRM covers the subfunctions of HR policy and practice and consists of two main categories: one with managing individuals and small groups (e.g., recruitment, selection, induction, training and development, performance management, and remuneration) and the other with managing work organization and employee voice systems (including union-management relations). Strategic HRM covers the overall HR strategies adopted by business units and companies and tries to measure their impacts on performance. Within this domain both design and execution issues are examined. International HRM covers HRM in companies operating across national boundaries. Since strategic HRM often covers the international context, we will include those international HRM articles that have a strategic focus. While most of the academic literature on SHRM has been published in the last 30 years, the intellectual roots of the field can be traced back to the 1920s in the U.S. (Kaufman, 2001). The concept of labor as a human resource and the strategic view of HRM policy and practice were described and discussed by labor economists and industrial relations scholars of that period, such as John Commons. Progressive companies in the 1920s intentionally formulated and adopted innovative HR practices that represented a strategic approach to the management of labor. A small, but visibly elite group of employers in this time period …",
"title": ""
},
{
"docid": "795d4e73b3236a2b968609c39ce8f417",
"text": "In this paper, we are introducing an intelligent valet parking management system that guides the cars to autonomously park within a parking lot. The IPLMS for Intelligent Parking Lot Management System, consists of two modules: 1) a model car with a set of micro-controllers and sensors which can scan the environment for suitable parking spot and avoid collision to obstacles, and a Parking Lot Management System (IPLMS) which screens the parking spaces within the parking lot and offers guidelines to the car. The model car has the capability to autonomously maneuver within the parking lot using a fuzzy logic algorithm, and execute parking in the spot determined by the IPLMS, using a parking algorithm. The car receives the instructions from the IPLMS through a wireless communication link. The IPLMS has the flexibility to be adopted by any parking management system, and can potentially save the clients time to look for a parking spot, and/or to stroll from an inaccessible parking space. Moreover, the IPLMS can decrease the financial burden from the parking lot management by offering an easy-to-install system for self-guided valet parking.",
"title": ""
},
{
"docid": "7c6a40af29c1bd8af4b9031ef95a92cf",
"text": "A broadband radial waveguide power amplifier has been designed and fabricated using a spatial power dividing/combining technique. A simple electromagnetic model of this power-dividing/combining structure has been developed. Analysis based on equivalent circuits gives the design formula for perfect power-dividing/ combining circuits. The measured small-signal gain of the eight-device power amplifier is 12 –16.5 dB over a broadband from 7 to 15 GHz. The measured maximum output power at 1-dB compression is 28.6 dBm at 10 GHz, with a power-combining efficiency of about 91%. Furthermore, the performance degradation of this power amplifier because of device failures has also been measured.",
"title": ""
},
{
"docid": "0799b728d04cb7c01b9b527a627962a9",
"text": "This paper presents a design of Two Stage CMOS operational amplifier, which operates at ±2.5V power supply using umc 2μm CMOS technology. The OP-AMP designed is a two-stage CMOS OP-AMP. The OP-AMP is designed to exhibit a unity gain frequency of 4.416MHz and exhibits a gain of 96dB with a 700 phase margin. Design and Simulation has been carried out in LT Spice tools. Keywords— 2 stage CMOS op-amp, design, simulation and results.",
"title": ""
},
{
"docid": "229395d5aa7d0073ee27c4643d668b3d",
"text": "with input from many other team members 6/1/2007 \"DISCLAIMER: The information contained in this paper does not represent the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency (DARPA) or the Department of Defense. DARPA does not guarantee the accuracy or reliability of the information in this paper.\" Austin Robot Technology's (ART's) entry in the DARPA Urban Challenge has two main goals. First and foremost, the team aims to create a fully autonomous vehicle that is capable of safely and robustly meeting all of the criteria laid out in the Technical Evaluation Criteria document [1]. Second, and almost as important, the team aims to produce, educate, and train members of the next generation of computer science and robotics researchers. This technical report documents our significant progress towards both of these goals as of May 2007 and presents a concrete plan to achieve them both fully by the time of the National Qualifying Event (NQE) in October. Specifically, it presents details of both our complete hardware system and our in-progress software, including design rationale, preliminary results, and future plans towards meeting the challenge. In addition, it provides details of the significant undergraduate research component of our efforts and emphasizes the educational value of the project.",
"title": ""
},
{
"docid": "072b36d53de6a1a1419b97a1503f8ecd",
"text": "In classical control of brushless dc (BLDC) motors, flux distribution is assumed trapezoidal and fed current is controlled rectangular to obtain a desired constant torque. However, in reality, this assumption may not always be correct, due to nonuniformity of magnetic material and design trade-offs. These factors, together with current controller limitation, can lead to an undesirable torque ripple. This paper proposes a new torque control method to attenuate torque ripple of BLDC motors with un-ideal back electromotive force (EMF) waveforms. In this method, the action time of pulses, which are used to control the corresponding switches, are calculated in the torque controller regarding actual back EMF waveforms in both normal conduction period and commutation period. Moreover, the influence of finite dc bus supply voltage is considered in the commutation period. Simulation and experimental results are shown that, compared with conventional rectangular current control, the proposed torque control method results in apparent reduction of the torque ripple.",
"title": ""
},
{
"docid": "29f1e1c9c1601ba194ddcf18de804101",
"text": "In this paper, we introduce Waveprint, a novel method for audio identification. Waveprint uses a combination of computer-vision techniques and large-scale-data-stream processing algorithms to create compact fingerprints of audio data that can be efficiently matched. The resulting system has excellent identification capabilities for small snippets of audio that have been degraded in a variety of manners, including competing noise, poor recording quality, and cell-phone playback. We explicitly measure the tradeoffs between performance, memory usage, and computation through extensive experimentation.",
"title": ""
},
{
"docid": "907fe4b941bc70cddf39bc76a522205f",
"text": "We introduce a flexible combination of volume, surface, and line rendering. We employ object-based edge detection because this allows a flexible parametrization of the generated lines. Our techniques were developed mainly for medical applications using segmented patient-individual volume datasets. In addition, we present an evaluation of the generated visualizations with 8 medical professionals and 25 laypersons. Integration of lines in conventional rendering turned out to be appropriate.",
"title": ""
},
{
"docid": "b33e896a23f27a81f04aaeaff2f2350c",
"text": "Nowadays it has become increasingly common for family members to be distributed in different time zones. These time differences pose specific challenges for communication within the family and result in different communication practices to cope with them. To gain an understanding of current challenges and practices, we interviewed people who regularly communicate with immediate family members living in other time zones. We report primary findings from the interviews, and identify design opportunities for improving the experience of cross time zone family communication.",
"title": ""
},
{
"docid": "bc4ce8c0dce6515d1432a6baecef4614",
"text": "The lsemantica command, presented in this paper, implements Latent Semantic Analysis in Stata. Latent Semantic Analysis is a machine learning algorithm for word and text similarity comparison. Latent Semantic Analysis uses Truncated Singular Value Decomposition to derive the hidden semantic relationships between words and texts. lsemantica provides a simple command for Latent Semantic Analysis in Stata as well as complementary commands for text similarity comparison.",
"title": ""
},
{
"docid": "9b9a2a9695f90a6a9a0d800192dd76f6",
"text": "Due to high competition in today's business and the need for satisfactory communication with customers, companies understand the inevitable necessity to focus not only on preventing customer churn but also on predicting their needs and providing the best services for them. The purpose of this article is to predict future services needed by wireless users, with data mining techniques. For this purpose, the database of customers of an ISP in Shiraz, which logs the customer usage of wireless internet connections, is utilized. Since internet service has three main factors to define (Time, Speed, Traffics) we predict each separately. First, future service demand is predicted by implementing a simple Recency, Frequency, Monetary (RFM) as a basic model. Other factors such as duration from first use, slope of customer's usage curve, percentage of activation, Bytes In, Bytes Out and the number of retries to establish a connection and also customer lifetime value are considered and added to RFM model. Then each one of R, F, M criteria is alternately omitted and the result is evaluated. Assessment is done through analysis node which determines the accuracy of evaluated data among partitioned data. The result shows that CART and C5.0 are the best algorithms to predict future services in this case. As for the features, depending upon output of each features, duration and transfer Bytes are the most important after RFM. An ISP may use the model discussed in this article to meet customers' demands and ensure their loyalty and satisfaction.",
"title": ""
},
{
"docid": "b40129a15767189a7a595db89c066cf8",
"text": "To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.",
"title": ""
}
] |
scidocsrr
|
7b317af9879b1d60d43eab91c2c9d5e0
|
Active Tactile Transfer Learning for Object Discrimination in an Unstructured Environment Using Multimodal Robotic Skin
|
[
{
"docid": "217e76cc7d8a7d680b40d5c658460513",
"text": "The reinforcement learning paradigm is a popular way to addr ess problems that have only limited environmental feedback, rather than correctly labeled exa mples, as is common in other machine learning contexts. While significant progress has been made t o improve learning in a single task, the idea oftransfer learninghas only recently been applied to reinforcement learning ta sks. The core idea of transfer is that experience gained in learning t o perform one task can help improve learning performance in a related, but different, task. In t his article we present a framework that classifies transfer learning methods in terms of their capab ilities and goals, and then use it to survey the existing literature, as well as to suggest future direct ions for transfer learning work.",
"title": ""
},
{
"docid": "418a5ef9f06f8ba38e63536671d605c1",
"text": "Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advantage of knowledge coming from previously learned categories, no matter how different these categories might be. We explore a Bayesian implementation of this idea. Object categories are represented by probabilistic models. Prior knowledge is represented as a probability density function on the parameters of these models. The posterior model for an object category is obtained by updating the prior in the light of one or more observations. We test a simple implementation of our algorithm on a database of 101 diverse object categories. We compare category models learned by an implementation of our Bayesian approach to models learned from by maximum likelihood (ML) and maximum a posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.",
"title": ""
}
] |
[
{
"docid": "1c640d17baacffeedf1639457206a05b",
"text": "A low-power inductorless 1:4 DEMUX in 90 nm CMOS is presented. It is capable of operating at 25 Gb/s with a power supply voltage of 1.05V, and the power consumption is 8.9mW. Its area is 29 × 40μm2. The DEMUX consists primarily of latches in differential pseudo-NMOS logic style. This logic style has a near-rail-to-rail logic swing and is much more scalable under low Vdd than the current-mode logic, commonly used for high-performance SerDes. It is also compatible with the conventional CMOS logic, and direct connection with CMOS logic gates is possible without logic level conversion. The low power, small footprint, and reasonable speed could be beneficial for chip-to-chip communication within a multi-chip module.",
"title": ""
},
{
"docid": "7d6774ec9ad7a9926f2dcad3f8034ac8",
"text": "A wearable integrated health-monitoring system is presented in this paper. The system is based on a multisensor fusion approach. It consists of a chest-worn device that embeds a controller board, an electrocardiogram (ECG) sensor, a temperature sensor, an accelerometer, a vibration motor, a colour- changing light-emitting diode (LED) and a push-button. This multi-sensor device allows for performing biometric and medical monitoring applications. Distinctive haptic feedback patterns can be actuated by means of the embedded vibration motor according to the user's health state. The embedded colour-changing LED is employed to provide the wearer with an additional intuitive visual feedback of the current health state. The push-button provided can be pushed by the user to report a potential emergency condition. The collected biometric information can be used to monitor the health state of the person involved in real-time or to get sensitive data to be subsequently analysed for medical diagnosis. In this preliminary work, the system architecture is presented. As a possible application scenario, the health-monitoring of offshore operators is considered. Related initial simulations and experiments are carried out to validate the efficiency of the proposed technology. In particular, the system reduces risk, taking into consideration assessments based on the individual and on overall potentially-harmful situations.",
"title": ""
},
{
"docid": "d3c91b43a4ac5b50f2faa02811616e72",
"text": "BACKGROUND\nSleep disturbance is common among disaster survivors with posttraumatic stress symptoms but is rarely addressed as a primary therapeutic target. Sleep Dynamic Therapy (SDT), an integrated program of primarily evidence-based, nonpharmacologic sleep medicine therapies coupled with standard clinical sleep medicine instructions, was administered to a large group of fire evacuees to treat posttraumatic insomnia and nightmares and determine effects on posttraumatic stress severity.\n\n\nMETHOD\nThe trial was an uncontrolled, prospective pilot study of SDT for 66 adult men and women, 10 months after exposure to the Cerro Grande Fire. SDT was provided to the entire group in 6, weekly, 2-hour sessions. Primary and secondary outcomes included validated scales for insomnia, nightmares, posttraumatic stress, anxiety, and depression, assessed at 2 pretreatment baselines on average 8 weeks apart, weekly during treatment, posttreatment, and 12-week follow-up.\n\n\nRESULTS\nSixty-nine participants completed both pretreatment assessment, demonstrating small improvement in symptoms prior to starting SDT. Treatment and posttreatment assessments were completed by 66 participants, and 12-week follow-up was completed by 59 participants. From immediate pretreatment (second baseline) to posttreatment, all primary and secondary scales decreased significantly (all p values < .0001) with consistent medium-sized effects (Cohen's d = 0.29 to 1.09), and improvements were maintained at follow-up. Posttraumatic stress disorder subscales demonstrated similar changes: intrusion (d = 0.56), avoidance (d = 0.45), and arousal (d = 0.69). Fifty-three patients improved, 10 worsened, and 3 reported no change in posttraumatic stress.\n\n\nCONCLUSION\nIn an uncontrolled pilot study, chronic sleep symptoms in fire disaster evacuees were treated with SDT, which was associated with substantive and stable improvements in sleep disturbance, posttraumatic stress, anxiety, and depression 12 weeks after initiating treatment.",
"title": ""
},
{
"docid": "b4c8a34f9bda4b232d73ee7eafb30f88",
"text": "Bargaining with reading habit is no need. Reading is not kind of something sold that you can take or not. It is a thing that will change your life to life better. It is the thing that will give you many things around the world and this universe, in the real world and here after. As what will be given by this artificial intelligence a new synthesis, how can you bargain with the thing that has many benefits for you?",
"title": ""
},
{
"docid": "49b6641e3cee2e984fc2f4b740d1959a",
"text": "Recently, ERP implementation has been transformed from the business side into educational side. The failure implementation of ERP in higher education is referred to many factors. In addition to the increment of the level of risk that invoked on the study of ERP, the study of critical success factors has great effects on making system fail or success in implementation. Despite of efforts that had been done to enhance the implementation of ERP; it still suffers from failure, especially in higher education. The present study investigates that ERP system will succeed if we employ a set of factors such as clear vision and objectives, top management support and commitment, clear business process, information flow and organizational structure, budget size and cost, integrated department and solve the problem of human resources management, project management, training and education, careful change management and effective communication and connection in ERP higher education system. According to the preliminary results of this study, these factors help in successful implementation of ERP in higher education. Keywords— ERP, CSFs, Higher education, Successful ERP implementation, institutions.",
"title": ""
},
{
"docid": "834ba5081965683538fe1e931e9e4af0",
"text": "An Exploratory Study on Issues and Challenges of Agile Software Develop m nt with Scrum by Juyun Joey Cho, Doctor of Philosophy Utah State University, 2010 Major Professor: Dr. David H. Olsen Department: Management Information Systems The purpose of this dissertation was to explore critical issues and challenges t hat might arise in agile software development processes with Scrum. It a lso sought to provide management guidelines to help organizations avoid and overcome barriers in adopting the Scrum method as a future software development method. A qualitative researc h method design was used to capture the knowledge of practitioners and scrutinize the Scrum software development process in its natural settings. An in-depth case s tudy was conducted in two organizations where the Scrum method was fully integrated in every aspect of two organizations’ software development processes. One organizat ion provides large-scale and mission-critical applications and the other provides smalland mediumscale applications. Differences between two organizations provided useful c ontrasts for the data analysis. Data were collected through an email survey, observations, documents, and semistructured face-to-face interviews. The email survey was used to re fine interview",
"title": ""
},
{
"docid": "0e8439fb8942b899a6e931c2b82a411d",
"text": "This paper describes and evaluates the Con dence-based Dual Reinforcement QRouting algorithm (CDRQ-Routing) for adaptive packet routing in communication networks. CDRQ-Routing is based on an application of the Q-learning framework to network routing, as rst proposed by Littman and Boyan (1993). The main contribution of CDRQ-routing is an increased quantity and an improved quality of exploration. Compared to Q-Routing, the state-of-the-art adaptive Bellman-Ford Routing algorithm, and the non-adaptive shortest path method, CDRQ-Routing learns superior policies signi cantly faster. Moreover, the overhead due to exploration is shown to be insigni cant compared to the improvements achieved, which makes CDRQ-Routing a practical method for real communication networks.",
"title": ""
},
{
"docid": "41a15d3dcca1ff835b5d983a8bb5343f",
"text": "and is made available as an electronic reprint (preprint) with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. ABSTRACT We describe the architecture and design of a through-the-wall radar. The radar is applied for the detection and localization of people hidden behind obstacles. It implements a new adaptive processing technique for people detection, which is introduced in this article. This processing technique is based on exponential averaging with adopted weighting coefficients. Through-the-wall detection and localization of a moving person is demonstrated by a measurement example. The localization relies on the time-of-flight approach.",
"title": ""
},
{
"docid": "bc0fa704763199526c4f28e40fa11820",
"text": "GPFS is a distributed file system run on some of the largest supercomputers and clusters. Through it's deployment, the authors have been able to gain a number of key insights into the methodology of developing a distributed file system which can reliably scale and maintain POSIX semantics. Achieving the necessary throughput requires parallel access for reading, writing and updating metadata. It is a process that is accomplished mostly through distributed locking.",
"title": ""
},
{
"docid": "3159879f34a093d38e82dba61b92d74e",
"text": "The performance of many hard combinatorial problem solvers depends strongly on their parameter settings, and since manual parameter tuning is both tedious and suboptimal the AI community has recently developed several algorithm configuration (AC) methods to automatically address this problem. While all existing AC methods start the configuration process of an algorithm A from scratch for each new type of benchmark instances, here we propose to exploit information about A’s performance on previous benchmarks in order to warmstart its configuration on new types of benchmarks. We introduce two complementary ways in which we can exploit this information to warmstart AC methods based on a predictive model. Experiments for optimizing a flexible modern SAT solver on twelve different instance sets show that our methods often yield substantial speedups over existing AC methods (up to 165-fold) and can also find substantially better configurations given the same compute budget.",
"title": ""
},
{
"docid": "8522a5a6727a941611dbbebbe4bb7c11",
"text": "Microblogging encompasses both user-generated content and behavior. When modeling microblogging data, one has to consider personal and background topics, as well as how these topics generate the observed content and behavior. In this article, we propose the Generalized Behavior-Topic (GBT) model for simultaneously modeling background topics and users’ topical interest in microblogging data. GBT considers multiple topical communities (or realms) with different background topical interests while learning the personal topics of each user and the user’s dependence on realms to generate both content and behavior. This differentiates GBT from other previous works that consider either one realm only or content data only. By associating user behavior with the latent background and personal topics, GBT helps to model user behavior by the two types of topics. GBT also distinguishes itself from other earlier works by modeling multiple types of behavior together. Our experiments on two Twitter datasets show that GBT can effectively mine the representative topics for each realm. We also demonstrate that GBT significantly outperforms other state-of-the-art models in modeling content topics and user profiling.",
"title": ""
},
{
"docid": "65d0a8f4838e84ebbababbfaab3ac6a1",
"text": "The robust alignment of images and scenes seen from widely different viewpoints is an important challenge for camera and scene reconstruction. This paper introduces a novel class of viewpoint independent local features for robust registration and novel algorithms to use the rich information of the new features for 3D scene alignment and large scale scene reconstruction. The key point of our approach consists of leveraging local shape information for the extraction of an invariant feature descriptor. The advantages of the novel viewpoint invariant patch (VIP) are: that the novel features are invariant to 3D camera motion and that a single VIP correspondence uniquely defines the 3D similarity transformation between two scenes. In the paper we demonstrate how to use the properties of the VIPs in an efficient matching scheme for 3D scene alignment. The algorithm is based on a hierarchical matching method which tests the components of the similarity transformation sequentially to allow efficient matching and 3D scene alignment. We evaluate the novel features on real data with known ground truth information and show that the features can be used to reconstruct large scale urban scenes.",
"title": ""
},
{
"docid": "640824047e480ef5582d140b6595dbd9",
"text": "A wideband transition from coplanar waveguide (CPW) to substrate integrated waveguide (SIW) is proposed and presented in the 50 GHz frequency range. Electrically thick alumina was used in this case, representative for other high-permittivity substrates such as semiconductors. Simulations predict less than -15 dB return loss within a 35 % bandwidth. CPW probe measurements were carried out and 40 % bandwidth were achieved at -0.5 dB insertion loss for a single transition. Modified SIW via configurations being suitable for simplified fabrication on electrically thick substrates in the upper millimeter-wave spectrum are discussed in the second part.",
"title": ""
},
{
"docid": "2e9d5a0f975a42e79a5c7625fc246502",
"text": "e-Tourism is a tourist recommendation and planning application to assist users on the organization of a leisure and tourist agenda. First, a recommender system offers the user a list of the city places that are likely of interest to the user. This list takes into account the user demographic classification, the user likes in former trips and the preferences for the current visit. Second, a planning module schedules the list of recommended places according to their temporal characteristics as well as the user restrictions; that is the planning system determines how and when to perform the recommended activities. This is a very relevant feature that most recommender systems lack as it allows the user to have the list of recommended activities organized as an agenda, i.e. to have a totally executable plan.",
"title": ""
},
{
"docid": "338f3693a38930c89410bcae27cf4507",
"text": "ABSTRACT The purpose of this study was to understand the perceptions of mothers of children with autism spectrum disorder (ASD) who participated in 10 one-hour coaching sessions. Coaching occurred between an occupational therapist and mother and consisted of information sharing, action, and reflection. Researchers asked 10 mothers six open-ended questions with follow-up probes related to their experiences with coaching. Themes were identified, labeled, and categorized. Themes emerged related to relationships, analysis, reflection, mindfulness, and self-efficacy. Findings indicate that parents perceive the therapist-parent relationship, along with analysis and reflection, as core features that facilitate increased mindfulness and self-efficacy. The findings suggest that how an intervention is provided can lead to positive outcomes, including increased mindfulness and self-efficacy.",
"title": ""
},
{
"docid": "db5ad307f39ebc6d20c084333076cc49",
"text": "We introduce Rosita, a method to produce multilingual contextual word representations by training a single language model on text from multiple languages. Our method combines the advantages of contextual word representations with those of multilingual representation learning. We produce language models from dissimilar language pairs (English/Arabic and English/Chinese) and use them in dependency parsing, semantic role labeling, and named entity recognition, with comparisons to monolingual and noncontextual variants. Our results provide further evidence for the benefits of polyglot learning, in which representations are shared across multiple languages.",
"title": ""
},
{
"docid": "7c8412c5a7c71fe76105983d3bf7e16d",
"text": "A novel wideband dual-cavity-backed circularly polarized (CP) crossed dipole antenna is presented in this letter. The exciter of the antenna comprises two classical orthogonal straight dipoles for a simple design. Dual-cavity structure is employed to achieve unidirectional radiation and improve the broadside gain. In particular, the rim edges of the cavity act as secondary radiators, which contribute to significantly enhance the overall CP performance of the antenna. The final design with an overall size of 0.57λ<sub>o</sub> × 0.57λ<sub>o</sub> × 0.24λ<sub>o</sub> where λ<sub>o</sub> is the free-space wavelength at the lowest CP operating frequency of 2.0 GHzb yields a measured –10 dB impedance bandwidth (BW) of 79.4% and 3 dB axial-ratio BW of 66.7%. The proposed antenna exhibits right-handed circular polarization with a maximum broadside gain of about 9.7 dBic.",
"title": ""
},
{
"docid": "b27b164a7ff43b8f360167e5f886f18a",
"text": "Segmentation and grouping of image elements is required to proceed with image recognition. Due to the fact that the images are two dimensional (2D) representations of the real three dimensional (3D) scenes, the information of the third dimension, like geometrical relations between the objects that are important for reasonable segmentation and grouping, are lost in 2D image representations. Computer stereo vision implies on understanding information stored in 3D-scene. Techniques for stereo computation are observed in this paper. The methods for solving the correspondence problem in stereo image matching are presented. The process of 3D-scene reconstruction from stereo image pairs and extraction of parameters important for image understanding are described. Occluded and surrounding areas in stereo image pairs are stressed out as important for image understanding.",
"title": ""
},
{
"docid": "5d519c0107838f73fabd48426bc895d5",
"text": "We propose new methods to speed up convergence of the Alternating Direction Method of Multipliers (ADMM), a common optimization tool in the context of large scale and distributed learning. The proposed method accelerates the speed of convergence by automatically deciding the constraint penalty needed for parameter consensus in each iteration. In addition, we also propose an extension of the method that adaptively determines the maximum number of iterations to update the penalty. We show that this approach effectively leads to an adaptive, dynamic network topology underlying the distributed optimization. The utility of the new penalty update schemes is demonstrated on both synthetic and real data, including an instance of the probabilistic matrix factorization task known as the structurefrom-motion problem.",
"title": ""
},
{
"docid": "4d9312d22dcc37933d0108fbfacd1c38",
"text": "This study focuses on the use of different types of shear reinforcement in the reinforced concrete beams. Four different types of shear reinforcement are investigated; traditional stirrups, welded swimmer bars, bolted swimmer bars, and u-link bolted swimmer bars. Beam shear strength as well as beam deflection are the main two factors considered in this study. Shear failure in reinforced concrete beams is one of the most undesirable modes of failure due to its rapid progression. This sudden type of failure made it necessary to explore more effective ways to design these beams for shear. The reinforced concrete beams show different behavior at the failure stage in shear compare to the bending, which is considered to be unsafe mode of failure. The diagonal cracks that develop due to excess shear forces are considerably wider than the flexural cracks. The cost and safety of shear reinforcement in reinforced concrete beams led to the study of other alternatives. Swimmer bar system is a new type of shear reinforcement. It is a small inclined bars, with its both ends bent horizontally for a short distance and welded or bolted to both top and bottom flexural steel reinforcement. Regardless of the number of swimmer bars used in each inclined plane, the swimmer bars form plane-crack interceptor system instead of bar-crack interceptor system when stirrups are used. Several reinforced concrete beams were carefully prepared and tested in the lab. The results of these tests will be presented and discussed. The deflection of each beam is also measured at incrementally increased applied load.",
"title": ""
}
] |
scidocsrr
|
065e418647d7343acda7cb1216986f79
|
The impact of emotionality and self-disclosure on online dating versus traditional dating
|
[
{
"docid": "817471946fbe8b23d195c4fea8967549",
"text": "The purpose of this research project was to investigate possible sex, ethnicity, and age group differences involving the information placed in Internet dating ads, and to contrast the findings with predictions from evolutionary theory (e.g., women being more selective than men) and with findings from previous studies involving heterosexual dating ads placed in newspapers and magazines. Of particular interest were the types and number of characteristics sought in a dating partner. Results generally supported predictions from evolutionary theory. Women listed more desired characteristics for a partner than did men. Women focused more on non-physical attributes such as ambition and character than did men, and men focused more on youth and attractiveness than did women. There was; however, considerable similarity in terms of the five most desired attributes listed by both men and women. Women listed the following desired characteristics in men most often: humor, honesty, caring, openness, and personality. Men desired the following: affection, humor, honesty, openness, and attractive women. These desired characteristics were also significantly different from those found in recent studies which looked at dating ads placed in newspapers.",
"title": ""
}
] |
[
{
"docid": "5f41bc81a483dd4deb5e70272d32ac77",
"text": "In this paper, we present the design and evaluation of a novel soft cable-driven exosuit that can apply forces to the body to assist walking. Unlike traditional exoskeletons which contain rigid framing elements, the soft exosuit is worn like clothing, yet can generate moments at the ankle and hip with magnitudes of 18% and 30% of those naturally generated by the body during walking, respectively. Our design uses geared motors to pull on Bowden cables connected to the suit near the ankle. The suit has the advantages over a traditional exoskeleton in that the wearer's joints are unconstrained by external rigid structures, and the worn part of the suit is extremely light, which minimizes the suit's unintentional interference with the body's natural biomechanics. However, a soft suit presents challenges related to actuation force transfer and control, since the body is compliant and cannot support large pressures comfortably. We discuss the design of the suit and actuation system, including principles by which soft suits can transfer force to the body effectively and the biological inspiration for the design. For a soft exosuit, an important design parameter is the combined effective stiffness of the suit and its interface to the wearer. We characterize the exosuit's effective stiffness, and present preliminary results from it generating assistive torques to a subject during walking. We envision such an exosuit having broad applicability for assisting healthy individuals as well as those with muscle weakness.",
"title": ""
},
{
"docid": "6554f662f667b8b53ad7b75abfa6f36f",
"text": "present paper introduces an innovative approach to automatically grade the disease on plant leaves. The system effectively inculcates Information and Communication Technology (ICT) in agriculture and hence contributes to Precision Agriculture. Presently, plant pathologists mainly rely on naked eye prediction and a disease scoring scale to grade the disease. This manual grading is not only time consuming but also not feasible. Hence the current paper proposes an image processing based approach to automatically grade the disease spread on plant leaves by employing Fuzzy Logic. The results are proved to be accurate and satisfactory in contrast with manual grading. Keywordscolor image segmentation, disease spot extraction, percent-infection, fuzzy logic, disease grade. INTRODUCTION The sole area that serves the food needs of the entire human race is the Agriculture sector. It has played a key role in the development of human civilization. Plants exist everywhere we live, as well as places without us. Plant disease is one of the crucial causes that reduces quantity and degrades quality of the agricultural products. Plant Pathology is the scientific study of plant diseases caused by pathogens (infectious diseases) and environmental conditions (physiological factors). It involves the study of pathogen identification, disease etiology, disease cycles, economic impact, plant disease epidemiology, plant disease resistance, pathosystem genetics and management of plant diseases. Disease is impairment to the normal state of the plant that modifies or interrupts its vital functions such as photosynthesis, transpiration, pollination, fertilization, germination etc. Plant diseases have turned into a nightmare as it can cause significant reduction in both quality and quantity of agricultural products [2]. Information and Communication Technology (ICT) application is going to be implemented as a solution in improving the status of the agriculture sector [3]. Due to the manifestation and developments in the fields of sensor networks, robotics, GPS technology, communication systems etc, precision agriculture started emerging [10]. The objectives of precision agriculture are profit maximization, agricultural input rationalization and environmental damage reduction by adjusting the agricultural practices to the site demands. In the area of disease management, grade of the disease is determined to provide an accurate and precision treatment advisory. EXISTING SYSTEM: MANUAL GRADING Presently the plant pathologists mainly rely on the naked eye prediction and a disease scoring scale to grade the disease on leaves. There are some problems associated with this manual grading. Diseases are inevitable in plants. When a plant gets affected by the disease, a treatment advisory is required to cure the Arun Kumar R et al, Int. J. Comp. Tech. Appl., Vol 2 (5), 1709-1716 IJCTA | SEPT-OCT 2011 Available [email protected] 1709 ISSN:2229-6093",
"title": ""
},
{
"docid": "66f17513486e4d25c9be36e71aecbbf8",
"text": "Fuzz testing is an active testing technique which consists in automatically generating and sending malicious inputs to an application in order to hopefully trigger a vulnerability. Fuzzing entails such questions as: Where to fuzz? Which parameter to fuzz? What kind of anomaly to introduce? Where to observe its effects? etc. Different test contexts depending on the degree of knowledge assumed about the target: recompiling the application (white-box), interacting only at the target interface (blackbox), dynamically instrumenting a binary (grey-box). In this paper, we focus on black-box test contest, and specifically address the questions: How to obtain a notion of coverage on unstructured inputs? How to capture human testers intuitions and use it for the fuzzing? How to drive the search in various directions? We specifically address the problems of detecting Memory Corruption in PDF interpreters and Cross Site Scripting (XSS) in web applications. We detail our approaches which use genetic algorithm, inference and anti-random testing. We empirically evaluate our implementations of XSS fuzzer KameleonFuzz and of PDF fuzzer ShiftMonkey.",
"title": ""
},
{
"docid": "fd4bd9edcaff84867b6e667401aa3124",
"text": "We give suggestions for the presentation of research results from frequentist, information-theoretic, and Bayesian analysis paradigms, followed by several general suggestions. The information-theoretic and Bayesian methods offer alternative approaches to data analysis and inference compared to traditionally used methods. Guidance is lacking on the presentation of results under these alternative procedures and on nontesting aspects of classical frequentist methods of statistical analysis. Null hypothesis testing has come under intense criticism. We recommend less reporting of the results of statistical tests of null hypotheses in cases where the null is surely false anyway, or where the null hypothesis is of little interest to science or management. JOURNAL OF WILDLIFE MANAGEMENT 65(3):373-378",
"title": ""
},
{
"docid": "9b2b04acbbf5c847885c37c448fb99c8",
"text": "We address the problem of substring searchable encryption. A single user produces a big stream of data and later on wants to learn the positions in the string that some patterns occur. Although current techniques exploit auxiliary data structures to achieve efficient substring search on the server side, the cost at the user side may be prohibitive. We revisit the work of substring searchable encryption in order to reduce the storage cost of auxiliary data structures. Our solution entails a suffix array based index design, which allows optimal storage cost $O(n)$ with small hidden factor at the size of the string n. Moreover, we implemented our scheme and the state of the art protocol \\citeChase to demonstrate the performance advantage of our solution with precise benchmark results.",
"title": ""
},
{
"docid": "8a56b4d4f69466aee0d5eff0c09cd514",
"text": "This paper explores how a robot’s physical presence affects human judgments of the robot as a social partner. For this experiment, participants collaborated on simple book-moving tasks with a humanoid robot that was either physically present or displayed via a live video feed. Multiple tasks individually examined the following aspects of social interaction: greetings, cooperation, trust, and personal space. Participants readily greeted and cooperated with the robot whether present physically or in live video display. However, participants were more likely both to fulfill an unusual request and to afford greater personal space to the robot when it was physically present, than when it was shown on live video. The same was true when the live video displayed robot’s gestures were augmented with disambiguating 3-D information. Questionnaire data support these behavioral findings and also show that participants had an overall more positive interaction with the physically present",
"title": ""
},
{
"docid": "82d4b2aa3e3d3ec10425c6250268861c",
"text": "Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch learning setting, which requires the entire training data to be made available prior to the learning task. This is not scalable for many real-world scenarios where new data arrives sequentially in a stream form. We aim to address an open challenge of “Online Deep Learning” (ODL) for learning DNNs on the fly in an online setting. Unlike traditional online learning that often optimizes some convex objective function with respect to a shallow model (e.g., a linear/kernel-based hypothesis), ODL is significantly more challenging since the optimization of the DNN objective function is non-convex, and regular backpropagation does not work well in practice, especially for online learning settings. In this paper, we present a new online deep learning framework that attempts to tackle the challenges by learning DNN models of adaptive depth from a sequence of training data in an online learning setting. In particular, we propose a novel Hedge Backpropagation (HBP) method for online updating the parameters of DNN effectively, and validate the efficacy of our method on large-scale data sets, including both stationary and concept drifting scenarios.",
"title": ""
},
{
"docid": "0d0eb6ed5dff220bc46ffbf87f90ee59",
"text": "Objectives. The aim of this review was to investigate whether alternating hot–cold water treatment is a legitimate training tool for enhancing athlete recovery. A number of mechanisms are discussed to justify its merits and future research directions are reported. Alternating hot–cold water treatment has been used in the clinical setting to assist in acute sporting injuries and rehabilitation purposes. However, there is overwhelming anecdotal evidence for it’s inclusion as a method for post exercise recovery. Many coaches, athletes and trainers are using alternating hot–cold water treatment as a means for post exercise recovery. Design. A literature search was performed using SportDiscus, Medline and Web of Science using the key words recovery, muscle fatigue, cryotherapy, thermotherapy, hydrotherapy, contrast water immersion and training. Results. The physiologic effects of hot–cold water contrast baths for injury treatment have been well documented, but its physiological rationale for enhancing recovery is less known. Most experimental evidence suggests that hot–cold water immersion helps to reduce injury in the acute stages of injury, through vasodilation and vasoconstriction thereby stimulating blood flow thus reducing swelling. This shunting action of the blood caused by vasodilation and vasoconstriction may be one of the mechanisms to removing metabolites, repairing the exercised muscle and slowing the metabolic process down. Conclusion. To date there are very few studies that have focussed on the effectiveness of hot–cold water immersion for post exercise treatment. More research is needed before conclusions can be drawn on whether alternating hot–cold water immersion improves recuperation and influences the physiological changes that characterises post exercise recovery. q 2003 Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a05a953097e5081670f26e85c4b8e397",
"text": "In European science and technology policy, various styles have been developed and institutionalised to govern the ethical challenges of science and technology innovations. In this paper, we give an account of the most dominant styles of the past 30 years, particularly in Europe, seeking to show their specific merits and problems. We focus on three styles of governance: a technocratic style, an applied ethics style, and a public participation style. We discuss their merits and deficits, and use this analysis to assess the potential of the recently established governance approach of 'Responsible Research and Innovation' (RRI). Based on this analysis, we reflect on the current shaping of RRI in terms of 'doing governance'.",
"title": ""
},
{
"docid": "6f0283efa932663c83cc2c63d19fd6cf",
"text": "Most research that explores the emotional state of users of spoken dialog systems does not fully utilize the contextual nature that the dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns using a corpus of 5,690 dialogs collected with the “How May I Help You” spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.",
"title": ""
},
{
"docid": "17ec5256082713e85c819bb0a0dd3453",
"text": "Scholarly documents contain multiple figures representing experimental findings. These figures are generated from data which is not reported anywhere else in the paper. We propose a modular architecture for analyzing such figures. Our architecture consists of the following modules: 1. An extractor for figures and associated metadata (figure captions and mentions) from PDF documents; 2. A Search engine on the extracted figures and metadata; 3. An image processing module for automated data extraction from the figures and 4. A natural language processing module to understand the semantics of the figure. We discuss the challenges in each step, report an extractor algorithm to extract vector graphics from scholarly documents and a classification algorithm for figures. Our extractor algorithm improves the state of the art by more than 10% and the classification process is very scalable, yet achieves 85\\% accuracy. We also describe a semi-automatic system for data extraction from figures which is integrated with our search engine to improve user experience.",
"title": ""
},
{
"docid": "99fa507d3b36e1a42f0dbda5420e329a",
"text": "Reference Points and Effort Provision A key open question for theories of reference-dependent preferences is what determines the reference point. One candidate is expectations: what people expect could affect how they feel about what actually occurs. In a real-effort experiment, we manipulate the rational expectations of subjects and check whether this manipulation influences their effort provision. We find that effort provision is significantly different between treatments in the way predicted by models of expectation-based reference-dependent preferences: if expectations are high, subjects work longer and earn more money than if expectations are low. JEL Classification: C91, D01, D84, J22",
"title": ""
},
{
"docid": "d593b96d11dd8a3516816d85fce5c7a0",
"text": "This paper presents an approach for the integration of Virtual Reality (VR) and Computer-Aided Design (CAD). Our general goal is to develop a VR–CAD framework making possible intuitive and direct 3D edition on CAD objects within Virtual Environments (VE). Such a framework can be applied to collaborative part design activities and to immersive project reviews. The cornerstone of our approach is a model that manages implicit editing of CAD objects. This model uses a naming technique of B-Rep components and a set of logical rules to provide straight access to the operators of Construction History Graphs (CHG). Another set of logical rules and the replay capacities of CHG make it possible to modify in real-time the parameters of these operators according to the user's 3D interactions. A demonstrator of our model has been developed on the OpenCASCADE geometric kernel, but we explain how it can be applied to more standard CAD systems such as CATIA. We combined our VR–CAD framework with multimodal immersive interaction (using 6 DoF tracking, speech and gesture recognition systems) to gain direct and intuitive deformation of the objects' shapes within a VE, thus avoiding explicit interactions with the CHG within a classical WIMP interface. In addition, we present several haptic paradigms specially conceptualized and evaluated to provide an accurate perception of B-Rep components and to help the user during his/her 3D interactions. Finally, we conclude on some issues for future researches in the field of VR–CAD integration.",
"title": ""
},
{
"docid": "d16a787399db6309ab4563f4265e91b9",
"text": "The real-time information on news sites, blogs and social networking sites changes dynamically and spreads rapidly through the Web. Developing methods for handling such information at a massive scale requires that we think about how information content varies over time, how it is transmitted, and how it mutates as it spreads.\n We describe the News Information Flow Tracking, Yay! (NIFTY) system for large scale real-time tracking of \"memes\" - short textual phrases that travel and mutate through the Web. NIFTY is based on a novel highly-scalable incremental meme-clustering algorithm that efficiently extracts and identifies mutational variants of a single meme. NIFTY runs orders of magnitude faster than our previous Memetracker system, while also maintaining better consistency and quality of extracted memes.\n We demonstrate the effectiveness of our approach by processing a 20 terabyte dataset of 6.1 billion blog posts and news articles that we have been continuously collecting for the last four years. NIFTY extracted 2.9 billion unique textual phrases and identified more than 9 million memes. Our meme-tracking algorithm was able to process the entire dataset in less than five days using a single machine. Furthermore, we also provide a live deployment of the NIFTY system that allows users to explore the dynamics of online news in near real-time.",
"title": ""
},
{
"docid": "9bf951269881138b9fae1d345be5b2e8",
"text": "A biofuel from any biodegradable formation process such as a food waste bio-digester plant is a mixture of several gases such as methane (CH4), carbon dioxide (CO2), hydrogen sulfide (H2S), ammonia (NH3) and impurities like water and dust particles. The results are reported of a parametric study of the process of separation of methane, which is the most important gas in the mixture and usable as a biofuel, from particles and H2S. A cyclone, which is a conventional, economic and simple device for gas-solid separation, is considered based on the modification of three Texas A&M cyclone designs (1D2D, 2D2D and 1D3D) by the inclusion of an air inlet tube. A parametric sizing is performed of the cyclone for biogas purification, accounting for the separation of hydrogen sulfide (H2S) and dust particles from the biofuel. The stochiometric oxidation of H2S to form elemental sulphur is considered a useful cyclone design criterion. The proposed design includes geometric parameters and several criteria for quantifying the performance of cyclone separators such as the Lapple Model for minimum particle diameter collected, collection efficiency and pressure drop. For biogas volumetric flow rates between 0 and 1 m/s and inlet flow velocities of 12 m/s, 15 m/s and 18 m/s for the 1D2D, 2D2D and 1D3D cyclones, respectively, it is observed that the 2D2D configuration is most economic in terms of sizing (total height and diameter of cyclone). The 1D2D configuration experiences the lowest pressure drop. A design algorithm coupled with a user-friendly graphics interface is developed on the MATLAB platform, providing a tool for sizing and designing suitable cyclones.",
"title": ""
},
{
"docid": "3eb50289c3b28d2ce88052199d40bf8d",
"text": "Transportation Problem is an important aspect which has been widely studied in Operations Research domain. It has been studied to simulate different real life problems. In particular, application of this Problem in NPHard Problems has a remarkable significance. In this Paper, we present a comparative study of Transportation Problem through Probabilistic and Fuzzy Uncertainties. Fuzzy Logic is a computational paradigm that generalizes classical two-valued logic for reasoning under uncertainty. In order to achieve this, the notation of membership in a set needs to become a matter of degree. By doing this we accomplish two things viz., (i) ease of describing human knowledge involving vague concepts and (ii) enhanced ability to develop cost-effective solution to real-world problem. The multi-valued nature of Fuzzy Sets allows handling uncertain and vague information. It is a model-less approach and a clever disguise of Probability Theory. We give comparative simulation results of both approaches and discuss the Computational Complexity. To the best of our knowledge, this is the first work on comparative study of Transportation Problem using Probabilistic and Fuzzy Uncertainties.",
"title": ""
},
{
"docid": "95fe3badecc7fa92af6b6aa49b6ff3b2",
"text": "As low-resolution position sensors, a high placement accuracy of Hall-effect sensors is hard to achieve. Accordingly, a commutation angle error is generated. The commutation angle error will inevitably increase the loss of the low inductance motor and even cause serious consequence, which is the abnormal conduction of a freewheeling diode in the unexcited phase especially at high speed. In this paper, the influence of the commutation angle error on the power loss for the high-speed brushless dc motor with low inductance and nonideal back electromotive force in a magnetically suspended control moment gyro (MSCMG) is analyzed in detail. In order to achieve low steady-state loss of an MSCMG for space application, a straightforward method of self-compensation of commutation angle based on dc-link current is proposed. Both simulation and experimental results confirm the feasibility and effectiveness of the proposed method.",
"title": ""
},
{
"docid": "17ab4797666afed3a37a8761fcbb0d1e",
"text": "In this paper, we propose a CPW fed triple band notch UWB antenna array with EBG structure. The major consideration in the antenna array design is the mutual coupling effect that exists within the elements. The use of Electromagnetic Band Gap structures in the antenna arrays can limit the coupling by suppresssing the surface waves. The triple band notch antenna consists of three slots which act as notch resonators for a specific band of frequencies, the C shape slot at the main radiator (WiMax-3.5GHz), a pair of CSRR structures at the ground plane(WLAN-5.8GHz) and an inverted U shaped slot in the center of the patch (Satellite Service bands-8.2GHz). The main objective is to reduce mutual coupling which in turn improves the peak realized gain, directivity.",
"title": ""
},
{
"docid": "bb01b5e24d7472ab52079dcb8a65358d",
"text": "There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms.",
"title": ""
},
{
"docid": "ff41327bad272a6d80d4daba25b6472f",
"text": "The dense very deep submicron (VDSM) system on chips (SoC) face a serious limitation in performance due to reverse scaling of global interconnects. Interconnection techniques which decrease delay, delay variation and ensure signal integrity, play an important role in the growth of the semiconductor industry into future generations. Current-mode low-swing interconnection techniques provide an attractive alternative to conventional full-swing voltage mode signaling in terms of delay, power and noise immunity. In this paper, we present a new current-mode low-swing interconnection technique which reduces the delay and delay variations in global interconnects. Extensive simulations for performance of our circuit under crosstalk, supply voltage, process and temperature variations were performed. The results indicate significant savings in power, reduction in delay and increase in noise immunity compared to other techniques.",
"title": ""
}
] |
scidocsrr
|
07efc7a26490f18decfef895801feffa
|
Robust Interpolation of Correspondences for Large Displacement Optical Flow
|
[
{
"docid": "226d6904cc052f300b32b29f4f800574",
"text": "Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn both an accurate and computationally efficient edge detector. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. The result is an approach that obtains realtime performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and NYU Depth dataset. Finally, we show the potential of our approach as a general purpose edge detector by showing our learned edge models generalize well across datasets.",
"title": ""
}
] |
[
{
"docid": "7431f48f8792d74e43f7df13795d6338",
"text": "Automatic generation of paraphrases from a given sentence is an important yet challenging task in natural language processing (NLP). In this paper, we present a deep reinforcement learning approach to paraphrase generation. Specifically, we propose a new framework for the task, which consists of a generator and an evaluator, both of which are learned from data. The generator, built as a sequenceto-sequence learning model, can produce paraphrases given a sentence. The evaluator, constructed as a deep matching model, can judge whether two sentences are paraphrases of each other. The generator is first trained by deep learning and then further fine-tuned by reinforcement learning in which the reward is given by the evaluator. For the learning of the evaluator, we propose two methods based on supervised learning and inverse reinforcement learning respectively, depending on the type of available training data. Experimental results on two datasets demonstrate the proposed models (the generators) can produce more accurate paraphrases and outperform the stateof-the-art methods in paraphrase generation in both automatic evaluation and human evaluation.",
"title": ""
},
{
"docid": "6bc0f30f83ead5672898baeadb536c0c",
"text": "Research on affective design has expanded considerably in recent years. The focus has primarily been on consumer products, such as mobile phones. Here we discuss a model for affective design of vehicles, with special emphasis on cars. It was developed in CATER - a research program intended to support mass customization of vehicles. Cars are different from consumer products because they represent major investments, and customers will typically take about a year to decide what to buy. During this time the customer goes through several motivational stages from belief to attitude to intention and behavior, which will affect his/her priorities. The model drives development of the citarasa engineering system.",
"title": ""
},
{
"docid": "ecda448df7b28ea5e453c179206e91a4",
"text": "The cloud infrastructure provider (CIP) in a cloud computing platform must provide security and isolation guarantees to a service provider (SP), who builds the service(s) for such a platform. We identify last level cache (LLC) sharing as one of the impediments to finer grain isolation required by a service, and advocate two resource management approaches to provide performance and security isolation in the shared cloud infrastructure - cache hierarchy aware core assignment and page coloring based cache partitioning. Experimental results demonstrate that these approaches are effective in isolating cache interference impacts a VM may have on another VM. We also incorporate these approaches in the resource management (RM) framework of our example cloud infrastructure, which enables the deployment of VMs with isolation enhanced SLAs.",
"title": ""
},
{
"docid": "d8d9bc717157d03c884962999c514033",
"text": "Topic models have been widely used to identify topics in text corpora. It is also known that purely unsupervised models often result in topics that are not comprehensible in applications. In recent years, a number of knowledge-based models have been proposed, which allow the user to input prior knowledge of the domain to produce more coherent and meaningful topics. In this paper, we go one step further to study how the prior knowledge from other domains can be exploited to help topic modeling in the new domain. This problem setting is important from both the application and the learning perspectives because knowledge is inherently accumulative. We human beings gain knowledge gradually and use the old knowledge to help solve new problems. To achieve this objective, existing models have some major difficulties. In this paper, we propose a novel knowledge-based model, called MDK-LDA, which is capable of using prior knowledge from multiple domains. Our evaluation results will demonstrate its effectiveness.",
"title": ""
},
{
"docid": "eacf6862ee299f933d5ae73612e40223",
"text": "Purpose of this paper is to approach and solve Sudoku as a Constraint Satisfaction Problem and to compare various heuristics and their effects on solving the problem. Heuristics compared include Backtracking with Forward Checking and Minimum Remaining Value, Arc Consistency, and Arc Consistency Pre-Processing.",
"title": ""
},
{
"docid": "4c67d3686008e377220314323a35eecb",
"text": "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.",
"title": ""
},
{
"docid": "5ec64c4a423ccd32a5c1ceb918e3e003",
"text": "The leading edge (approximately 1 microgram) of lamellipodia in Xenopus laevis keratocytes and fibroblasts was shown to have an extensively branched organization of actin filaments, which we term the dendritic brush. Pointed ends of individual filaments were located at Y-junctions, where the Arp2/3 complex was also localized, suggesting a role of the Arp2/3 complex in branch formation. Differential depolymerization experiments suggested that the Arp2/3 complex also provided protection of pointed ends from depolymerization. Actin depolymerizing factor (ADF)/cofilin was excluded from the distal 0.4 micrometer++ of the lamellipodial network of keratocytes and in fibroblasts it was located within the depolymerization-resistant zone. These results suggest that ADF/cofilin, per se, is not sufficient for actin brush depolymerization and a regulatory step is required. Our evidence supports a dendritic nucleation model (Mullins, R.D., J.A. Heuser, and T.D. Pollard. 1998. Proc. Natl. Acad. Sci. USA. 95:6181-6186) for lamellipodial protrusion, which involves treadmilling of a branched actin array instead of treadmilling of individual filaments. In this model, Arp2/3 complex and ADF/cofilin have antagonistic activities. Arp2/3 complex is responsible for integration of nascent actin filaments into the actin network at the cell front and stabilizing pointed ends from depolymerization, while ADF/cofilin promotes filament disassembly at the rear of the brush, presumably by pointed end depolymerization after dissociation of the Arp2/3 complex.",
"title": ""
},
{
"docid": "1c01d2d8d9a11fa71b811a5afbfc0250",
"text": "This paper describes an interactive tour-guide robot, whic h was successfully exhibited in a Smithsonian museum. During its two weeks of operation, the robot interacted with more than 50,000 people, traversing more than 44km. Our approach specifically addresses issues such as safe navigation in unmodified and dynamic environments, and shortterm human-robot interaction.",
"title": ""
},
{
"docid": "03b8136e2ca033f42d497844d362813c",
"text": "We present a new approach for performing high-quality edge-preserving filtering of images and videos in real time. Our solution is based on a transform that defines an isometry between curves on the 2D image manifold in 5D and the real line. This transform preserves the geodesic distance between points on these curves, adaptively warping the input signal so that 1D edge-preserving filtering can be efficiently performed in linear time. We demonstrate three realizations of 1D edge-preserving filters, show how to produce high-quality 2D edge-preserving filters by iterating 1D-filtering operations, and empirically analyze the convergence of this process. Our approach has several desirable features: the use of 1D operations leads to considerable speedups over existing techniques and potential memory savings; its computational cost is not affected by the choice of the filter parameters; and it is the first edge-preserving filter to work on color images at arbitrary scales in real time, without resorting to subsampling or quantization. We demonstrate the versatility of our domain transform and edge-preserving filters on several real-time image and video processing tasks including edge-preserving filtering, depth-of-field effects, stylization, recoloring, colorization, detail enhancement, and tone mapping.",
"title": ""
},
{
"docid": "d06dc916942498014f9d00498c1d1d1f",
"text": "In this paper we propose a state space modeling approach for trust evaluation in wireless sensor networks. In our state space trust model (SSTM), each sensor node is associated with a trust metric, which measures to what extent the data transmitted from this node would better be trusted by the server node. Given the SSTM, we translate the trust evaluation problem to be a nonlinear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter, and thus the resulting algorithm is termed as iterative particle filter (IPF). The computational complexity of the IPF algorithm is theoretically linearly related with the dimension of the state. This property is desirable especially for high dimensional trust evaluation and state filtering problems. The performance of the proposed algorithm is evaluated by both simulations and real data analysis. Index Terms state space trust model, wireless sensor network, trust evaluation, particle filter, high dimensional. ✦",
"title": ""
},
{
"docid": "25eedd2defb9e0a0b22e44195a4b767b",
"text": "Social media such as Twitter have become an important method of communication, with potential opportunities for NLG to facilitate the generation of social media content. We focus on the generation of indicative tweets that contain a link to an external web page. While it is natural and tempting to view the linked web page as the source text from which the tweet is generated in an extractive summarization setting, it is unclear to what extent actual indicative tweets behave like extractive summaries. We collect a corpus of indicative tweets with their associated articles and investigate to what extent they can be derived from the articles using extractive methods. We also consider the impact of the formality and genre of the article. Our results demonstrate the limits of viewing indicative tweet generation as extractive summarization, and point to the need for the development of a methodology for tweet generation that is sensitive to genre-specific issues.",
"title": ""
},
{
"docid": "2d6718172b83ef2a109f91791af6a0c3",
"text": "BACKGROUND & AIMS\nWe previously established long-term culture conditions under which single crypts or stem cells derived from mouse small intestine expand over long periods. The expanding crypts undergo multiple crypt fission events, simultaneously generating villus-like epithelial domains that contain all differentiated types of cells. We have adapted the culture conditions to grow similar epithelial organoids from mouse colon and human small intestine and colon.\n\n\nMETHODS\nBased on the mouse small intestinal culture system, we optimized the mouse and human colon culture systems.\n\n\nRESULTS\nAddition of Wnt3A to the combination of growth factors applied to mouse colon crypts allowed them to expand indefinitely. Addition of nicotinamide, along with a small molecule inhibitor of Alk and an inhibitor of p38, were required for long-term culture of human small intestine and colon tissues. The culture system also allowed growth of mouse Apc-deficient adenomas, human colorectal cancer cells, and human metaplastic epithelia from regions of Barrett's esophagus.\n\n\nCONCLUSIONS\nWe developed a technology that can be used to study infected, inflammatory, or neoplastic tissues from the human gastrointestinal tract. These tools might have applications in regenerative biology through ex vivo expansion of the intestinal epithelia. Studies of these cultures indicate that there is no inherent restriction in the replicative potential of adult stem cells (or a Hayflick limit) ex vivo.",
"title": ""
},
{
"docid": "b21e817d95b11119b9dbafca89a69262",
"text": "This paper identifies and analyzes BitCoin features which may facilitate Bitcoin to become a global currency, as well as characteristics which may impede the use of BitCoin as a medium of exchange, a unit of account and a store of value, and compares BitCoin with standard currencies with respect to the main functions of money. Among all analyzed BitCoin features, the extreme price volatility stands out most clearly compared to standard currencies. In order to understand the reasons for such extreme price volatility, we attempt to identify drivers of BitCoin price formation and estimate their importance econometrically. We apply time-series analytical mechanisms to daily data for the 2009-2014 period. Our estimation results suggest that BitCoin attractiveness indicators are the strongest drivers of BitCoin price followed by market forces. In contrast, macro-financial developments do not determine BitCoin price in the long-run. Our findings suggest that as long as BitCoin price will be mainly driven by speculative investments, BitCoin will not be able to compete with standard currencies.",
"title": ""
},
{
"docid": "9ba1b3b31d077ad9a8b05e3736cb8716",
"text": "This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on handcrafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. Using a frame by frame labeling, we obtain nearly state-of-the-art performance on the NYU-v2 depth dataset with an accuracy of 64.5%. We then show that the labeling can be further improved by exploiting the temporal consistency in the video sequence of the scene. To that goal, we present a method producing temporally consistent superpixels from a streaming video. Among the different methods producing superpixel segmentations of an image, the graph-based approach of Felzenszwalb and Huttenlocher is broadly employed. One of its interesting properties is that the regions are computed in a greedy manner in quasi-linear time by using a minimum spanning tree. In a framework exploiting minimum spanning trees all along, we propose an efficient video segmentation approach that computes temporally consistent pixels in a causal manner, filling the need for causal and real-time applications. We illustrate the labeling of indoor scenes in video sequences that could be processed in real-time using appropriate hardware such as an FPGA.",
"title": ""
},
{
"docid": "e44d7f7668590726def631c5ec5f5506",
"text": "Today thanks to low cost and high performance DSP's, Kalman filtering (KF) becomes an efficient candidate to avoid mechanical sensors in motor control. We present in this work experimental results by using a steady state KF method to estimate the speed and rotor position for hybrid stepper motor. With this method the computing time is reduced. The Kalman gain is pre-computed from numerical simulation and introduced as a constant in the real time algorithm. The load torque is also on-line estimated by the same algorithm. At start-up the initial rotor position is detected by the impulse current method.",
"title": ""
},
{
"docid": "f4617250b5654a673219d779952db35f",
"text": "Convolutional neural network (CNN) models have achieved tremendous success in many visual detection and recognition tasks. Unfortunately, visual tracking, a fundamental computer vision problem, is not handled well using the existing CNN models, because most object trackers implemented with CNN do not effectively leverage temporal and contextual information among consecutive frames. Recurrent neural network (RNN) models, on the other hand, are often used to process text and voice data due to their ability to learn intrinsic representations of sequential and temporal data. Here, we propose a novel neural network tracking model that is capable of integrating information over time and tracking a selected target in video. It comprises three components: a CNN extracting best tracking features in each video frame, an RNN constructing video memory state, and a reinforcement learning (RL) agent making target location decisions. The tracking problem is formulated as a decision-making process, and our model can be trained with RL algorithms to learn good tracking policies that pay attention to continuous, inter-frame correlation and maximize tracking performance in the long run. We compare our model with an existing neural-network based tracking method and show that the proposed tracking approach works well in various scenarios by performing rigorous validation experiments on artificial video sequences with ground truth. To the best of our knowledge, our tracker is the first neural-network tracker that combines convolutional and recurrent networks with RL algorithms.",
"title": ""
},
{
"docid": "45f120b05b3c48cd95d5dd55031987cb",
"text": "n engl j med 359;6 www.nejm.org august 7, 2008 628 From the Department of Medicine (O.O.F., E.S.A.) and the Division of Infectious Diseases (P.A.M.), Johns Hopkins Bayview Medical Center, Johns Hopkins School of Medicine, Baltimore; the Division of Infectious Diseases (D.R.K.) and the Division of General Medicine (S.S.), University of Michigan Medical School, Ann Arbor; and the Department of Veterans Affairs Health Services Research and Development Center of Excellence, Ann Arbor, MI (S.S.). Address reprint requests to Dr. Antonarakis at the Johns Hopkins Bayview Medical Center, Department of Medicine, B-1 North, 4940 Eastern Ave., Baltimore, MD 21224, or at eantona1@ jhmi.edu.",
"title": ""
},
{
"docid": "d2fb10bdbe745ace3a2512ccfa414d4c",
"text": "In cloud computing environment, especially in big data era, adversary may use data deduplication service supported by the cloud service provider as a side channel to eavesdrop users' privacy or sensitive information. In order to tackle this serious issue, in this paper, we propose a secure data deduplication scheme based on differential privacy. The highlights of the proposed scheme lie in constructing a hybrid cloud framework, using convergent encryption algorithm to encrypt original files, and introducing differential privacy mechanism to resist against the side channel attack. Performance evaluation shows that our scheme is able to effectively save network bandwidth and disk storage space during the processes of data deduplication. Meanwhile, security analysis indicates that our scheme can resist against the side channel attack and related files attack, and prevent the disclosure of privacy information.",
"title": ""
},
{
"docid": "b5b61c9bc2889ca7442d53a853bbe4ab",
"text": "This paper presents a novel switching-converter-free ac–dc light-emitting diode (LED) driver with low-frequency-flicker reduction for general lighting applications. The proposed driving solution can minimize the system size as it enables the monolithic integration of the controller and power transistors while both the bulky off-chip electrolytic capacitors and magnetics are eliminated. Moreover, the driver can effectively reduce the harmful optical flicker at the double-line-frequency by employing a novel quasi-constant power control scheme while maintaining high efficiency and a good power factor (PF). The proposed driver is implemented with a single controller integrated circuit chip, which includes the controller and high-voltage power transistors, and the off-chip diode bridge and valley-fill circuit. The chip is fabricated with a 0.35- $\\mu \\text{m}$ 120-V high-voltage CMOS process and occupies 1.85 mm2. The driver can provide up to 7.8-W power to the LED and achieves 87.6% peak efficiency and an over 0.925 PF with only 17.3% flicker from a 110-Vac 60-Hz input.",
"title": ""
}
] |
scidocsrr
|
c6830a797a70bfc247f11f5836b017ee
|
The effects of handwriting experience on functional brain development in pre-literate children
|
[
{
"docid": "a39c0db041f31370135462af467426ed",
"text": "Part of the ventral temporal lobe is thought to be critical for face perception, but what determines this specialization remains unknown. We present evidence that expertise recruits the fusiform gyrus 'face area'. Functional magnetic resonance imaging (fMRI) was used to measure changes associated with increasing expertise in brain areas selected for their face preference. Acquisition of expertise with novel objects (greebles) led to increased activation in the right hemisphere face areas for matching of upright greebles as compared to matching inverted greebles. The same areas were also more activated in experts than in novices during passive viewing of greebles. Expertise seems to be one factor that leads to specialization in the face area.",
"title": ""
}
] |
[
{
"docid": "a671eda59afa0c1210e042209e3cb084",
"text": "BACKGROUND\nOutpatient Therapeutic feeding Program (OTP) brings the services for management of Severe Acute Malnutrition (SAM) closer to the community by making services available at decentralized treatment points within the primary health care settings, through the use of ready-to-use therapeutic foods, community outreach and mobilization. Little is known about the program outcomes. This study revealed the levels of program outcome indictors and determinant factors to recovery rate.\n\n\nMETHODS\nA retrospective cohort study was conducted on 628 children who had been managed for SAM under OTP from April/2008 to January/2012. The children were selected using systematic random sampling from 12 health posts and 4 health centers. The study relied on information of demographic characteristics, anthropometries, Plumpy'Nut, medical problems and routine medications intakes. The results were estimated using Kaplan-Meier survival curves, log-rank test and Cox-regression.\n\n\nRESULTS\nThe recovery, defaulter, mortality and weight gain rates were 61.78%, 13.85%, 3.02% and 5.23 gm/kg/day, respectively. Routine medications were administered partially and children with medical problems were managed inappropriately under the program. As a child consumed one more sachet of Plumpy'Nut, the recovery rate from SAM increased by 4% (HR = 1.04, 95%-CI = 1.03, 1.05, P<0.001). The adjusted hazard ratios to recovery of children with diarrhea, appetite loss with Plumpy'Nut and failure to gain weight were 2.20 (HR = 2.20, 95%-CI = 1.31, 3.41, P = 0.001), 4.49 (HR = 1.74, 95%-CI = 1.07, 2.83, P = 0.046) and 3.88 (HR = 1.95, 95%-CI = 1.17, 3.23, P<0.001), respectively. Children who took amoxicillin and de-worming had 95% (HR = 1.95, 95%-CI = 1.17, 3.23) and 74% (HR = 1.74, 95%-CI = 1.07, 2.83) more probability to recover from SAM as compared to those who didn't take them.\n\n\nCONCLUSIONS\nThe OTP was partially successful. Management of children with comorbidities under the program and partial administration of routine drugs were major threats for the program effectiveness. The stakeholders should focus on creating the capacity of the OTP providers on proper management of SAM to achieve fully effective program.",
"title": ""
},
{
"docid": "c514cb2acdf18fc4d64dc0df52d09d51",
"text": "Android introduced the dynamic code loading (DCL) mechanism to allow for code reuse, to achieve extensibility, to enable updating functionalities, or to boost application start-up performance. In spite of its wide adoption by developers, previous research has shown that the secure implementation of DCL-based functionality is challenging, often leading to remote code injection vulnerabilities. Unfortunately, previous attempts to address this problem by both the academic and Android developers communities are affected by either practicality or completeness issues, and, in some cases, are affected by severe vulnerabilities.\n In this paper, we propose, design, implement, and test Grab 'n Run, a novel code verification protocol and a series of supporting libraries, APIs, and tools, that address the problem by abstracting away from the developer many of the challenging implementation details. Grab 'n Run is designed to be practical: Among its tools, it provides a drop-in library, which requires no modifications to the Android framework or the underlying Dalvik/ART runtime, is very similar to the native API, and most code can be automatically rewritten to use it. Grab 'n Run also contains an application-rewriting tool, which allows to easily port legacy or third-party applications to use the secure APIs developed in this work.\n We evaluate the Grab 'n Run library with a user study, obtaining very encouraging results in vulnerability reduction, ease of use, and speed of development. We also show that the performance overhead introduced by our library is negligible. For the benefit of the security of the Android ecosystem, we released Grab 'n Run as open source.",
"title": ""
},
{
"docid": "e0b8b4c2431b92ff878df197addb4f98",
"text": "Malware classification is a critical part of the cybersecurity. Traditional methodologies for the malware classification typically use static analysis and dynamic analysis to identify malware. In this paper, a malware classification methodology based on its binary image and extracting local binary pattern (LBP) features is proposed. First, malware images are reorganized into 3 by 3 grids which are mainly used to extract LBP feature. Second, the LBP is implemented on the malware images to extract features in that it is useful in pattern or texture classification. Finally, Tensorflow, a library for machine learning, is applied to classify malware images with the LBP feature. Performance comparison results among different classifiers with different image descriptors such as GIST, a spatial envelope, and the LBP demonstrate that our proposed approach outperforms others.",
"title": ""
},
{
"docid": "f8fb8f9cd9efd6aefd60950b257c4abd",
"text": "The development of a 12-way X-band all-waveguide radial divider/combiner is presented. The radial combiner is comprised of three parts: a center feed, a radial line, and peripheral waveguide ports. The center feed is comprised of two sections: a rectangular waveguide section and a mode transducer section. The latter is a circular waveguide fed by four-way in-phase combiner to convert the rectangular waveguide TE10 mode to a TE10 circular waveguide mode for in-phase feeding of all peripheral ports. For design evaluation, the 12-way combiner was built and tested but also two back-to-back test fixtures, one for the mode transducer and the second for the radial combiner were fabricated and tested as well. The measured insertion loss and phase imbalance of the combiner over a 10% operating bandwidth are less than 0.35 dB and ±5°, respectively. The structure is suitable for high power and should handle few kilowatts.",
"title": ""
},
{
"docid": "de333f099bad8a29046453e099f91b84",
"text": "Financial time-series forecasting has long been a challenging problem because of the inherently noisy and stochastic nature of the market. In the high-frequency trading, forecasting for trading purposes is even a more challenging task, since an automated inference system is required to be both accurate and fast. In this paper, we propose a neural network layer architecture that incorporates the idea of bilinear projection as well as an attention mechanism that enables the layer to detect and focus on crucial temporal information. The resulting network is highly interpretable, given its ability to highlight the importance and contribution of each temporal instance, thus allowing further analysis on the time instances of interest. Our experiments in a large-scale limit order book data set show that a two-hidden-layer network utilizing our proposed layer outperforms by a large margin all existing state-of-the-art results coming from much deeper architectures while requiring far fewer computations.",
"title": ""
},
{
"docid": "6a4595e71ad1c4e6196f17af20c8c1ef",
"text": "We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminatorD spreads out its model capacity in the right way, the learning signals given to the generator G are more informative and diverse. These in turn help G to explore better and discover the real data manifold while avoiding large unstable jumps due to the erroneous extrapolation made by D . Our regularizer guides the rectifier discriminator D to better allocate its model capacity, by encouraging the binary activation patterns on selected internal layers of D to have a high joint entropy. Experimental results on both synthetic data and real datasets demonstrate improvements in stability and convergence speed of the GAN training, as well as higher sample quality. The approach also leads to higher classification accuracies in semi-supervised learning.",
"title": ""
},
{
"docid": "eabb9e04ff7609bf6754431b9ce6718f",
"text": "Electric phenomena play an important role in biophysics. Bioelectric processes control the ion transport processes across membranes, and are the basis for information transfer along neurons. These electrical effects are generally triggered by chemical processes. However, it is also possible to control such cell functions and transport processes by applying pulsed electric fields. This area of bioengineering, bioelectrics, offers new applications for pulsed power technology. One such application is prevention of biofouling, an effect that is based on reversible electroporation of cell membranes. Pulsed electric fields of several kV/cm amplitude and submicrosecond duration have been found effective in preventing the growth of aquatic nuisance species on surfaces. Reversible electroporation is also used for medical applications, e.g. for delivery of chemotherapeutic drugs into tumor cells, for gene therapy, and for transdermal drug delivery. Higher electric fields cause irreversible membrane damage. Pulses in the microsecond range with electric field intensities in the tens of kV/cm are being used for bacterial decontamination of water and liquid food. A new type of field-cell interaction, \"Intracellular Electromanipulation\", by means of nanosecond pulses at electric fields exceeding 50 kV/cm has been recently added to known bioelectric effects. It is based on capacitive coupling to cell substructures, has therefore the potential to affect transport processes across subcellular membranes, and may be used for gene transfer into cell nuclei. There are also indications that it triggers intracellular processes, such as programmed cell death, an effect, which can be used for cancer treatment. In order to generate the required electric fields for these processes, high voltage, high current sources are required. The pulse duration needs to be short to prevent thermal effects. Pulse power technology is the enabling technology for bioelectrics. The field of bioelectrics, therefore opens up a new research area for pulse power engineers, with fascinating applications in biology and medicine.",
"title": ""
},
{
"docid": "9c11facbe1749ca3b8733a45741ae4c3",
"text": "The robotics literature of the last two decades contains many important advances in the control of flexible joint robots. This is a survey of these advances and an assessment for future developments, concentrated mostly on the control issues of flexible joint robots.",
"title": ""
},
{
"docid": "643599f9b0dcfd270f9f3c55567ed985",
"text": "OBJECTIVES\nTo describe a new first-trimester sonographic landmark, the retronasal triangle, which may be useful in the early screening for cleft palate.\n\n\nMETHODS\nThe retronasal triangle, i.e. the three echogenic lines formed by the two frontal processes of the maxilla and the palate visualized in the coronal view of the fetal face posterior to the nose, was evaluated prospectively in 100 consecutive normal fetuses at the time of routine first-trimester sonographic screening at 11 + 0 to 13 + 6 weeks' gestation. In a separate study of five fetuses confirmed postnatally as having a cleft palate, ultrasound images, including multiplanar three-dimensional views, were analyzed retrospectively to review the retronasal triangle.\n\n\nRESULTS\nNone of the fetuses evaluated prospectively was affected by cleft lip and palate. During their first-trimester scan, the retronasal triangle could not be identified in only two fetuses. Reasons for suboptimal visualization of this area included early gestational age at scanning (11 weeks) and persistent posterior position of the fetal face. Of the five cases with postnatal diagnosis of cleft palate, an abnormal configuration of the retronasal triangle was documented in all cases on analysis of digitally stored three-dimensional volumes.\n\n\nCONCLUSIONS\nThis study demonstrates the feasibility of incorporating evaluation of the retronasal triangle into the routine evaluation of the fetal anatomy at 11 + 0 to 13 + 6 weeks' gestation. Because fetuses with cleft palate have an abnormal configuration of the retronasal triangle, focused examination of the midface, looking for this area at the time of the nuchal translucency scan, may facilitate the early detection of cleft palate in the first trimester.",
"title": ""
},
{
"docid": "fcca1f2fea2534818c2bbf1ba8c9bf97",
"text": "The world is increasingly going green in its energy use. Wind power is a green renewable source of energy that can compete effectively with fossil fuel as a generator of power in the electricity market. For this effective competion, the production cost must be comparable to that of fossil fuels or other sources of energy. The initial capital investment in wind power goes to machine and the supporting infrastructure. Any factors that lead to decrease in cost of energy such as turbine design, construction and operation are key to making wind power competitive as an alternative source of energy. A mathematical model of wind turbine is essential in the understanding of the behaviour of the wind turbine over its region of operation because it allows for the development of comprehensive control algorithms that aid in optimal operation of a wind turbine. Modelling enables control of wind turbine’s performance. This paper attempts to address part or whole of these general objectives of wind turbine modelling through examination of power coefficient parameter. Model results will be beneficial to designers and researchers of new generation turbines who can utilize the information to optimize the design of turbines and minimize generation costs leading 4528 A. W. Manyonge, R. M. Ochieng, F. N. Onyango and J. M. Shichikha to decrease in cost of wind energy and hence, making it an economically viable alternative source of energy. Mathematics Subject Classification: 65C20",
"title": ""
},
{
"docid": "f16f8803a2aa1e08d449de477d3568d5",
"text": "Polyphenols represent a group of chemical substances common in plants, structurally characterized by the presence of one or more phenol units. Polyphenols are the most abundant antioxidants in human diets and the largest and best studied class of polyphenols is flavonoids, which include several thousand compounds. Numerous studies confirm that they exert a protective action on human health and are key components of a healthy and balanced diet. Epidemiological studies correlate flavonoid intake with a reduced incidence of chronic diseases, such as cardiovascular disease, diabetes and cancer. The involvement of reactive oxygen species (ROS) in the etiology of these degenerative conditions has suggested that phytochemicals showing antioxidant activity may contribute to the prevention of these pathologies. The present review deals with phenolic compounds in plants and reports on recent studies. Moreover, the present work includes information on the relationships between the consumption of these compounds, via feeding, and risk of disease occurrence, i.e. the effect on human health. Results obtained on herbs, essential oils, from plants grown in tropical, subtropical and temperate regions, were also reported.",
"title": ""
},
{
"docid": "2165c5f8990234a862bf2dece88ea6eb",
"text": "This paper summarizes some recent advances on a set of tasks related to the processing of singing using state-of-the-art deep learning techniques. We discuss their achievements in terms of accuracy and sound quality, and the current challenges, such as availability of data and computing resources. We also discuss the impact that these advances do and will have on listeners and singers when they are integrated in commercial applications.",
"title": ""
},
{
"docid": "09a236e2c9e7be6a879ab5ca84e426c9",
"text": "A foot database comprising 3D foot shapes and footwear fitting reports of more than 300 participants is presented. It was primarily acquired to study footwear fitting, though it can also be used to analyse anatomical features of the foot. In fact, we present a technique for automatic detection of several foot anatomical landmarks, together with some empirical results.",
"title": ""
},
{
"docid": "97a3c599c7410a0e12e1784585260b95",
"text": "This research focuses on 3D printed carbon-epoxy composite components in which the reinforcing carbon fibers have been preferentially aligned during the micro-extrusion process. Most polymer 3D printing techniques use unreinforced polymers. By adding carbon fiber as a reinforcing material, properties such as mechanical strength, electrical conductivity, and thermal conductivity can be greatly enhanced. However, these properties are significantly influenced by the degree of fiber alignment (or lack thereof). A Design of Experiments (DOE) approach was used to identify significant process parameters affecting preferential fiber alignment in the micro-extrusion process. A 2D Fast Fourier Transform (FFT) was used with ImageJ software to quantify the degree of fiber alignment in micro-extruded carbonepoxy pastes. Based on analysis of experimental results, tensile test samples were printed with fibers aligned parallel and perpendicular to the tensile axis. A standard test method for tensile properties of plastic revealed that the 3D printed test coupons with fibers aligned parallel to the tensile axis were significantly better in tensile strength and modulus. Results of this research can be used to 3D print components with locally controlled fiber alignment that is difficult to achieve via conventional composite manufacturing techniques.",
"title": ""
},
{
"docid": "a0c6b1817a08d1be63dff9664852a6b4",
"text": "Despite years of HCI research on digital technology in museums, it is still unclear how different interactions impact on visitors'. A comparative evaluation of smart replicas, phone app and smart cards looked at the personal preferences, behavioural change, and the appeal of mobiles in museums. 76 participants used all three interaction modes and gave their opinions in a questionnaire; participants interaction was also observed. The results show the phone is the most disliked interaction mode while tangible interaction (smart card and replica combined) is the most liked. Preference for the phone favour mobility to the detriment of engagement with the exhibition. Different behaviours when interacting with the phone or the tangibles where observed. The personal visiting style appeared to be only marginally affected by the device. Visitors also expect museums to provide the phones against the current trend of developing apps in a \"bring your own device\" approach.",
"title": ""
},
{
"docid": "46cabd836b416be86a18262bc58e9dec",
"text": "Encrypting data on client-side before uploading it to a cloud storage is essential for protecting users' privacy. However client-side encryption is at odds with the standard practice of deduplication. Reconciling client-side encryption with cross-user deduplication is an active research topic. We present the first secure cross-user deduplication scheme that supports client-side encryption without requiring any additional independent servers. Interestingly, the scheme is based on using a PAKE (password authenticated key exchange) protocol. We demonstrate that our scheme provides better security guarantees than previous efforts. We show both the effectiveness and the efficiency of our scheme, via simulations using realistic datasets and an implementation.",
"title": ""
},
{
"docid": "9cf0d6e811f7cdafe4316b49d060d192",
"text": "Medical imaging plays a central role in a vast range of healthcare practices. The usefulness of 3D visualizations has been demonstrated for many types of treatment planning. Nevertheless, full access to 3D renderings outside of the radiology department is still scarce even for many image-centric specialties. Our work stems from the hypothesis that this under-utilization is partly due to existing visualization systems not taking the prerequisites of this application domain fully into account. We have developed a medical visualization table intended to better fit the clinical reality. The overall design goals were two-fold: similarity to a real physical situation and a very low learning threshold. This paper describes the development of the visualization table with focus on key design decisions. The developed features include two novel interaction components for touch tables. A user study including five orthopedic surgeons demonstrates that the system is appropriate and useful for this application domain.",
"title": ""
},
{
"docid": "032526c7855e0895ae88748c309b21c0",
"text": "Amazon is a well-known online company that sells products such as books and music. It also tracks the purchasing patterns of a variety of groups including private corporations, government organizations, and geographic areas. Amazon defines each of these groups as a “purchase circle.” For each purchase circle, Amazon lists the bestselling items in the Books, Music, Video, DVDs, and Electronics product categories. Our objective is to create a dynamic visualization of Amazon’s purchase circles that focuses on looking at the Top 10 music titles and genres that are popular in selected U.S. cities. We present a visualization known as CityPrints, a dynamic query-based tool for producing color-coded visual representations of purchase circles data. CityPrints allows users to quickly compare popular titles in different U.S. cities, identify which music genres are popular in a given city, and rank cities according to how popular a given music genre is in that city.",
"title": ""
},
{
"docid": "42e25eaf06693b3544498d959a55bd1e",
"text": "A standard view of the semantics of natural language sentences or utterances is that a sentence has a particular logical structure and is assigned truth-conditional content on the basis of that structure. Such a semantics is assumed to be able to capture the logical properties of sentences, including necessary truth, contradiction and valid inference; our knowledge of these properties is taken to be part of our semantic competence as native speakers of the language. The following examples pose a problem for this view of semantics:",
"title": ""
},
{
"docid": "38984b625ac24137b23444f4bd53a312",
"text": "Presence Volume /, Number 3. Summer / 992 Reprinted from Espacios 23-24, 1955. © / 992 The Massachusetts Institute of Technology Pandemonium reigns supreme in the film industry. Ever)' studio is hastily converting to its own \"revolutionär)'\" system—Cinerama, Colorama, Panoramic Screen, Cinemascope, Three-D, and Stereophonic Sound. A dozen marquees in Time Square are luring customers into the realm of a \"sensational new experience.\" Everywhere we see the \"initiated\" holding pencils before the winked eyes of the \"uninitiated\" explaining the mysteries of 3-D. The critics are lining up pro and con concluding their articles profoundly with \"after all, it's the story that counts.\" Along with other filmgoers desiring orientation, I have been reading these articles and have sadly discovered that they reflect this confusion rather than illuminate it. It is apparent that the inability to cope with the problem stems from a refusal to adopt a wider frame of reference, and from a meager understanding of the place art has in life generally. All living things engage, on a higher or lower level, in a continuous cycle of orientation and action. For example, an animal on a mountain ledge hears a rumbling sound and sees an avalanche of rocks descending on it. It cries with",
"title": ""
}
] |
scidocsrr
|
d4de468ebe757a127d707df5c2bef80d
|
Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image Segmentation
|
[
{
"docid": "88b0bdfb06e91f63d1930814388d0c9c",
"text": "Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https: //bitbucket.org/deeplab/deeplab-public.",
"title": ""
},
{
"docid": "b01fbfbe98960e81359c73009a06f5bb",
"text": "Multiple instance learning (MIL) can reduce the need for costly annotation in tasks such as semantic segmentation by weakening the required degree of supervision. We propose a novel MIL formulation of multi-class semantic segmentation learning by a fully convolutional network. In this setting, we seek to learn a semantic segmentation model from just weak image-level labels. The model is trained endto-end to jointly optimize the representation while disambiguating the pixel-image label assignment. Fully convolutional training accepts inputs of any size, does not need object proposal pre-processing, and offers a pixelwise loss map for selecting latent instances. Our multi-class MIL loss exploits the further supervision given by images with multiple labels. We evaluate this approach through preliminary experiments on the PASCAL VOC segmentation challenge.",
"title": ""
}
] |
[
{
"docid": "af2f9dd69e90ed3c61e09b5b53fa1cdb",
"text": "Cellular networks are one of the cornerstones of our information-driven society. However, existing cellular systems have been seriously challenged by the explosion of mobile data traffic, the emergence of machine-type communications, and the flourishing of mobile Internet services. In this article, we propose CONCERT, a converged edge infrastructure for future cellular communications and mobile computing services. The proposed architecture is constructed based on the concept of control/data (C/D) plane decoupling. The data plane includes heterogeneous physical resources such as radio interface equipment, computational resources, and software-defined switches. The control plane jointly coordinates physical resources to present them as virtual resources, over which software-defined services including communications, computing, and management can be deployed in a flexible manner. Moreover, we introduce new designs for physical resources placement and task scheduling so that CONCERT can overcome the drawbacks of the existing baseband-up centralization approach and better facilitate innovations in next-generation cellular networks. These advantages are demonstrated with application examples on radio access networks with C/D decoupled air interface, delaysensitive machine-type communications, and realtime mobile cloud gaming. We also discuss some fundamental research issues arising with the proposed architecture to illuminate future research directions.",
"title": ""
},
{
"docid": "2c597e49524d641ddfe1ec552bee2014",
"text": "This paper presents a fully integrated CMOS start-up circuit for a low voltage battery-less harvesting application. The proposed topology is based on a step-up charge pump using depletion transistors instead of enhancement transistors. With this architecture, we can obtain a self-starting voltage below the enhancement transistor's threshold due to its normally-on operation. The key advantages are the CMOS compatibility, inductor-less solution and no extra post-fabrication processing. The topology has been simulated in 0.18μm technology using a transistor-level model and has been compared to the traditional charge pump structure. The depletion-based voltage doubler charge pump enables operation from an input voltage as low as 250mV compared to 400mV in an enhancement-based one. The proposed topology can also achieve other conversion ratios such as 1:-1 inverter or 1:N step-up.",
"title": ""
},
{
"docid": "76669015c232bd5175ca296fc3d9ff2f",
"text": "In this paper, an optimal aggregation and counter-aggregation (drill-down) methodology is proposed on multidimensional data cube. The main idea is to aggregate on smaller cuboids after partitioning those depending on the cardinality of the individual dimensions. Based on the operations to make these partitions, a Galois Connection is identified for formal analysis that allow to guarantee the soundness of optimizations of storage space and time complexity for the abstraction and concretization functions defined on the lattice structure. Our contribution can be seen as an application to OLAP operations on multidimensional data model in the Abstract Interpretation framework.",
"title": ""
},
{
"docid": "9b646ef8c6054f9a4d85cf25e83d415c",
"text": "In this paper, a mobile robot with a tetrahedral shape for its basic structure is presented as a thrown robot for search and rescue robot application. The Tetrahedral Mobile Robot has its body in the center of the whole structure. The driving parts that produce the propelling force are located at each corner. As a driving wheel mechanism, we have developed the \"Omni-Ball\" with one active and two passive rotational axes, which are explained in detail. An actual prototype model has been developed to illustrate the concept and to perform preliminary motion experiments, through which the basic performance of the Tetrahedral Mobile Robot was confirmed",
"title": ""
},
{
"docid": "f5be73d82f441b5f0d6011bbbec8b759",
"text": "Abnormal crowd behavior detection is an important research issue in computer vision. The traditional methods first extract the local spatio-temporal cuboid from video. Then the cuboid is described by optical flow or gradient features, etc. Unfortunately, because of the complex environmental conditions, such as severe occlusion, over-crowding, etc., the existing algorithms cannot be efficiently applied. In this paper, we derive the high-frequency and spatio-temporal (HFST) features to detect the abnormal crowd behaviors in videos. They are obtained by applying the wavelet transform to the plane in the cuboid which is parallel to the time direction. The high-frequency information characterize the dynamic properties of the cuboid. The HFST features are applied to the both global and local abnormal crowd behavior detection. For the global abnormal crowd behavior detection, Latent Dirichlet allocation is used to model the normal scenes. For the local abnormal crowd behavior detection, Multiple Hidden Markov Models, with an competitive mechanism, is employed to model the normal scenes. The comprehensive experiment results show that the speed of detection has been greatly improved using our approach. Moreover, a good accuracy has been achieved considering the false positive and false negative detection rates.",
"title": ""
},
{
"docid": "b8a681b6c928d8b84fa5f30154d5af85",
"text": "Medicine relies on the use of pharmacologically active agents (drugs) to manage and treat disease. However, drugs are not inherently effective; the benefit of a drug is directly related to the manner by which it is administered or delivered. Drug delivery can affect drug pharmacokinetics, absorption, distribution, metabolism, duration of therapeutic effect, excretion, and toxicity. As new therapeutics (e.g., biologics) are being developed, there is an accompanying need for improved chemistries and materials to deliver them to the target site in the body, at a therapeutic concentration, and for the required period of time. In this Perspective, we provide an historical overview of drug delivery and controlled release followed by highlights of four emerging areas in the field of drug delivery: systemic RNA delivery, drug delivery for localized therapy, oral drug delivery systems, and biologic drug delivery systems. In each case, we present the barriers to effective drug delivery as well as chemical and materials advances that are enabling the field to overcome these hurdles for clinical impact.",
"title": ""
},
{
"docid": "d75d453181293c92ec9bab800029e366",
"text": "For a majority of applications implemented today, the Intermediate Bus Architecture (IBA) has been the preferred power architecture. This power architecture has led to the development of the isolated, semi-regulated DC/DC converter known as the Intermediate Bus Converter (IBC). Fixed ratio Bus Converters that employ a new power topology known as the Sine Amplitude Converter (SAC) offer dramatic improvements in power density, noise reduction, and efficiency over the existing IBC products. As electronic systems continue to trend toward lower voltages with higher currents and as the speed of contemporary loads - such as state-of-the-art processors and memory - continues to increase, the power systems designer is challenged to provide small, cost effective and efficient solutions that offer the requisite performance. Traditional power architectures cannot, in the long run, provide the required performance. Vicor's Factorized Power Architecture (FPA), and the implementation of V·I Chips, provides a revolutionary new and optimal power conversion solution that addresses the challenge in every respect. The technology behind these power conversion engines used in the IBC and V·I Chips is analyzed and contextualized in a system perspective.",
"title": ""
},
{
"docid": "048c67f19bdb634e39e98296fd1107cb",
"text": "It has been suggested that music and speech maintain entirely dissociable mental processing systems. The current study, however, provides evidence that there is an overlap in the processing of certain shared aspects of the two. This study focuses on fundamental frequency (pitch), which is an essential component of melodic units in music and lexical and/or intonational units in speech. We hypothesize that extensive experience with the processing of musical pitch can transfer to a lexical pitch-processing domain. To that end, we asked nine English-speaking musicians and nine Englishspeaking non-musicians to identify and discriminate the four lexical tones of Mandarin Chinese. The subjects performed significantly differently on both tasks; the musicians identified the tones with 89% accuracy and discriminated them with 87% accuracy, while the non-musicians identified them with only 69% accuracy and discriminated them with 71% accuracy. These results provide counter-evidence to the theory of dissociation between music and speech processing.",
"title": ""
},
{
"docid": "f631cca2bd0c22f60af1d5f63a7522b5",
"text": "We introduce the problem of k-pattern set mining, concerned with finding a set of k related patterns under constraints. This contrasts to regular pattern mining, where one searches for many individual patterns. The k-pattern set mining problem is a very general problem that can be instantiated to a wide variety of well-known mining tasks including concept-learning, rule-learning, redescription mining, conceptual clustering and tiling. To this end, we formulate a large number of constraints for use in k-pattern set mining, both at the local level, that is, on individual patterns, and on the global level, that is, on the overall pattern set. Building general solvers for the pattern set mining problem remains a challenge. Here, we investigate to what extent constraint programming (CP) can be used as a general solution strategy. We present a mapping of pattern set constraints to constraints currently available in CP. This allows us to investigate a large number of settings within a unified framework and to gain insight in the possibilities and limitations of these solvers. This is important as it allows us to create guidelines in how to model new problems successfully and how to model existing problems more efficiently. It also opens up the way for other solver technologies.",
"title": ""
},
{
"docid": "06272b99e56db2cb79c336047268c064",
"text": "In this paper, we describe our proposed approach for participating in the Third Emotion Recognition in the Wild Challenge (EmotiW 2015). We focus on the sub-challenge of Audio-Video Based Emotion Recognition using the AFEW dataset. The AFEW dataset consists of 7 emotion groups corresponding to the 7 basic emotions. Each group includes multiple videos from movie clips with people acting a certain emotion. In our approach, we extract LBP-TOP-based video features, openEAR energy/spectral-based audio features, and CNN (convolutional neural network) based deep image features by fine-tuning a pre-trained model with extra emotion images from the web. For each type of features, we run a SVM grid search to find the best RBF kernel. Then multi-kernel learning is employed to combine the RBF kernels to accomplish the feature fusion and generate a fused RBF kernel. Running multi-class SVM classification, we achieve a 45.23% test accuracy on the AFEW dataset. We then apply a decision optimization method to adjust the label distribution closer to the ground truth, by setting offsets for some of the classifiers' prediction confidence score. By applying this modification, the test accuracy increases to 50.46%, which is a significant improvement comparing to the baseline accuracy 39.33% .",
"title": ""
},
{
"docid": "184596076bf83518c3cf3f693e62cad7",
"text": "High-K (HK) and Metal-Gate (MG) transistor reliability is very challenging both from the standpoint of introduction of new materials and requirement of higher field of operation for higher performance. In this paper, key and unique HK+MG intrinsic transistor reliability mechanisms observed on 32nm logic technology generation is presented. We'll present intrinsic reliability similar to or better than 45nm generation.",
"title": ""
},
{
"docid": "6f9bca88fbb59e204dd8d4ae2548bd2d",
"text": "As the biomechanical literature concerning softball pitching is evolving, there are no data to support the mechanics of softball position players. Pitching literature supports the whole kinetic chain approach including the lower extremity in proper throwing mechanics. The purpose of this project was to examine the gluteal muscle group activation patterns and their relationship with shoulder and elbow kinematics and kinetics during the overhead throwing motion of softball position players. Eighteen Division I National Collegiate Athletic Association softball players (19.2 ± 1.0 years; 68.9 ± 8.7 kg; 168.6 ± 6.6 cm) who were listed on the active playing roster volunteered. Electromyographic, kinematic, and kinetic data were collected while players caught a simulated hit or pitched ball and perform their position throw. Pearson correlation revealed a significant negative correlation between non-throwing gluteus maximus during the phase of maximum external rotation to maximum internal rotation (MIR) and elbow moments at ball release (r = −0.52). While at ball release, trunk flexion and rotation both had a positive relationship with shoulder moments at MIR (r = 0.69, r = 0.82, respectively) suggesting that the kinematic actions of the pelvis and trunk are strongly related to the actions of the shoulder during throwing.",
"title": ""
},
{
"docid": "067ec456d76cce7978b3d2f0c67269ed",
"text": "With the development of deep learning, the performance of hyperspectral image (HSI) classification has been greatly improved in recent years. The shortage of training samples has become a bottleneck for further improvement of performance. In this paper, we propose a novel convolutional neural network framework for the characteristics of hyperspectral image data called HSI-CNN, which can also provides ideas for the processing of one-dimensional data. Firstly, the spectral-spatial feature is extracted from a target pixel and its neighbors. Then, a number of one-dimensional feature maps, obtained by convolution operation on spectral-spatial features, are stacked into a two-dimensional matrix. Finally, the two-dimensional matrix considered as an image is fed into standard CNN. This is why we call it HSI-CNN. In addition, we also implements two depth network classification models, called HSI-CNN+XGBoost and HSI-CapsNet, in order to compare the performance of our framework. Experiments show that the performance of hyperspectral image classification is improved efficiently with HSI-CNN framework. We evaluate the model's performance using four popular HSI datasets, which are the Kennedy Space Center (KSC), Indian Pines (IP), Pavia University scene (PU) and Salinas scene (SA). As far as we concerned, the accuracy of HSI-CNN has kept pace with the state-of-art methods, which is 99.28%, 99.09%, 99.57%, 98.97% separately.",
"title": ""
},
{
"docid": "5275389fa8d15d5652cfda4e2afe389a",
"text": "In this article we develop a semantic typology of gradable predicates, with special emphasis on deverbal adjectives. We argue for the linguistic relevance of this typology by demonstrating that the distribution and interpretation of degreemodifiers is sensitive to its twomajor classificatory parameters: (1) whether a gradable predicate is associated with what we call an open or closed scale, and (2) whether the standard of comparison for the applicability of the predicate is absolute or relative to a context. We further show that the classification of an important subclass of adjectives within the typology is largely predictable. Specifically, the scale structure of a deverbal gradable adjective correlates either with the algebraic part structure of the event denoted by its source verb or with the part structure of the entities to which the adjective applies. These correlations underscore the fact that gradability is characteristic not only of adjectives but also of verbs and nouns, and that scalar properties are shared by categorially distinct but derivationally related expressions.*",
"title": ""
},
{
"docid": "f5182ad077b1fdaa450d16544d63f01b",
"text": "This article paves the knowledge about the next generation Bluetooth Standard-BT 5 that will bring some mesmerizing upgrades including increased range, speed, and broadcast messaging capacity. Further, three relevant queries such as what is better about BT 5, why does that matter, and how will it affect IoT have been explained to gather related information so that developers, practitioners, and naive people could formulate BT 5 into IoT based applications while assimilating the need of short range communication in true sense.",
"title": ""
},
{
"docid": "e59ba6bdbea44811a957ca3fb42332c5",
"text": "Myopathies are gaining the attention of poultry meat producers globally. White Striping (WS) is a condition characterized by the occurrence of white striations parallel to muscle fibers on breast, thigh, and tender muscles of broilers, while Woody Breast (WB) imparts tougher consistency to raw breast fillets. Histologically, both conditions have been characterized with myodegeneration and necrosis, fibrosis, lipidosis, and regenerative changes. The occurrence of these modern myopathies has been associated with increased growth rate in birds. The severity of the myopathies can adversely affect consumer acceptance of raw cut up parts and/or quality of further processed poultry meat products, resulting in huge economic loss to the industry. Even though gross and/or histologic characteristics of modern myopathies are similar to some of the known conditions, such as hereditary muscular dystrophy, nutritional myopathy, toxic myopathies, and marbling, WS and WB could have a different etiology. As a result, there is a need for future studies to identify markers for WS and WB in live birds and genetic, nutritional, and/or management strategies to alleviate the condition.",
"title": ""
},
{
"docid": "45f2599c6a256b55ee466c258ba93f48",
"text": "Functional turnover of transcription factor binding sites (TFBSs), such as whole-motif loss or gain, are common events during genome evolution. Conventional probabilistic phylogenetic shadowing methods model the evolution of genomes only at nucleotide level, and lack the ability to capture the evolutionary dynamics of functional turnover of aligned sequence entities. As a result, comparative genomic search of non-conserved motifs across evolutionarily related taxa remains a difficult challenge, especially in higher eukaryotes, where the cis-regulatory regions containing motifs can be long and divergent; existing methods rely heavily on specialized pattern-driven heuristic search or sampling algorithms, which can be difficult to generalize and hard to interpret based on phylogenetic principles. We propose a new method: Conditional Shadowing via Multi-resolution Evolutionary Trees, or CSMET, which uses a context-dependent probabilistic graphical model that allows aligned sites from different taxa in a multiple alignment to be modeled by either a background or an appropriate motif phylogeny conditioning on the functional specifications of each taxon. The functional specifications themselves are the output of a phylogeny which models the evolution not of individual nucleotides, but of the overall functionality (e.g., functional retention or loss) of the aligned sequence segments over lineages. Combining this method with a hidden Markov model that autocorrelates evolutionary rates on successive sites in the genome, CSMET offers a principled way to take into consideration lineage-specific evolution of TFBSs during motif detection, and a readily computable analytical form of the posterior distribution of motifs under TFBS turnover. On both simulated and real Drosophila cis-regulatory modules, CSMET outperforms other state-of-the-art comparative genomic motif finders.",
"title": ""
},
{
"docid": "0c529c9a9f552f89e0c0ad3e000cbd37",
"text": "In this article, I introduce an emotion paradox: People believe that they know an emotion when they see it, and as a consequence assume that emotions are discrete events that can be recognized with some degree of accuracy, but scientists have yet to produce a set of clear and consistent criteria for indicating when an emotion is present and when it is not. I propose one solution to this paradox: People experience an emotion when they conceptualize an instance of affective feeling. In this view, the experience of emotion is an act of categorization, guided by embodied knowledge about emotion. The result is a model of emotion experience that has much in common with the social psychological literature on person perception and with literature on embodied conceptual knowledge as it has recently been applied to social psychology.",
"title": ""
},
{
"docid": "db04a402e0c7d93afdaf34c0d55ded9a",
"text": " Drowsiness and increased tendency to fall asleep during daytime is still a generally underestimated problem. An increased tendency to fall asleep limits the efficiency at work and substantially increases the risk of accidents. Reduced alertness is difficult to assess, particularly under real life settings. Most of the available measuring procedures are laboratory-oriented and their applicability under field conditions is limited; their validity and sensitivity are often a matter of controversy. The spontaneous eye blink is considered to be a suitable ocular indicator for fatigue diagnostics. To evaluate eye blink parameters as a drowsiness indicator, a contact-free method for the measurement of spontaneous eye blinks was developed. An infrared sensor clipped to an eyeglass frame records eyelid movements continuously. In a series of sessions with 60 healthy adult participants, the validity of spontaneous blink parameters was investigated. The subjective state was determined by means of questionnaires immediately before the recording of eye blinks. The results show that several parameters of the spontaneous eye blink can be used as indicators in fatigue diagnostics. The parameters blink duration and reopening time in particular change reliably with increasing drowsiness. Furthermore, the proportion of long closure duration blinks proves to be an informative parameter. The results demonstrate that the measurement of eye blink parameters provides reliable information about drowsiness/sleepiness, which may also be applied to the continuous monitoring of the tendency to fall asleep.",
"title": ""
}
] |
scidocsrr
|
ed73ec251a124fff0f1e09adf0e5bab1
|
Intelligent Lighting Control for Vision-Based Robotic Manipulation
|
[
{
"docid": "936d92f1afcab16a9dfe24b73d5f986d",
"text": "Active vision techniques use programmable light sources, such as projectors, whose intensities can be controlled over space and time. We present a broad framework for fast active vision using Digital Light Processing (DLP) projectors. The digital micromirror array (DMD) in a DLP projector is capable of switching mirrors “on” and “off” at high speeds (10/s). An off-the-shelf DLP projector, however, effectively operates at much lower rates (30-60Hz) by emitting smaller intensities that are integrated over time by a sensor (eye or camera) to produce the desired brightness value. Our key idea is to exploit this “temporal dithering” of illumination, as observed by a high-speed camera. The dithering encodes each brightness value uniquely and may be used in conjunction with virtually any active vision technique. We apply our approach to five well-known problems: (a) structured light-based range finding, (b) photometric stereo, (c) illumination de-multiplexing, (d) high frequency preserving motion-blur and (e) separation of direct and global scene components, achieving significant speedups in performance. In all our methods, the projector receives a single image as input whereas the camera acquires a sequence of frames.",
"title": ""
}
] |
[
{
"docid": "8e8d7b2411fa0b0c19d745ce85fcec11",
"text": "Parallel distributed processing (PDP) architectures demonstrate a potentially radical alternative to the traditional theories of language processing that are based on serial computational models. However, learning complex structural relationships in temporal data presents a serious challenge to PDP systems. For example, automata theory dictates that processing strings from a context-free language (CFL) requires a stack or counter memory device. While some PDP models have been hand-crafted to emulate such a device, it is not clear how a neural network might develop such a device when learning a CFL. This research employs standard backpropagation training techniques for a recurrent neural network (RNN) in the task of learning to predict the next character in a simple deterministic CFL (DCFL). We show that an RNN can learn to recognize the structure of a simple DCFL. We use dynamical systems theory to identify how network states re ̄ ect that structure by building counters in phase space. The work is an empirical investigation which is complementary to theoretical analyses of network capabilities, yet original in its speci ® c con® guration of dynamics involved. The application of dynamical systems theory helps us relate the simulation results to theoretical results, and the learning task enables us to highlight some issues for understanding dynamical systems that process language with counters.",
"title": ""
},
{
"docid": "2fd7cc65c34551c90a72fc3cb4665336",
"text": "Generating natural language requires conveying content in an appropriate style. We explore two related tasks on generating text of varying formality: monolingual formality transfer and formality-sensitive machine translation. We propose to solve these tasks jointly using multi-task learning, and show that our models achieve state-of-the-art performance for formality transfer and are able to perform formality-sensitive translation without being explicitly trained on styleannotated translation examples.",
"title": ""
},
{
"docid": "d10e4740076aeba6441b74512e6993df",
"text": "Purpose – In recent decades, innovation management has changed. This article provides an overview of the changes that have taken place, focusing on innovation management in large companies, with the aim of explaining that innovation management has evolved toward a contextual approach, which it will explain and illustrate using two cases. Design/methodology/approach – The basic approach in this article is to juxtapose a review of existing literature regarding trends in innovation management and research and development (R&D) management generations, and empirical data about actual approaches to innovation. Findings – The idea that there is a single mainstream innovation approach does not match with the (successful) approaches companies have adopted. What is required is a contextual approach. However, research with regard to such an approach is fragmented. Decisions to adapt the innovation management approach to the newness of an innovation or the type of organization respectively have thus far been investigated separately. Research limitations/implications – An integrated approach is needed to support the intuitive decisions managers make to tailor their innovation approach to the type of innovation, organization(s), industry and country/culture. Originality/value – The practical and scientific value of this paper is that is describes an integrated approach to contextual innovation.",
"title": ""
},
{
"docid": "9a13a2baf55676f82457f47d3929a4e7",
"text": "Humans are a cultural species, and the study of human psychology benefits from attention to cultural influences. Cultural psychology's contributions to psychological science can largely be divided according to the two different stages of scientific inquiry. Stage 1 research seeks cultural differences and establishes the boundaries of psychological phenomena. Stage 2 research seeks underlying mechanisms of those cultural differences. The literatures regarding these two distinct stages are reviewed, and various methods for conducting Stage 2 research are discussed. The implications of culture-blind and multicultural psychologies for society and intergroup relations are also discussed.",
"title": ""
},
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
},
{
"docid": "aa98ed0384fe6161d044cb3aa2225a98",
"text": "Article history: Received 22 December 2013 Received in revised form 15 July 2014 Available online 26 September 2015 Dedicated to the Memory of Mary Ellen Rudin, a Great Person and a Great Mathematician MSC: 54H11 54A25 54B05",
"title": ""
},
{
"docid": "9753b3ad7eac092f45035c941b59ebcb",
"text": "Since the metabolic disorder may be the high risk that contribute to the progress of Alzheimer's disease (AD). Overtaken of High-fat, high-glucose or high-cholesterol diet may hasten the incidence of AD in later life, due to the metabolic dysfunction. But the metabolism of lipid in brain and the exact effect of lipid to brain or to the AD's pathological remain controversial. Here we summarize correlates of lipid metabolism and AD to provide more foundation for the daily nursing of AD sensitive patients.",
"title": ""
},
{
"docid": "680fa29fcd41421a2b3b235555f0cb91",
"text": "Brown adipose tissue (BAT) is the main site of adaptive thermogenesis and experimental studies have associated BAT activity with protection against obesity and metabolic diseases, such as type 2 diabetes mellitus and dyslipidaemia. Active BAT is present in adult humans and its activity is impaired in patients with obesity. The ability of BAT to protect against chronic metabolic disease has traditionally been attributed to its capacity to utilize glucose and lipids for thermogenesis. However, BAT might also have a secretory role, which could contribute to the systemic consequences of BAT activity. Several BAT-derived molecules that act in a paracrine or autocrine manner have been identified. Most of these factors promote hypertrophy and hyperplasia of BAT, vascularization, innervation and blood flow, processes that are all associated with BAT recruitment when thermogenic activity is enhanced. Additionally, BAT can release regulatory molecules that act on other tissues and organs. This secretory capacity of BAT is thought to be involved in the beneficial effects of BAT transplantation in rodents. Fibroblast growth factor 21, IL-6 and neuregulin 4 are among the first BAT-derived endocrine factors to be identified. In this Review, we discuss the current understanding of the regulatory molecules (the so-called brown adipokines or batokines) that are released by BAT that influence systemic metabolism and convey the beneficial metabolic effects of BAT activation. The identification of such adipokines might also direct drug discovery approaches for managing obesity and its associated chronic metabolic diseases.",
"title": ""
},
{
"docid": "14d2f63cb324b3013c5fbf138a7f9dff",
"text": "THISARTICLE WILL EXPLORE THE ROLE OF THE LIBRARIAN arid of the service perspective in the digital library environment. The focus of the article will be limited to the topic of librarian/user collaboration where the librarian and user are not co-located. The role of the librarian will be explored as outlined in the literature on digital libraries, some studies will be examined that attempt to put the service perspective in the digital library, survey existing initiatives in providing library services electronically, and outline potential service perspectives for the digital library. INTRODUCTION The digital library offers users the prospect of access to electronic resources at their convenience temporally and spatially. Users do not have to be concerned with the physical library’s hours of operation, and users do not have to go physically to the library to access resources. Much has been written about the digital library. The focus of most studies, papers, and articles has been on the technology or on the types of resources offered. Human interaction in the digital library is discussed far less frequently. One would almost get the impression that the service tradition of the physical library will be unnecessary and redundant in the digital library environment. Bernie Sloan, Office for Planning and Budget, Room 338, 506 S. Wright Street, University of Illinois, Urbana, IL 61801 LIBRARY TRENDS, Vol. 47, No. 1, Summer 1998,pp. 117-143 01998 The Board of’Trustees, University of Illinois 118 I.IBRARY TRENDS/SUMMER 1998 DEFINING LIBRARY-WHERE SERVICE FITI N ? THE DIGITA DOES Defining the digital library is an interesting, but somewhat daunting, task. There is no shortage of proposed definitions. One would think that there would be some commonly accepted and fairly straightforward standard definition, but there does not appear to be. Rather, there are many. And one common thread among all these definitions is a heavy emphasis on rrsourcesand an apparent lack of emphasis on librarians and the services they provide. The Association of Research Libraries (ARL) notes: “There are many definitions of a ‘digital library’. . . .Terms such as ‘electronic library’ and ‘virtual library’ are often used synonymously” (Association of Research Libraries, 1995). The AlU relies on Karen Drabenstott’s (1994) Analytical Reuiai~ojthe Library ofthe Future for its inspiration. In defining the digital library, Drabenstott offers fourteen definitions published between 1987 and 1993. The commonalties of these different definitions are summarized as follows: The digital library is not a single entity. The digital library requires technology to link the resources of many libraries and information services. Transparent to end-users are the linkages between the many digital libraries and information services. Universal access to digital libraries and information services is a goal. Digital libraries are not limited to document surrogates; they extend to digital artifacts that cannot be represented or distributed in printed formats. (p.9) One interesting aspect of Drabenstott’s summary definition is that, while there is a user-orientation stated, as well as references to technology and information resources, there is no reference to the role of the librarian in the digital library. Another report by Saffady (1995) cites thirty definitions of the digital library published between 1991 and 1994. Among the terms Saffady uses in describing these various definitions are: “repositories of.. 
.information assets,” “large information repositories,” “various online databases and.. .information products,” “computer storage devices on which information repositories reside,” “computerized, networked library systems,” accessible through the Internet,” “CD-ROM information products,” “database servers,” “libraries with online catalogs,” and “collections of computer-processible information” (p. 2 2 3 ) . Saffady summarizes these definitions by stating: “Broadly defined, a digital library is a collection of computer-processible information or a repository for such information” (p. 223). He then narrows the definition by noting that “a digital library is a library that maintains all, or a substantial part, of its collection in computer-processible form as an alternative, supplement, or complement to the conventional printed and microform materials that currently domiSLOAN/SERVICE PERSPECTIVES FOR THE DIGITAL LIBRARY I 19 nate library collections” (p. 224). Without exception, each of the definitions Saffady cites focuses on collections, repositories, or information resources. In another paper, Nurnberg, Furata, Leggett, Marshall, and Shipman (1995) ask “Why is a digital library called a library at all?” They state that the traditional physical library can provide a basis for discussing the digital library and arrive at this definition: the traditional library “deals with physical data” while the digital library works “primarily with digital data.” Once again, a definition that is striking in its neglect of service perspectives. In a paper presented at the Digital Libraries ’94 conference, Miksa and Doty (1994) again discuss the digital library as a “collection” or a series of collections. In another paper, Schatz and Chen (1996) state that digital libraries are “network information systems,” accessing resources “from and across large collections.” What do all these definitions of the “digital library” have in common? An emphasis on technology and information resources and a very noticeable lack of discussion of the service aspects of the digital library. Why is it important to take a look at how the digital library is defined? As more definitions of the digital library are published, with an absence of the service perspective and little treatment of the importance of librarian/ user collaboration, we perhaps draw closer to the Redundancy Theory (Hathorn, 1997) in which “the rise of digitized information threatens to make librarians practically obsolete.” People may well begin to believe that, as physical barriers to access to information are reduced through technological means, the services of the librarian are no longer as necessary. HUMAN OF THE DIGITAL ASPECTS IBRARY While considering the future, it sometimes is helpful to examine the past. As such, it might be useful to reflect on Jesse Shera’s oft-quoted definition of a library: “To bring together human beings and recorded knowledge in as fruitful a relationship as is humanly possible” (in Dysart &Jones, 1995, p. 16). Digital library proponents must consider the role of people (i.e., as users and service providers) if the digital library is to be truly beneficial. Technology and information resources on their own cannot make up an effective digital library. While a good deal of the literature on digital libraries emphasizes technology and resources at the expense of the service perspective, a number of authors and researchers have considered human interaction in the digital library environment. 
A number of studies at Lancaster University (Twidale, 1995, 1996; Twidale, Nichols, & Paice, 1996; Crabtree, Twidale, O’Brien, & Nichols, 1997; Nichols, Twidale, & Paice, 1997) have considered the importance of human interaction in the digital library. These studies focus on the social interactions of library users with librarians, librarians with librarians, and users with other users. By studying these collaborations in physical library settings, the authors have drawn some general conclusions that might be applied to digital library design: Collaboration between users, and between users and system personnel, is a significant element of searching in current information systems. The development of electronic libraries threatens existing forms of collaboration but also offers opportunities for new forms of collaboration. The sharing of both the search product and the search process are important for collaborative activities (including the education of searchers). There exist$ great potential for improving search effectiveness through the re-use of previous searches; this is one mechanism for adding value to existing databases. Browsing is not restricted to browsing for inanimate objects; browsing for people is also possible and could be a valuable source ofinformation. Searchers of databases need externalized help to reduce their cognitive load during the search process. This can be provided both by traditional paper-based technology and through computerized systems (Twidale et al., 1996). In a paper presented at the Digital Libraries ’94Conference, Ackerman (1994) stresses that, while the concept of the digital library “includes solving many of the technical and logistical issues in current libraries and information seeking,” it would be a mistake to consider solely the mechanical aspects of the library while ignoring the “useful social interactions in information seeking.” Ackerman outlines four ways in which social interaction can be helpful in the information-seeking process: 1. One may need to consult another person in order to know what to know (help in selecting information). 2. One may need to consult a person to obtain information that is transitory in nature and as such is unindexed (seeking informal information). 3. One may need to consult others for assistance in obtaining/understanding information that is highly contextual in nature rather than merely obtaining the information in a textual format (information seekers often have highly specific needs and interests). 4. Libraries serve important social functions, e.g., students and/or faculty meeting each other in hallways, study areas, etc. (socializing function). Ackerman notes that these points “all argue for the inclusion of some form of social interaction within the digital library. Such interaction should include not only librarians (or some human helper), but other users as well.” In a paper for the Digital Libraries ’96 Conference, Brewer, Ding, Hahn, ",
"title": ""
},
{
"docid": "53bbb6d5467574af4533607c95505ee4",
"text": "The synthesis of genetics-based machine learning and fuzzy logic is beginning to show promise as a potent tool in solving complex control problems in multi-variate non-linear systems. In this paper an overview of current research applying the genetic algorithm to fuzzy rule based control is presented. A novel approach to genetics-based machine learning of fuzzy controllers, called a Pittsburgh Fuzzy Classifier System # 1 (P-FCS1) is proposed. P-FCS1 is based on the Pittsburgh model of learning classifier systems and employs variable length rule-sets and simultaneously evolves fuzzy set membership functions and relations. A new crossover operator which respects the functional linkage between fuzzy rules with overlapping input fuzzy set membership functions is introduced. Experimental results using P-FCS l are reported and compared with other published results. Application of P-FCS1 to a distributed control problem (dynamic routing in computer networks) is also described and experimental results are presented.",
"title": ""
},
{
"docid": "f55e380c158ae01812f009fd81642d7f",
"text": "In this paper, we proposed a system to effectively create music mashups – a kind of re-created music that is made by mixing parts of multiple existing music pieces. Unlike previous studies which merely generate mashups by overlaying music segments on one single base track, the proposed system creates mashups with multiple background (e.g. instrumental) and lead (e.g. vocal) track segments. So, besides the suitability between the vertically overlaid tracks (i.e. vertical mashability) used in previous studies, we proposed to further consider the suitability between the horizontally connected consecutive music segments (i.e. horizontal mashability) when searching for proper music segments to be combined. On the vertical side, two new factors: “harmonic change balance” and “volume weight” have been considered. On the horizontal side, the methods used in the studies of medley creation are incorporated. Combining vertical and horizontal mashabilities together, we defined four levels of mashability that may be encountered and found the proper solution to each of them. Subjective evaluations showed that the proposed four levels of mashability can appropriately reflect the degrees of listening enjoyment. Besides, by taking the newly proposed vertical mashability measurement into account, the improvement in user satisfaction is statistically significant.",
"title": ""
},
{
"docid": "aa622e064469291fedfadfe36afe3aef",
"text": "Multiple kernel clustering (MKC), which performs kernel-based data fusion for data clustering, is an emerging topic. It aims at solving clustering problems with multiple cues. Most MKC methods usually extend existing clustering methods with a multiple kernel learning (MKL) setting. In this paper, we propose a novel MKC method that is different from those popular approaches. Centered kernel alignment—an effective kernel evaluation measure—is employed in order to unify the two tasks of clustering and MKL into a single optimization framework. To solve the formulated optimization problem, an efficient two-step iterative algorithm is developed. Experiments on several UCI datasets and face image datasets validate the effectiveness and efficiency of our MKC algorithm.",
"title": ""
},
{
"docid": "799883184a752a4f97eeb7ba474bbb8b",
"text": "This paper presents the design and implementation of a distributed virtual reality (VR) platform that was developed to support the training of multiple users who must perform complex tasks in which situation assessment and critical thinking are the primary components of success. The system is fully immersive and multimodal, and users are represented as tracked, full-body figures. The system supports the manipulation of virtual objects, allowing users to act upon the environment in a natural manner. The underlying intelligent simulation component creates an interactive, responsive world in which the consequences of such actions are presented within a realistic, time-critical scenario. The focus of this work has been on the training of medical emergency-response personnel. BioSimMER, an application of the system to training first responders to an act of bio-terrorism, has been implemented and is presented throughout the paper as a concrete example of how the underlying platform architecture supports complex training tasks. Finally, a preliminary field study was performed at the Texas Engineering Extension Service Fire Protection Training Division. The study focused on individual, rather than team, interaction with the system and was designed to gauge user acceptance of VR as a training tool. The results of this study are presented.",
"title": ""
},
{
"docid": "b9239e05f0544c83597a0204bf22ec30",
"text": "In this paper, two data mining algorithms are applied to build a churn prediction model using credit card data collected from a real Chinese bank. The contribution of four variable categories: customer information, card information, risk information, and transaction activity information are examined. The paper analyzes a process of dealing with variables when data is obtained from a database instead of a survey. Instead of considering the all 135 variables into the model directly, it selects the certain variables from the perspective of not only correlation but also economic sense. In addition to the accuracy of analytic results, the paper designs a misclassification cost measurement by taking the two types error and the economic sense into account, which is more suitable to evaluate the credit card churn prediction model. The algorithms used in this study include logistic regression and decision tree which are proven mature and powerful classification algorithms. The test result shows that regression performs a little better than decision tree. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "19a9d9286f5af35bac3e051e9bc5213b",
"text": "The healthcare environment is more and more data enriched, but the amount of knowledge getting from those data is very less, because lack of data analysis tools. We need to get the hidden relationships from the data. In the healthcare system to predict the heart attack perfectly, there are some techniques which are already in use. There is some lack of accuracy in the available techniques like Naïve Bayes. Here, this paper proposes the system which uses neural network and Decision tree (ID3) to predict the heart attacks. Here the dataset with 6 attributes is used to diagnose the heart attacks. The dataset used is acath heart attack dataset provided by UCI machine learning repository. The results of the prediction give more accurate output than the other techniques.",
"title": ""
},
{
"docid": "aa54c82efcb94caf8fd224f362631167",
"text": "A current-reused quadrature voltage-controlled oscillator (CR-QVCO) is proposed with the cross-coupled transformer-feedback technology for the quadrature signal generation. This CR-QVCO has the advantages of low-voltage/low-power operation with an adequate phase noise performance. A compact differential three-port transformer, in which two half-circle secondary coils are carefully designed to optimize the effective turn ratio and the coupling factor, is newly constructed to satisfy the need of signal coupling and to save the area consumption simultaneously. The quadrature oscillator providing a center frequency of 7.128 GHz for the ultrawideband (UWB) frequency synthesizer use is demonstrated in a 0.18 mum RF CMOS technology. The oscillator core dissipates 2.2 mW from a 1 V supply and occupies an area of 0.48 mm2. A tuning range of 330 MHz (with a maximum control voltage of 1.8 V) can be achieved to stand the frequency shift caused by the process variation. The measured phase noise is -111.2 dBc/Hz at 1 MHz offset from the center frequency. The IQ phase error shown is less than 2deg. The calculated figure-of-merit (FOM) is 184.8 dB.",
"title": ""
},
{
"docid": "a245aca07bd707ee645cf5cb283e7c5e",
"text": "The paradox of blunted parathormone (PTH) secretion in patients with severe hypomagnesemia has been known for more than 20 years, but the underlying mechanism is not deciphered. We determined the effect of low magnesium on in vitro PTH release and on the signals triggered by activation of the calcium-sensing receptor (CaSR). Analogous to the in vivo situation, PTH release from dispersed parathyroid cells was suppressed under low magnesium. In parallel, the two major signaling pathways responsible for CaSR-triggered block of PTH secretion, the generation of inositol phosphates, and the inhibition of cAMP were enhanced. Desensitization or pertussis toxin-mediated inhibition of CaSR-stimulated signaling suppressed the effect of low magnesium, further confirming that magnesium acts within the axis CaSR-G-protein. However, the magnesium binding site responsible for inhibition of PTH secretion is not identical with the extracellular ion binding site of the CaSR, because the magnesium deficiency-dependent signal enhancement was not altered on CaSR receptor mutants with increased or decreased affinity for calcium and magnesium. By contrast, when the magnesium affinity of the G alpha subunit was decreased, CaSR activation was no longer affected by magnesium. Thus, the paradoxical block of PTH release under magnesium deficiency seems to be mediated through a novel mechanism involving an increase in the activity of G alpha subunits of heterotrimeric G-proteins.",
"title": ""
},
{
"docid": "126d8080f7dd313d534a95d8989b0fbd",
"text": "Intrusion prevention mechanisms are largely insufficient for protection of databases against Information Warfare attacks by authorized users and has drawn interest towards intrusion detection. We visualize the conflicting motives between an attacker and a detection system as a multi-stage game between two players, each trying to maximize his payoff. We consider the specific application of credit card fraud detection and propose a fraud detection system based on a game-theoretic approach. Not only is this approach novel in the domain of Information Warfare, but also it improvises over existing rule-based systems by predicting the next move of the fraudster and learning at each step.",
"title": ""
},
{
"docid": "13d8ce0c85befb38e6f2da583ac0295b",
"text": "The addition of sensors to wearable computers allows them to adapt their functions to more suit the activities and situation of their wearers. A wearable sensor badge is described constructed from (hard) electronic components, which can sense perambulatory activities for context-awareness. A wearable sensor jacket is described that uses advanced knitting techniques to form (soft) fabric stretch sensors positioned to measure upper limb and body movement. Worn on-the-hip, or worn as clothing, these unobtrusive sensors supply abstract information about your current activity to your other wearable computers.",
"title": ""
},
{
"docid": "e4e2bb8bf8cc1488b319a59f82a71f08",
"text": "We aim to dismantle the prevalent black-box neural architectures used in complex visual reasoning tasks, into the proposed eXplainable and eXplicit Neural Modules (XNMs), which advance beyond existing neural module networks towards using scene graphs — objects as nodes and the pairwise relationships as edges — for explainable and explicit reasoning with structured knowledge. XNMs allow us to pay more attention to teach machines how to “think”, regardless of what they “look”. As we will show in the paper, by using scene graphs as an inductive bias, 1) we can design XNMs in a concise and flexible fashion, i.e., XNMs merely consist of 4 meta-types, which significantly reduce the number of parameters by 10 to 100 times, and 2) we can explicitly trace the reasoning-flow in terms of graph attentions. XNMs are so generic that they support a wide range of scene graph implementations with various qualities. For example, when the graphs are detected perfectly, XNMs achieve 100% accuracy on both CLEVR and CLEVR CoGenT, establishing an empirical performance upper-bound for visual reasoning; when the graphs are noisily detected from real-world images, XNMs are still robust to achieve a competitive 67.5% accuracy on VQAv2.0, surpassing the popular bag-of-objects attention models without graph structures.",
"title": ""
}
] |
scidocsrr
|
34b3a30cb068e4dacb3475ae56713a9c
|
Convolutional RNN: An enhanced model for extracting features from sequential data
|
[
{
"docid": "56321ec6dfc3d4c55fc99125e942cf44",
"text": "The last decade has seen a substantial body of literature on the recognition of emotion from speech. However, in comparison to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test-conditions exist to compare performances under exactly the same conditions. Instead a multiplicity of evaluation strategies employed – such as cross-validation or percentage splits without proper instance definition – prevents exact reproducibility. Further, in order to face more realistic scenarios, the community is in desperate need of more spontaneous and less prototypical data. This INTERSPEECH 2009 Emotion Challenge aims at bridging such gaps between excellent research on human emotion recognition from speech and low compatibility of results. The FAU Aibo Emotion Corpus [1] serves as basis with clearly defined test and training partitions incorporating speaker independence and different room acoustics as needed in most reallife settings. This paper introduces the challenge, the corpus, the features, and benchmark results of two popular approaches towards emotion recognition from speech.",
"title": ""
}
] |
[
{
"docid": "9b0bddb295cd7485ae9c3bfcf3b639a3",
"text": "Graphics processing units (GPUs) continue to grow in popularity for general-purpose, highly parallel, high-throughput systems. This has forced GPU vendors to increase their focus on general purpose workloads, sometimes at the expense of the graphics-specific workloads. Using GPUs for general-purpose computation is a departure from the driving forces behind programmable GPUs that were focused on a narrow subset of graphics rendering operations. Rather than focus on purely graphics-related or general-purpose use, we have designed and modeled an architecture that optimizes for both simultaneously to efficiently handle all GPU workloads. In this paper, we present Nyami, a co-optimized GPU architecture and simulation model with an open-source implementation written in Verilog. This approach allows us to more easily explore the GPU design space in a synthesizable, cycle-precise, modular environment. An instruction-precise functional simulator is provided for co-simulation and verification. Overall, we assume a GPU may be used as a general-purpose GPU (GPGPU) or a graphics engine and account for this in the architecture's construction and in the options and modules selectable for synthesis and simulation. To demonstrate Nyami's viability as a GPU research platform, we exploit its flexibility and modularity to explore the impact of a set of architectural decisions. These include sensitivity to cache size and associativity, barrel and switch-on-stall multithreaded instruction scheduling, and software vs. hardware implementations of rasterization. Through these experiments, we gain insight into commonly accepted GPU architecture decisions, adapt the architecture accordingly, and give examples of the intended use as a GPU research tool.",
"title": ""
},
{
"docid": "cc6e7b82468243d7f92861fa155c10ee",
"text": "Road throughput can be increased by driving at small inter-vehicle time gaps. The amplification of velocity disturbances in upstream direction, however, poses limitations to the minimum feasible time gap. String-stable behavior is thus considered an essential requirement for the design of automatic distance control systems, which are needed to allow for safe driving at time gaps well below 1 s. Theoretical analysis reveals that this requirement can be met using wireless inter-vehicle communication to provide real-time information of the preceding vehicle, in addition to the information obtained by common Adaptive Cruise Control (ACC) sensors. In order to validate these theoretical results and to demonstrate the technical feasibility, the resulting control system, known as Cooperative ACC (CACC), is implemented on a test fleet consisting of six passenger vehicles. Experiments clearly show that the practical results match the theoretical analysis, thereby indicating the possibilities for short-distance vehicle following.",
"title": ""
},
{
"docid": "a8af37df01ad45139589e82bd81deb61",
"text": "As technology use continues to rise, especially among young individuals, there are concerns that excessive use of technology may impact academic performance. Researchers have started to investigate the possible negative effects of technology use on college academic performance, but results have been mixed. The following study seeks to expand upon previous studies by exploring the relationship among the use of a wide variety of technology forms and an objective measure of academic performance (GPA) using a 7-day time diary data collection method. The current study also seeks to examine both underclassmen and upperclassmen to see if these groups differ in how they use technology. Upperclassmen spent significantly more time using technology for academic and workrelated purposes, whereas underclassmen spent significantly more time using cell phones, online chatting, and social networking sites. Significant negative correlations with GPA emerged for television, online gaming, adult site, and total technology use categories. Keyword: Technology use, academic performance, post-secondary education.",
"title": ""
},
{
"docid": "93b880dbc635a49ffc7a9e6906b094f6",
"text": "Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation and the bytecode is left largely architecture independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just reexecuting the calibration stage instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization",
"title": ""
},
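The passage above splits the problem into a one-time, program-independent calibration of per-instruction execution times and a compile-time phase that combines those timings with instruction counts expressed as functions of input size. The snippet below is a minimal Python sketch of that combination step only; the instruction names, timing constants, and count functions are illustrative assumptions, not values from the paper.

```python
# Sketch: combine per-instruction timing bounds (from a one-time calibration) with
# instruction-count functions of the input size n to bound total execution time.
# Instruction names, timings, and count functions are illustrative assumptions.

# Calibrated (lower, upper) time per abstract-machine instruction, e.g. in nanoseconds.
INSTR_TIME = {
    "call":    (12.0, 18.0),
    "unify":   (5.0, 9.0),
    "builtin": (20.0, 35.0),
}

def instr_counts(n):
    """Instruction counts as functions of the input data size n (compile-time inference)."""
    return {"call": n + 1, "unify": 2 * n, "builtin": n}

def execution_time_bounds(n):
    """Return (lower, upper) execution-time bounds for input size n."""
    lo = hi = 0.0
    for instr, count in instr_counts(n).items():
        t_lo, t_hi = INSTR_TIME[instr]
        lo += count * t_lo
        hi += count * t_hi
    return lo, hi

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, execution_time_bounds(n))
```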
{
"docid": "a5911891697a1b2a407f231cf0ad6c28",
"text": "In this paper, a new control method for the parallel operation of inverters operating in an island grid or connected to an infinite bus is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between inverters. Each inverter supplies a current that is the result of the voltage difference between a reference ac voltage source and the grid voltage across a virtual complex impedance. The reference ac voltage source is synchronized with the grid, with a phase shift, depending on the difference between rated and actual grid frequency. A detailed analysis shows that this approach has a superior behavior compared to existing methods, regarding the mitigation of voltage harmonics, short-circuit behavior and the effectiveness of the frequency and voltage control, as it takes the R to X line impedance ratio into account. Experiments show the behavior of the method for an inverter feeding a highly nonlinear load and during the connection of two parallel inverters in operation.",
"title": ""
},
{
"docid": "d4774f784e3b439dfb77b0f10a8c4950",
"text": "As consequence of the considerable increase of the electrical power demand in vehicles, the adoption of a combined direct-drive starter/alternator system is being seriously pursued and a new generation of vehicle alternators delivering power up to 6 kW over the entire range of the engine speed is soon expected for use with connection to a 42 V bus. The surface permanent magnet (SPM) machines offer many of the features sought for such future automotive power generation systems, and thereby a substantial improvement in the control of their output voltage would allow the full exploitation of their attractive characteristics in the direct-drive starter/alternator application without significant penalties otherwise resulting on the machine-fed power converter. Concerning that, this paper reports on the original solution adopted in a proof-of-concept axial-flux permanent magnet machine (AFPM) prototype to provide weakening of the flux linkage with speed and thereby achieve constant-power operation over a wide speed range. The principle being utilized is introduced and described, including design dimensions and experimental data taken from the proof-of-concept machine prototype.",
"title": ""
},
{
"docid": "e34ba7711bf03aedfc34ce3b7c4335b3",
"text": "Graph layout problems are a particular class of combinatorial optimization problems whose goal is to find a linear layout of an input graph in such way that a certain objective cost is optimized. This survey considers their motivation, complexity, approximation properties, upper and lower bounds, heuristics and probabilistic analysis on random graphs. The result is a complete view of the current state of the art with respect to layout problems from an algorithmic point of view.",
"title": ""
},
{
"docid": "57d5b69473898b0ae31fcb2f7b0660af",
"text": "This paper describes an approach for managing the interaction of human users with computer-controlled agents in an interactive narrative-oriented virtual environment. In these kinds of systems, the freedom of the user to perform whatever action she desires must be balanced with the preservation of the storyline used to control the system's characters. We describe a technique, narrative mediation, that exploits a plan-based model of narrative structure to manage and respond to users' actions inside a virtual world. We define two general classes of response to situations where users execute actions that interfere with story structure: accommodation and intervention. Finally, we specify an architecture that uses these definitions to monitor and automatically characterize user actions, and to compute and implement responses to unanticipated activity. The approach effectively integrates user action and system response into the unfolding narrative, providing for the balance between a user's sense of control within the story world and the user's sense of coherence of the overall narrative.",
"title": ""
},
{
"docid": "6254241cb765d5a280c5f4fb9d599944",
"text": "Photodegradation is an abiotic process in the dissipation of pesticides where molecular excitation by absorption of light energy results in various organic reactions, or reactive oxygen species such as OH*, O3, and 1O2 specifically or nonspecifically oxidize the functional groups in a pesticide molecule. In the case of soil photolysis, the heterogeneity of soil together with soil properties varying with meteorological conditions makes photolytic processes difficult to understand. In contrast to solution photolysis, where light is attenuated by solid particles, both absorption and emission profiles of a pesticide are modified through interaction with soil components such as adsorption to clay minerals or solubilization to humic substances. Diffusion of a pesticide molecule results in heterogeneous concentration in soil, and either steric constraint or photoinduced generation of reactive species under the limited mobility sometimes modifies degradation mechanisms. Extensive investigations of meteorological effects on soil moisture and temperature as well as development of an elaborate testing chamber controlling these factors seems to provide better conditions for researchers to examine the photodegradation of pesticides on soil under conditions similar to the real environment. However, the mechanistic analysis of photodegradation has just begun, and there still remain many issues to be clarified. For example, how photoprocesses affect the electronic states of pesticide molecules on soil or how the reactive oxygen species are generated on soil via interaction with clay minerals and humic substances should be investigated in greater detail. From this standpoint, the application of diffuse reflectance spectroscopy and usage or development of various probes to trap intermediate species is highly desired. Furthermore, only limited information is yet available on the reactions of pesticides on soil with atmospheric chemical species. For photodegradation on plants, the importance of an emission spectrum of the light source near its surface was clarified. Most photochemical information comes from photolysis in organic solvents or on glass surfaces and/or plant metabolism studies. Epicuticular waxes may be approximated by long-chain hydrocarbons as a very viscous liquid or solid, but the existing form of pesticide molecules in waxes is still obscure. Either coexistence of formulation agents or steric constraint in the rigid medium would cause a change of molecular excitation, deactivation, and photodegradation mechanisms, which should be further investigated to understand the dissipation profiles of a pesticide in or on crops in the field. A thin-layer system with a coat of epicuticular waxes extracted from leaves or isolated cuticles has been utilized as a model, but its application has been very limited. There appear to be gaps in our knowledge about the surface chemistry and photochemistry of pesticides in both rigid media and plant metabolism. Photodegradation studies, for example, by using these models to eliminate contribution from metabolic conversion as much as possible, should be extensively conducted in conjunction with wax chemistry, with the controlling factors being clarified. As with soil surfaces, the effects of atmospheric oxidants should also be investigated. Based on this knowledge, new methods of kinetic analysis or a device simulating the fate of pesticides on these surfaces could be more rationally developed. 
Concerning soil photolysis, detailed mechanistic analysis of the mobility and fate of pesticides together with volatilization from soil surfaces has been initiated and its spatial distribution with time has been simulated with reasonable precision on a laboratory scale. Although mechanistic analyses have been conducted on penetration of pesticides through cuticular waxes, its combination with photodegradation to simulate the real environment is awaiting further investigation.",
"title": ""
},
{
"docid": "b5c2e36e805f3ca96cde418137ed0239",
"text": "PURPOSE\nTo report a novel method for measuring the degree of inferior oblique muscle overaction and to investigate the correlation with other factors.\n\n\nDESIGN\nCross-sectional diagnostic study.\n\n\nMETHODS\nOne hundred and forty-two eyes (120 patients) were enrolled in this study. Subjects underwent a full orthoptic examination and photographs were obtained in the cardinal positions of gaze. The images were processed using Photoshop and analyzed using the ImageJ program to measure the degree of inferior oblique muscle overaction. Reproducibility or interobserver variability was assessed by Bland-Altman plots and by calculation of the intraclass correlation coefficient (ICC). The correlation between the degree of inferior oblique muscle overaction and the associated factors was estimated with linear regression analysis.\n\n\nRESULTS\nThe mean angle of inferior oblique muscle overaction was 17.8 ± 10.1 degrees (range, 1.8-54.1 degrees). The 95% limit of agreement of interobserver variability for the degree of inferior oblique muscle overaction was ±1.76 degrees, and ICC was 0.98. The angle of inferior oblique muscle overaction showed significant correlation with the clinical grading scale (R = 0.549, P < .001) and with hypertropia in the adducted position (R = 0.300, P = .001). The mean angles of inferior oblique muscle overaction classified into grades 1, 2, 3, and 4 according to the clinical grading scale were 10.5 ± 9.1 degrees, 16.8 ± 7.8 degrees, 24.3 ± 8.8 degrees, and 40.0 ± 12.2 degrees, respectively (P < .001).\n\n\nCONCLUSIONS\nWe describe a new method for measuring the degree of inferior oblique muscle overaction using photographs of the cardinal positions. It has the potential to be a diagnostic tool that measures inferior oblique muscle overaction with minimal observer dependency.",
"title": ""
},
{
"docid": "6e707e17ce2079a9c7cf5c02cd1744c7",
"text": "A data-driven identification of dynamical systems requiring only minimal prior knowledge is promising whenever no analytically derived model structure is available, e.g., from first principles in physics. However, meta-knowledge on the system’s behavior is often given and should be exploited: Stability as fundamental property is essential when the model is used for controller design or movement generation. Therefore, this paper proposes a framework for learning stable stochastic systems from data. We focus on identifying a state-dependent coefficient form of the nonlinear stochastic model which is globally asymptotically stable according to probabilistic Lyapunov methods. We compare our approach to other state of the art methods on real-world datasets in terms of flexibility and stability.",
"title": ""
},
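The passage above certifies global asymptotic stability with probabilistic Lyapunov arguments for nonlinear stochastic models. As a much-simplified, hedged illustration of the underlying Lyapunov idea only, the sketch below checks stability of a deterministic discrete-time linear system by solving the discrete Lyapunov equation with SciPy; it does not reproduce the paper's state-dependent coefficient form or its stochastic analysis, and the matrix A is an arbitrary example.

```python
# Sketch: Lyapunov-based stability check for a discrete-time linear system
# x_{k+1} = A x_k.  A deterministic, linear simplification of the probabilistic
# Lyapunov analysis discussed above; A and Q below are arbitrary illustrative choices.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.8, 0.2],
              [-0.1, 0.7]])
Q = np.eye(2)  # any positive-definite Q works

# Solve A P A^T - P + Q = 0 for the Lyapunov certificate P.
P = solve_discrete_lyapunov(A, Q)

spectral_radius = max(abs(np.linalg.eigvals(A)))
p_eigs = np.linalg.eigvalsh(P)

print("spectral radius:", spectral_radius)               # < 1 => asymptotically stable
print("P positive definite:", bool(np.all(p_eigs > 0)))  # consistent Lyapunov certificate
```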
{
"docid": "07295446da02d11750e05f496be44089",
"text": "As robots become more ubiquitous and capable of performing complex tasks, the importance of enabling untrained users to interact with them has increased. In response, unconstrained natural-language interaction with robots has emerged as a significant research area. We discuss the problem of parsing natural language commands to actions and control structures that can be readily implemented in a robot execution system. Our approach learns a parser based on example pairs of English commands and corresponding control language expressions. We evaluate this approach in the context of following route instructions through an indoor environment, and demonstrate that our system can learn to translate English commands into sequences of desired actions, while correctly capturing the semantic intent of statements involving complex control structures. The procedural nature of our formal representation allows a robot to interpret route instructions online while moving through a previously unknown environment. 1 Motivation and Problem Statement In this paper, we discuss our work on grounding natural language–interpreting human language into semantically informed structures in the context of robotic perception and actuation. To this end, we explore the question of interpreting natural language commands so they can be executed by a robot, specifically in the context of following route instructions through a map. Natural language (NL) is a rich, intuitive mechanism by which humans can interact with systems around them, offering sufficient signal to support robot task planning. Human route instructions include complex language constructs, which robots must be able to execute without being given a fully specified world model such as a map. Our goal is to investigate whether it is possible to learn a parser that produces · All authors are affiliated with the University of Washington, Seattle, USA. · Email: {cynthia,eherbst,lsz,fox}@cs.washington.edu",
"title": ""
},
{
"docid": "db42b2c5b9894943c3ba05fad07ee2f9",
"text": "This paper deals principally with the grid connection problem of a kite-based system, named the “Kite Generator System (KGS).” It presents a control scheme of a closed-orbit KGS, which is a wind power system with a relaxation cycle. Such a system consists of a kite with its orientation mechanism and a power transformation system that connects the previous part to the electric grid. Starting from a given closed orbit, the optimal tether's length rate variation (the kite's tether radial velocity) and the optimal orbit's period are found. The trajectory-tracking problem is not considered in this paper; only the kite's tether radial velocity is controlled via the electric machine rotation velocity. The power transformation system transforms the mechanical energy generated by the kite into electrical energy that can be transferred to the grid. A Matlab/simulink model of the KGS is employed to observe its behavior, and to insure the control of its mechanical and electrical variables. In order to improve the KGS's efficiency in case of slow changes of wind speed, a maximum power point tracking (MPPT) algorithm is proposed.",
"title": ""
},
{
"docid": "64a3877186106c911891f4f6fe7fbede",
"text": "In this paper, we present a multimodal emotion recognition framework called EmotionMeter that combines brain waves and eye movements. To increase the feasibility and wearability of EmotionMeter in real-world applications, we design a six-electrode placement above the ears to collect electroencephalography (EEG) signals. We combine EEG and eye movements for integrating the internal cognitive states and external subconscious behaviors of users to improve the recognition accuracy of EmotionMeter. The experimental results demonstrate that modality fusion with multimodal deep neural networks can significantly enhance the performance compared with a single modality, and the best mean accuracy of 85.11% is achieved for four emotions (happy, sad, fear, and neutral). We explore the complementary characteristics of EEG and eye movements for their representational capacities and identify that EEG has the advantage of classifying happy emotion, whereas eye movements outperform EEG in recognizing fear emotion. To investigate the stability of EmotionMeter over time, each subject performs the experiments three times on different days. EmotionMeter obtains a mean recognition accuracy of 72.39% across sessions with the six-electrode EEG and eye movement features. These experimental results demonstrate the effectiveness of EmotionMeter within and between sessions.",
"title": ""
},
{
"docid": "5ce93a1c09b4da41f0cc920d5c7e6bdc",
"text": "Humanitarian operations comprise a wide variety of activities. These activities differ in temporal and spatial scope, as well as objectives, target population and with respect to the delivered goods and services. Despite a notable variety of agendas of the humanitarian actors, the requirements on the supply chain and supporting logistics activities remain similar to a large extent. This motivates the development of a suitably generic reference model for supply chain processes in the context of humanitarian operations. Reference models have been used in commercial environments for a range of purposes, such as analysis of structural, functional, and behavioural properties of supply chains. Our process reference model aims to support humanitarian organisations when designing appropriately adapted supply chain processes to support their operations, visualising their processes, measuring their performance and thus, improving communication and coordination of organisations. A top-down approach is followed in which modular process elements are developed sequentially and relevant performance measures are identified. This contribution is conceptual in nature and intends to lay the foundation for future research.",
"title": ""
},
{
"docid": "b4f2d62f5c99fc3fb2b8c548adb71578",
"text": "The successful motor rehabilitation of stroke patients requires early intensive and task-specific therapy. A recent Cochrane Review, although based on a limited number of randomized controlled trials (RCTs), showed that early robotic training of the upper limb (i.e., during acute or subacute phase) can enhance motor learning and improve functional abilities more than chronic-phase training. In this article, a new subacute-phase RCT with the Neuro-Rehabilitation-roBot (NeReBot) is presented. While in our first study we used the NeReBot in addition to conventional therapy, in this new trial we used the same device in substitution of standard proximal upper-limb rehabilitation. With this protocol, robot patients achieved similar reductions in motor impairment and enhancements in paretic upper-limb function to those gained by patients in a control group. By analyzing these results and those of previous studies, we hypothesize a new robotic protocol for acute and subacute stroke patients based on both treatment modalities (in addition and in substitution).",
"title": ""
},
{
"docid": "e19d60d8638f1afa26830c4fe06a1c53",
"text": "An option is a short-term skill consisting of a control policy for a specified region of the state space, and a termination condition recognizing leaving that region. In prior work, we proposed an algorithm called Deep Discovery of Options (DDO) to discover options to accelerate reinforcement learning in Atari games. This paper studies an extension to robot imitation learning, called Discovery of Deep Continuous Options (DDCO), where low-level continuous control skills parametrized by deep neural networks are learned from demonstrations. We extend DDO with: (1) a hybrid categorical–continuous distribution model to parametrize high-level policies that can invoke discrete options as well continuous control actions, and (2) a cross-validation method that relaxes DDO’s requirement that users specify the number of options to be discovered. We evaluate DDCO in simulation of a 3-link robot in the vertical plane pushing a block with friction and gravity, and in two physical experiments on the da Vinci surgical robot, needle insertion where a needle is grasped and inserted into a silicone tissue phantom, and needle bin picking where needles and pins are grasped from a pile and categorized into bins. In the 3-link arm simulation, results suggest that DDCO can take 3x fewer demonstrations to achieve the same reward compared to a baseline imitation learning approach. In the needle insertion task, DDCO was successful 8/10 times compared to the next most accurate imitation learning baseline 6/10. In the surgical bin picking task, the learned policy successfully grasps a single object in 66 out of 99 attempted grasps, and in all but one case successfully recovered from failed grasps by retrying a second time.",
"title": ""
},
{
"docid": "a4e122d0b827d25bea48d41487437d74",
"text": "We introduce UniAuth, a set of mechanisms for streamlining authentication to devices and web services. With UniAuth, a user first authenticates himself to his UniAuth client, typically his smartphone or wearable device. His client can then authenticate to other services on his behalf. In this paper, we focus on exploring the user experiences with an early iPhone prototype called Knock x Knock. To manage a variety of accounts securely in a usable way, Knock x Knock incorporates features not supported in existing password managers, such as tiered and location-aware lock control, authentication to laptops via knocking, and storing credentials locally while working with laptops seamlessly. In two field studies, 19 participants used Knock x Knock for one to three weeks with their own devices and accounts. Our participants were highly positive about Knock x Knock, demonstrating the desirability of our approach. We also discuss interesting edge cases and design implications.",
"title": ""
},
{
"docid": "bd2fcdd0b7139bf719f1ec7ffb4fe5d5",
"text": "Much is known on how facial expressions of emotion are produced, including which individual muscles are most active in each expression. Yet, little is known on how this information is interpreted by the human visual system. This paper presents a systematic study of the image dimensionality of facial expressions of emotion. In particular, we investigate how recognition degrades when the resolution of the image (i.e., number of pixels when seen as a 5.3 by 8 degree stimulus) is reduced. We show that recognition is only impaired in practice when the image resolution goes below 20 × 30 pixels. A study of the confusion tables demonstrates that each expression of emotion is consistently confused by a small set of alternatives and that the confusion is not symmetric, i.e., misclassifying emotion a as b does not imply we will mistake b for a. This asymmetric pattern is consistent over the different image resolutions and cannot be explained by the similarity of muscle activation. Furthermore, although women are generally better at recognizing expressions of emotion at all resolutions, the asymmetry patterns are the same. We discuss the implications of these results for current models of face perception.",
"title": ""
},
{
"docid": "0d3403ce2d1613c1ea6b938b3ba9c5e6",
"text": "Extracting a set of generalizable rules that govern the dynamics of complex, high-level interactions between humans based only on observations is a high-level cognitive ability. Mastery of this skill marks a significant milestone in the human developmental process. A key challenge in designing such an ability in autonomous robots is discovering the relationships among discriminatory features. Identifying features in natural scenes that are representative of a particular event or interaction (i.e. »discriminatory features») and then discovering the relationships (e.g., temporal/spatial/spatio-temporal/causal) among those features in the form of generalized rules are non-trivial problems. They often appear as a »chicken-and-egg» dilemma. This paper proposes an end-to-end learning framework to tackle these two problems in the context of learning generalized, high-level rules of human interactions from structured demonstrations. We employed our proposed deep reinforcement learning framework to learn a set of rules that govern a behavioral intervention session between two agents based on observations of several instances of the session. We also tested the accuracy of our framework with human subjects in diverse situations.",
"title": ""
}
] |
scidocsrr
|
b099481d2b08abd95c109ef958328a6a
|
Adversarial Attack? Don't Panic
|
[
{
"docid": "88a1549275846a4fab93f5727b19e740",
"text": "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.",
"title": ""
},
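For an affine binary classifier f(x) = w^T x + b, the minimal L2 perturbation that DeepFool seeks has a closed form: the projection of x onto the decision boundary. The sketch below implements just that linear special case in NumPy with illustrative weights and input; the full algorithm iterates this step on a local linearization of a deep, multiclass network.

```python
# Sketch: DeepFool's minimal L2 perturbation for an affine binary classifier
# f(x) = w.x + b.  The boundary projection of x is x - f(x)/||w||^2 * w.
# Weights and input below are illustrative.
import numpy as np

def deepfool_affine(x, w, b, overshoot=0.02):
    """Return a perturbation r such that sign(f(x + r)) != sign(f(x))."""
    f_x = float(w @ x + b)
    r = -f_x / np.dot(w, w) * w
    return (1.0 + overshoot) * r  # small overshoot pushes x just past the boundary

w = np.array([1.0, -2.0, 0.5])
b = 0.3
x = np.array([0.2, -0.4, 1.0])

r = deepfool_affine(x, w, b)
print("f(x)     =", w @ x + b)
print("f(x + r) =", w @ (x + r) + b)   # opposite sign, tiny perturbation
print("||r||_2  =", np.linalg.norm(r))
```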
{
"docid": "41d5b01cf6f731db0752af0953395327",
"text": "Deep neural networks have been shown to suffer from a surprising weakness: their classification outputs can be changed by small, non-random perturbations of their inputs. This adversarial example phenomenon has been explained as originating from deep networks being “too linear” (Goodfellow et al., 2014). We show here that the linear explanation of adversarial examples presents a number of limitations: the formal argument is not convincing; linear classifiers do not always suffer from the phenomenon, and when they do their adversarial examples are different from the ones affecting deep networks. We propose a new perspective on the phenomenon. We argue that adversarial examples exist when the classification boundary lies close to the submanifold of sampled data, and present a mathematical analysis of this new perspective in the linear case. We define the notion of adversarial strength and show that it can be reduced to the deviation angle between the classifier considered and the nearest centroid classifier. Then, we show that the adversarial strength can be made arbitrarily high independently of the classification performance due to a mechanism that we call boundary tilting. This result leads us to defining a new taxonomy of adversarial examples. Finally, we show that the adversarial strength observed in practice is directly dependent on the level of regularisation used and the strongest adversarial examples, symptomatic of overfitting, can be avoided by using a proper level of regularisation.",
"title": ""
},
{
"docid": "21a68f76ed6d18431f446398674e4b4e",
"text": "With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks (DNNs) have been recently found vulnerable to well-designed input samples called adversarial examples. Adversarial perturbations are imperceptible to human but can easily fool DNNs in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying DNNs in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for DNNs, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.",
"title": ""
},
{
"docid": "17611b0521b69ad2b22eeadc10d6d793",
"text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.",
"title": ""
},
{
"docid": "b12bae586bc49a12cebf11cca49c0386",
"text": "Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations—small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93% ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.",
"title": ""
}
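One of the two signals combined above is a density estimate over deep features: adversarial inputs tend to land in low-density regions of the learned feature manifold. The sketch below illustrates that idea with a plain Gaussian kernel density estimate on simulated features; the feature dimension, data, and 5% threshold are illustrative assumptions, and no actual network or dropout-uncertainty estimate is involved.

```python
# Sketch: density-based flagging of out-of-manifold samples, one of the two signals
# combined in the passage above.  'Deep features' are simulated here; in practice they
# would be activations from the last hidden layer of the trained network.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Features of (simulated) training samples, shape (dim, n_samples) as gaussian_kde expects.
train_feats = rng.normal(loc=0.0, scale=1.0, size=(5, 500))
kde = gaussian_kde(train_feats)

normal_feat = rng.normal(0.0, 1.0, size=(5, 1))        # looks like training data
adversarial_feat = rng.normal(4.0, 1.0, size=(5, 1))   # far from the data manifold

threshold = np.quantile(kde(train_feats), 0.05)        # flag the lowest-density 5%

for name, feat in [("normal", normal_feat), ("adversarial-like", adversarial_feat)]:
    density = kde(feat)[0]
    print(name, "density:", density, "flagged:", density < threshold)
```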
] |
[
{
"docid": "71d744aefd254acfc24807d805fb066b",
"text": "Bitcoin provides only pseudo-anonymous transactions, which can be exploited to link payers and payees -- defeating the goal of anonymous payments. To thwart such attacks, several Bitcoin mixers have been proposed, with the objective of providing unlinkability between payers and payees. However, existing Bitcoin mixers can be regarded as either insecure or inefficient.\n We present Obscuro, a highly efficient and secure Bitcoin mixer that utilizes trusted execution environments (TEEs). With the TEE's confidentiality and integrity guarantees for code and data, our mixer design ensures the correct mixing operations and the protection of sensitive data (i.e., private keys and mixing logs), ruling out coin theft and address linking attacks by a malicious service provider. Yet, the TEE-based implementation does not prevent the manipulation of inputs (e.g., deposit submissions, blockchain feeds) to the mixer, hence Obscuro is designed to overcome such limitations: it (1) offers an indirect deposit mechanism to prevent a malicious service provider from rejecting benign user deposits; and (2) scrutinizes blockchain feeds to prevent deposits from being mixed more than once (thus degrading anonymity) while being eclipsed from the main blockchain branch. In addition, Obscuro provides several unique anonymity features (e.g., minimum mixing set size guarantee, resistant to dropping user deposits) that are not available in existing centralized and decentralized mixers.\n Our prototype of Obscuro is built using Intel SGX and we demonstrate its effectiveness in Bitcoin Testnet. Our implementation mixes 1000 inputs in just 6.49 seconds, which vastly outperforms all of the existing decentralized mixers.",
"title": ""
},
{
"docid": "3392de95bfc0e16776550b2a0a8fa00e",
"text": "This paper presents a new type of three-phase voltage source inverter (VSI), called three-phase dual-buck inverter. The proposed inverter does not need dead time, and thus avoids the shoot-through problems of traditional VSIs, and leads to greatly enhanced system reliability. Though it is still a hard-switching inverter, the topology allows the use of power MOSFETs as the active devices instead of IGBTs typically employed by traditional hard-switching VSIs. As a result, the inverter has the benefit of lower switching loss, and it can be designed at higher switching frequency to reduce current ripple and the size of passive components. A unified pulsewidth modulation (PWM) is introduced to reduce computational burden in real-time implementation. Different PWM methods were applied to a three-phase dual-buck inverter, including sinusoidal PWM (SPWM), space vector PWM (SVPWM) and discontinuous space vector PWM (DSVPWM). A 2.5 kW prototype of a three-phase dual-buck inverter and its control system has been designed and tested under different dc bus voltage and modulation index conditions to verify the feasibility of the circuit, the effectiveness of the controller, and to compare the features of different PWMs. Efficiency measurement of different PWMs has been conducted, and the inverter sees peak efficiency of 98.8% with DSVPWM.",
"title": ""
},
{
"docid": "df8885ad4dbf2a8c1cfa4dc2ddd33975",
"text": "Many recent state-of-the-art recommender systems such as D-ATT, TransNet and DeepCoNN exploit reviews for representation learning. This paper proposes a new neural architecture for recommendation with reviews. Our model operates on a multi-hierarchical paradigm and is based on the intuition that not all reviews are created equal, i.e., only a selected few are important. The importance, however, should be dynamically inferred depending on the current target. To this end, we propose a review-by-review pointer-based learning scheme that extracts important reviews from user and item reviews and subsequently matches them in a word-by-word fashion. This enables not only the most informative reviews to be utilized for prediction but also a deeper word-level interaction. Our pointer-based method operates with a gumbel-softmax based pointer mechanism that enables the incorporation of discrete vectors within differentiable neural architectures. Our pointer mechanism is co-attentive in nature, learning pointers which are co-dependent on user-item relationships. Finally, we propose a multi-pointer learning scheme that learns to combine multiple views of user-item interactions. We demonstrate the effectiveness of our proposed model via extensive experiments on 24 benchmark datasets from Amazon and Yelp. Empirical results show that our approach significantly outperforms existing state-of-the-art models, with up to 19% and 71% relative improvement when compared to TransNet and DeepCoNN respectively. We study the behavior of our multi-pointer learning mechanism, shedding light on 'evidence aggregation' patterns in review-based recommender systems.",
"title": ""
},
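The pointer mechanism described above uses a Gumbel-softmax so that selecting a review is (nearly) discrete yet still differentiable. The NumPy sketch below shows only that sampling step with made-up review logits; it is not the paper's co-attentive multi-pointer architecture.

```python
# Sketch: Gumbel-softmax sampling, the trick that lets a 'pointer' over reviews be
# approximately discrete while remaining differentiable.  Logits are made up.
import numpy as np

def gumbel_softmax_sample(logits, temperature=0.5, rng=None):
    rng = rng or np.random.default_rng()
    u = rng.uniform(low=1e-9, high=1.0, size=logits.shape)
    gumbel_noise = -np.log(-np.log(u))
    y = (logits + gumbel_noise) / temperature
    y = y - y.max()                     # numerical stability
    expy = np.exp(y)
    return expy / expy.sum()

review_logits = np.array([0.2, 2.5, -1.0, 0.7])   # scores for 4 candidate reviews
sample = gumbel_softmax_sample(review_logits, temperature=0.3)
print(sample)                                     # close to one-hot at low temperature
print("selected review:", int(sample.argmax()))
```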
{
"docid": "293e1834eef415f08e427a41e78d818f",
"text": "Autonomous robots are complex systems that require the interaction between numerous heterogeneous components (software and hardware). Because of the increase in complexity of robotic applications and the diverse range of hardware, robotic middleware is designed to manage the complexity and heterogeneity of the hardware and applications, promote the integration of new technologies, simplify software design, hide the complexity of low-level communication and the sensor heterogeneity of the sensors, improve software quality, reuse robotic software infrastructure across multiple research efforts, and to reduce production costs. This paper presents a literature survey and attribute-based bibliography of the current state of the art in robotic middleware design. The main aim of the survey is to assist robotic middleware researchers in evaluating the strengths and weaknesses of current approaches and their appropriateness for their applications. Furthermore, we provide a comprehensive set of appropriate bibliographic references that are classified based on middleware attributes.",
"title": ""
},
{
"docid": "f898a6d7e3a5e9cced5b9da69ef40204",
"text": "Software readability is a property that influences how easily a given piece of code can be read and understood. Since readability can affect maintainability, quality, etc., programmers are very concerned about the readability of code. If automatic readability checkers could be built, they could be integrated into development tool-chains, and thus continually inform developers about the readability level of the code. Unfortunately, readability is a subjective code property, and not amenable to direct automated measurement. In a recently published study, Buse et al. asked 100 participants to rate code snippets by readability, yielding arguably reliable mean readability scores of each snippet; they then built a fairly complex predictive model for these mean scores using a large, diverse set of directly measurable source code properties. We build on this work: we present a simple, intuitive theory of readability, based on size and code entropy, and show how this theory leads to a much sparser, yet statistically significant, model of the mean readability scores produced in Buse's studies. Our model uses well-known size metrics and Halstead metrics, which are easily extracted using a variety of tools. We argue that this approach provides a more theoretically well-founded, practically usable, approach to readability measurement.",
"title": ""
},
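The sparse model described above is built from size and Halstead metrics, which are cheap to derive from operator and operand counts. The sketch below computes the standard Halstead quantities from such counts; the example counts are invented, and the fitted regression that maps them to a readability score is not reproduced here.

```python
# Sketch: Halstead metrics from operator/operand counts, the kind of cheap, easily
# extracted measurements the readability model above is built on.  Example counts
# are made up; the fitted model weights are not shown.
import math

def halstead_metrics(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands, N1/N2: total operator/operand occurrences."""
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)
    difficulty = (n1 / 2.0) * (N2 / n2)
    effort = difficulty * volume
    return {"vocabulary": vocabulary, "length": length,
            "volume": volume, "difficulty": difficulty, "effort": effort}

# Example: a small snippet with 8 distinct operators, 10 distinct operands,
# 27 total operator occurrences, and 33 total operand occurrences.
print(halstead_metrics(n1=8, n2=10, N1=27, N2=33))
```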
{
"docid": "db54908608579efd067853fed5d3e4e8",
"text": "The detection of moving objects from stationary cameras is usually approached by background subtraction, i.e. by constructing and maintaining an up-to-date model of the background and detecting moving objects as those that deviate from such a model. We adopt a previously proposed approach to background subtraction based on self-organization through artificial neural networks, that has been shown to well cope with several of the well known issues for background maintenance. Here, we propose a spatial coherence variant to such approach to enhance robustness against false detections and formulate a fuzzy model to deal with decision problems typically arising when crisp settings are involved. We show through experimental results and comparisons that higher accuracy values can be reached for color video sequences that represent typical situations critical for moving object detection.",
"title": ""
},
{
"docid": "8abbd5e2ab4f419a4ca05277a8b1b6a5",
"text": "This paper presents an innovative broadband millimeter-wave single balanced diode mixer that makes use of a substrate integrated waveguide (SIW)-based 180 hybrid. It has low conversion loss of less than 10 dB, excellent linearity, and high port-to-port isolations over a wide frequency range of 20 to 26 GHz. The proposed mixer has advantages over previously reported millimeter-wave mixer structures judging from a series of aspects such as cost, ease of fabrication, planar construction, and broadband performance. Furthermore, a receiver front-end that integrates a high-performance SIW slot-array antenna and our proposed mixer is introduced. Based on our proposed receiver front-end structure, a K-band wireless communication system with M-ary quadrature amplitude modulation is developed and demonstrated for line-of-sight channels. Excellent overall error vector magnitude performance has been obtained.",
"title": ""
},
{
"docid": "e0140fa65c44d867a1d128d45fdc40e3",
"text": "Recursion is an important topic in computer science curricula. It is related to the acquisition of competences regarding problem decomposition, functional abstraction and the concept of induction. In comparison with direct recursion, mutual recursion is considered to be more complex. Consequently, it is generally addressed superficially in CS1/2 programming courses and textbooks. We show that, when a problem is approached appropriately, not only can mutual recursion be a powerful tool, but it can also be easy to understand and fun. This paper provides several intuitive and attractive algorithms that rely on mutual recursion, and which have been designed to help strengthen students' ability to decompose problems and apply induction. Furthermore, we show that a solution based on mutual recursion may be easier to design, prove and comprehend than other solutions based on direct recursion. We have evaluated the use of these algorithms while teaching recursion concepts. Results suggest that mutual recursion, in comparison with other types of recursion, is not as hard as it seems when: (1) determining the result of a (mathematical) function call, and, most importantly, (2) designing algorithms for solving simple problems.",
"title": ""
},
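A standard minimal illustration of the kind of mutually recursive definition discussed above is the even/odd pair below, where each function is defined in terms of the other; it is a textbook example, not one of the paper's proposed algorithms.

```python
# Sketch: the classic even/odd pair, a minimal example of mutual recursion in which
# each function calls the other.  (A textbook example, not taken from the paper.)
def is_even(n):
    if n == 0:
        return True
    return is_odd(n - 1)

def is_odd(n):
    if n == 0:
        return False
    return is_even(n - 1)

print([n for n in range(10) if is_even(n)])  # [0, 2, 4, 6, 8]
```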
{
"docid": "31dbf3fcd1a70ad7fb32fb6e69ef88e3",
"text": "OBJECTIVE\nHealth care researchers have not taken full advantage of the potential to effectively convey meaning in their multivariate data through graphical presentation. The aim of this paper is to translate knowledge from the fields of analytical chemistry, toxicology, and marketing research to the field of medicine by introducing the radar plot, a useful graphical display method for multivariate data.\n\n\nSTUDY DESIGN AND SETTING\nDescriptive study based on literature review.\n\n\nRESULTS\nThe radar plotting technique is described, and examples are used to illustrate not only its programming language, but also the differences in tabular and bar chart approaches compared to radar-graphed data displays.\n\n\nCONCLUSION\nRadar graphing, a form of radial graphing, could have great utility in the presentation of health-related research, especially in situations in which there are large numbers of independent variables, possibly with different measurement scales. This technique has particular relevance for researchers who wish to illustrate the degree of multiple-group similarity/consensus, or group differences on multiple variables in a single graphical display.",
"title": ""
},
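For readers who want to try the display, the sketch below draws a basic radar (polar) plot of one group's scores on several variables with matplotlib; the variable names and values are invented, and a real comparison would overlay several groups on the same axes.

```python
# Sketch: a minimal radar plot of one group's scores on several (invented) variables,
# using matplotlib's polar axes.  Real uses would overlay multiple groups for comparison.
import numpy as np
import matplotlib.pyplot as plt

labels = ["pain", "mobility", "mood", "sleep", "energy"]
values = [3.0, 4.5, 2.0, 3.5, 4.0]

angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False)
angles = np.concatenate([angles, angles[:1]])   # close the polygon
values = values + values[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
plt.show()
```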
{
"docid": "68e3a910cd0f4131500bc808a1ac040d",
"text": "With the introduction of the H.264/AVC video coding standard, significant improvements have recently been demonstrated in video compression capability. The Joint Video Team of the ITU-T VCEG and the ISO/IEC MPEG has now also standardized a Scalable Video Coding (SVC) extension of the H.264/AVC standard. SVC enables the transmission and decoding of partial bit streams to provide video services with lower temporal or spatial resolutions or reduced fidelity while retaining a reconstruction quality that is high relative to the rate of the partial bit streams. Hence, SVC provides functionalities such as graceful degradation in lossy transmission environments as well as bit rate, format, and power adaptation. These functionalities provide enhancements to transmission and storage applications. SVC has achieved significant improvements in coding efficiency with an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. This paper provides an overview of the basic concepts for extending H.264/AVC towards SVC. Moreover, the basic tools for providing temporal, spatial, and quality scalability are described in detail and experimentally analyzed regarding their efficiency and complexity.",
"title": ""
},
{
"docid": "5365f6f5174c3d211ea562c8a7fa0aab",
"text": "Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good cryptographic primitives. In this talk, we will first introduce the ba- sics of GANs and then discuss the fundamental statistical question about GANs — assuming the training can succeed with polynomial samples, can we have any statistical guarantees for the estimated distributions? In the work with Arora, Ge, Liang, and Zhang, we suggested a dilemma: powerful discriminators cause overfitting, whereas weak discriminators cannot detect mode collapse. Such a conundrum may be solved or alleviated by designing discrimina- tor class with strong distinguishing power against the particular generator class (instead of against all possible generators.)",
"title": ""
},
{
"docid": "22be2a234b9211cefc713be861862d82",
"text": "BACKGROUND\nA new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference form murmurs.\n\n\nMETHOD\nEqual number of cardiac cycles were extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using auto-correlation of envelope signals, features extraction using discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors.\n\n\nRESULT\nThe proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. Geometric mean was used as performance index. Average classification performance using ten-fold cross-validation was 0.92 for noise free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise up to 0.3 s duration.\n\n\nCONCLUSION\nThe proposed method showed promising results and high noise robustness to a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by different sources of heart sounds in the current training set, and to concretely validate the method. Further work include building a new training set recorded from actual patients, then further evaluate the method based on this new training set.",
"title": ""
},
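The step that makes the method above segmentation-free is estimating the cardiac cycle length from the autocorrelation of the envelope signal. The sketch below shows that single step on a synthetic periodic envelope; the sampling rate, noise level, and peak-picking thresholds are illustrative choices, not the paper's settings.

```python
# Sketch: estimate the cardiac cycle length as the lag of the first strong peak of the
# envelope autocorrelation, the step that lets equal numbers of cycles be extracted
# without labelling individual heart sounds.  The envelope here is synthetic; in the
# method above it would come from envelope detection of a real recording.
import numpy as np
from scipy.signal import find_peaks

fs = 1000                                  # Hz, assumed sampling rate
t = np.arange(0, 6, 1 / fs)
true_cycle_s = 0.8                         # synthetic cycle length (75 bpm)
envelope = (np.maximum(0.0, np.sin(2 * np.pi * t / true_cycle_s)) ** 4
            + 0.02 * np.random.default_rng(0).normal(size=t.size))

env = envelope - envelope.mean()
acf = np.correlate(env, env, mode="full")[env.size - 1:]   # non-negative lags
acf /= acf[0]

min_lag = int(0.3 * fs)                    # ignore implausibly short cycles (> 200 bpm)
peaks, _ = find_peaks(acf[min_lag:], height=0.3)
cycle_samples = min_lag + peaks[0]
print("estimated cycle length: %.3f s" % (cycle_samples / fs))
```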
{
"docid": "4c9313e27c290ccc41f3874108593bf6",
"text": "Very few standards exist for fitting products to people. Footwear is a noteworthy example. This study is an attempt to evaluate the quality of footwear fit using two-dimensional foot outlines. Twenty Hong Kong Chinese students participated in an experiment that involved three pairs of dress shoes and one pair of athletic shoes. The participants' feet were scanned using a commercial laser scanner, and each participant wore and rated the fit of each region of each shoe. The shoe lasts were also scanned and were used to match the foot scans with the last scans. The ANOVA showed significant (p < 0.05) differences among the four pairs of shoes for the overall, fore-foot and rear-foot fit ratings. There were no significant differences among shoes for mid-foot fit rating. These perceived differences were further analysed after matching the 2D outlines of both last and feet. The point-wise dimensional difference between foot and shoe outlines were computed and analysed after normalizing with foot perimeter. The dimensional difference (DD) plots along the foot perimeter showed that fore-foot fit was strongly correlated (R(2) > 0.8) with two of the minimums in the DD-plot while mid-foot fit was strongly correlated (R(2) > 0.9) with the dimensional difference around the arch region and a point on the lateral side of the foot. The DD-plots allow the designer to determine the critical locations that may affect footwear fit in addition to quantifying the nature of misfit so that design changes to shape and material may be possible.",
"title": ""
},
{
"docid": "dfac485205134103cb66b07caa6fbaf0",
"text": "Electrical responses of the single muscle fibre (SFER) by stimulation of the motor terminal nerve-endings have been investigated in normal subjects at various ages in vivo. Shape, latency, rise-time and interspike distance seem to be SFER's most interesting parameters of the functional organisation of the motor subunits and their terminal fractions. \"Time\" parameters of SFER are in agreement with the anatomo-functional characteristics of the excited tissues during ageing.",
"title": ""
},
{
"docid": "2ab1f2d0ca28851dcc36721686a06fa2",
"text": "A quarter-century ago visual neuroscientists had little information about the number and organization of retinotopic maps in human visual cortex. The advent of functional magnetic resonance imaging (MRI), a non-invasive, spatially-resolved technique for measuring brain activity, provided a wealth of data about human retinotopic maps. Just as there are differences amongst non-human primate maps, the human maps have their own unique properties. Many human maps can be measured reliably in individual subjects during experimental sessions lasting less than an hour. The efficiency of the measurements and the relatively large amplitude of functional MRI signals in visual cortex make it possible to develop quantitative models of functional responses within specific maps in individual subjects. During this last quarter-century, there has also been significant progress in measuring properties of the human brain at a range of length and time scales, including white matter pathways, macroscopic properties of gray and white matter, and cellular and molecular tissue properties. We hope the next 25years will see a great deal of work that aims to integrate these data by modeling the network of visual signals. We do not know what such theories will look like, but the characterization of human retinotopic maps from the last 25years is likely to be an important part of future ideas about visual computations.",
"title": ""
},
{
"docid": "8a9191c256f62b7efce93033752059e6",
"text": "Food products fermented by lactic acid bacteria have long been used for their proposed health promoting properties. In recent years, selected probiotic strains have been thoroughly investigated for specific health effects. Properties like relief of lactose intolerance symptoms and shortening of rotavirus diarrhoea are now widely accepted for selected probiotics. Some areas, such as the treatment and prevention of atopy hold great promise. However, many proposed health effects still need additional investigation. In particular the potential benefits for the healthy consumer, the main market for probiotic products, requires more attention. Also, the potential use of probiotics outside the gastrointestinal tract deserves to be explored further. Results from well conducted clinical studies will expand and increase the acceptance of probiotics for the treatment and prevention of selected diseases.",
"title": ""
},
{
"docid": "7af567a60ce0bc0a67d7431184ac54ac",
"text": "Users of social media sites like Facebook and Twitter rely on crowdsourced content recommendation systems (e.g., Trending Topics) to retrieve important and useful information.",
"title": ""
},
{
"docid": "a8ce9987f4841265946479b86c218313",
"text": "Sensorless control of a permanent-magnet synchronous machine at low and zero speed is based on the injection of an oscillating high-frequency carrier signal. A particular demodulation technique serves to eliminate the estimation error introduced by pulsewidth modulation delay and the nonlinear characteristics of the inverter. Before the drive is started, the initial rotor position and the magnet polarity are detected. The initialization is performed by injecting an AC carrier and two short current pulses in a sequence.",
"title": ""
},
{
"docid": "a66cc5179dd276acd4d49dd32e3fe9df",
"text": "Improving student achievement is vital for our nation’s competitiveness. Scientific research shows how the physical classroom environment influences student achievement. Two findings are key: First, the building’s structural facilities profoundly influence learning. Inadequate lighting, noise, low air quality, and deficient heating in the classroom are significantly related to worse student achievement. Over half of U.S. schools have inadequate structural facilities, and students of color and lower income students are more likely to attend schools with inadequate structural facilities. Second, scientific studies reveal the unexpected importance of a classroom’s symbolic features, such as objects and wall décor, in influencing student learning and achievement in that environment. Symbols inform students whether they are valued learners and belong within the classroom, with far-reaching consequences for students’ educational choices and achievement. We outline policy implications of the scientific findings—noting relevant policy audiences—and specify critical features of classroom design that can improve student achievement, especially for the most vulnerable students.",
"title": ""
}
] |
scidocsrr
|
3308d88c3d0739510c8647911d2d6f1c
|
Understand Functionality and Dimensionality of Vector Embeddings: the Distributional Hypothesis, the Pairwise Inner Product Loss and Its Bias-Variance Trade
|
[
{
"docid": "a3aad879ca5f7e7683c1377e079c4726",
"text": "Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods including Vector Space Methods (VSMs) such as Latent Semantic Analysis (LSA), generative text models such as topic models, matrix factorization, neural nets, and energy-based models. Many of these use nonlinear operations on co-occurrence statistics, such as computing Pairwise Mutual Information (PMI). Some use hand-tuned hyperparameters and term reweighting. Often a generative model can help provide theoretical insight into such modeling choices, but there appears to be no such model to “explain” the above nonlinear models. For example, we know of no generative model for which the correct solution is the usual (dimension-restricted) PMI model. This paper gives a new generative model, a dynamic version of the loglinear topic model of Mnih and Hinton (2007), as well as a pair of training objectives called RAND-WALK to compute word embeddings. The methodological novelty is to use the prior to compute closed form expressions for word statistics. These provide an explanation for the PMI model and other recent models, as well as hyperparameter choices. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are spatially isotropic. The model also helps explain why linear algebraic structure arises in low-dimensional semantic embeddings. Such structure has been used to solve analogy tasks by Mikolov et al. (2013a) and many subsequent papers. This theoretical explanation is to give an improved analogy solving method that improves success rates on analogy solving by a few percent.",
"title": ""
}
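The nonlinear co-occurrence pipeline the passage refers to can be illustrated end to end in a few lines: build a (positive) PMI matrix from co-occurrence counts, then factorize it to get low-dimensional word vectors. The sketch below does exactly that on an invented toy vocabulary; it illustrates the PMI model being explained, not the paper's generative RAND-WALK model itself.

```python
# Sketch: word vectors from co-occurrence counts via positive PMI + truncated SVD.
# The toy counts are invented; this is the classical PMI pipeline, not the paper's model.
import numpy as np

vocab = ["king", "queen", "man", "woman", "apple"]
counts = np.array([            # symmetric toy co-occurrence counts
    [0, 8, 6, 2, 1],
    [8, 0, 2, 6, 1],
    [6, 2, 0, 5, 1],
    [2, 6, 5, 0, 1],
    [1, 1, 1, 1, 0],
], dtype=float)

total = counts.sum()
p_ij = counts / total
p_i = counts.sum(axis=1) / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(p_ij / np.outer(p_i, p_i))
ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)

U, S, _ = np.linalg.svd(ppmi)
dim = 2
vectors = U[:, :dim] * np.sqrt(S[:dim])        # rank-2 embedding

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

w = dict(zip(vocab, vectors))
print("king~queen:", cos(w["king"], w["queen"]))
print("king~apple:", cos(w["king"], w["apple"]))
```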
] |
[
{
"docid": "984b2f763a14331c5da36cd08f7482de",
"text": "This review of 68 studies compares the methodologies used for the identification and quantification of microplastics from the marine environment. Three main sampling strategies were identified: selective, volume-reduced, and bulk sampling. Most sediment samples came from sandy beaches at the high tide line, and most seawater samples were taken at the sea surface using neuston nets. Four steps were distinguished during sample processing: density separation, filtration, sieving, and visual sorting of microplastics. Visual sorting was one of the most commonly used methods for the identification of microplastics (using type, shape, degradation stage, and color as criteria). Chemical and physical characteristics (e.g., specific density) were also used. The most reliable method to identify the chemical composition of microplastics is by infrared spectroscopy. Most studies reported that plastic fragments were polyethylene and polypropylene polymers. Units commonly used for abundance estimates are \"items per m(2)\" for sediment and sea surface studies and \"items per m(3)\" for water column studies. Mesh size of sieves and filters used during sampling or sample processing influence abundance estimates. Most studies reported two main size ranges of microplastics: (i) 500 μm-5 mm, which are retained by a 500 μm sieve/net, and (ii) 1-500 μm, or fractions thereof that are retained on filters. We recommend that future programs of monitoring continue to distinguish these size fractions, but we suggest standardized sampling procedures which allow the spatiotemporal comparison of microplastic abundance across marine environments.",
"title": ""
},
{
"docid": "706e2131c7ebcde981e140241420116f",
"text": "Most commonly used distributed machine learning systems are either synchronous or centralized asynchronous. Synchronous algorithms like AllReduceSGD perform poorly in a heterogeneous environment, while asynchronous algorithms using a parameter server suffer from 1) communication bottleneck at parameter servers when workers are many, and 2) significantly worse convergence when the traffic to parameter server is congested. Can we design an algorithm that is robust in a heterogeneous environment, while being communication efficient and maintaining the best-possible convergence rate? In this paper, we propose an asynchronous decentralized stochastic gradient decent algorithm (AD-PSGD) satisfying all above expectations. Our theoretical analysis shows AD-PSGD converges at the optimal O(1/ √ K) rate as SGD and has linear speedup w.r.t. number of workers. Empirically, ADPSGD outperforms the best of decentralized parallel SGD (D-PSGD), asynchronous parallel SGD (APSGD), and standard data parallel SGD (AllReduceSGD), often by orders of magnitude in a heterogeneous environment. When training ResNet-50 on ImageNet with up to 128 GPUs, AD-PSGD converges (w.r.t epochs) similarly to the AllReduce-SGD, but each epoch can be up to 4-8× faster than its synchronous counterparts in a network-sharing HPC environment.",
"title": ""
},
{
"docid": "fb31ead676acdd048d699ddfb4ddd17a",
"text": "Software defects prediction aims to reduce software testing efforts by guiding the testers through the defect classification of software systems. Defect predictors are widely used in many organizations to predict software defects in order to save time, improve quality, testing and for better planning of the resources to meet the timelines. The application of statistical software testing defect prediction model in a real life setting is extremely difficult because it requires more number of data variables and metrics and also historical defect data to predict the next releases or new similar type of projects. This paper explains our statistical model, how it will accurately predict the defects for upcoming software releases or projects. We have used 20 past release data points of software project, 5 parameters and build a model by applying descriptive statistics, correlation and multiple linear regression models with 95% confidence intervals (CI). In this appropriate multiple linear regression model the R-square value was 0.91 and its Standard Error is 5.90%. The Software testing defect prediction model is now being used to predict defects at various testing projects and operational releases. We have found 90.76% precision between actual and predicted defects.",
"title": ""
},
{
"docid": "69561d0f42cf4aae73d4c97c1871739e",
"text": "Recent methods based on 3D skeleton data have achieved outstanding performance due to its conciseness, robustness, and view-independent representation. With the development of deep learning, Convolutional Neural Networks (CNN) and Long Short Term Memory (LSTM)-based learning methods have achieved promising performance for action recognition. However, for CNN-based methods, it is inevitable to loss temporal information when a sequence is encoded into images. In order to capture as much spatial-temporal information as possible, LSTM and CNN are adopted to conduct effective recognition with later score fusion. In addition, experimental results show that the score fusion between CNN and LSTM performs better than that between LSTM and LSTM for the same feature. Our method achieved state-of-the-art results on NTU RGB+D datasets for 3D human action analysis. The proposed method achieved 87.40% in terms of accuracy and ranked 1st place in Large Scale 3D Human Activity Analysis Challenge in Depth Videos.",
"title": ""
},
{
"docid": "9c4c13c38e2b96aa3141b1300ca356c6",
"text": "Awareness plays a major role in human cognition and adaptive behaviour, though mechanisms involved remain unknown. Awareness is not an objectively established fact, therefore, despite extensive research, scientists have not been able to fully interpret its contribution in multisensory integration and precise neural firing, hence, questions remain: (1) How the biological neuron integrates the incoming multisensory signals with respect to different situations? (2) How are the roles of incoming multisensory signals defined (selective amplification or attenuation) that help neuron(s) to originate a precise neural firing complying with the anticipated behavioural-constraint of the environment? (3) How are the external environment and anticipated behaviour integrated? Recently, scientists have exploited deep learning architectures to integrate multimodal cues and capture context-dependent meanings. Yet, these methods suffer from imprecise behavioural representation and a limited understanding of neural circuitry or underlying information processing mechanisms with respect to the outside world. In this research, we introduce a new theory on the role of awareness and universal context that can help answering the aforementioned crucial neuroscience questions. Specifically, we propose a class of spiking conscious neuron in which the output depends on three functionally distinctive integrated input variables: receptive field (RF), local contextual field (LCF), and universal contextual field (UCF) a newly proposed dimension. The RF defines the incoming ambiguous sensory signal, LCF defines the modulatory sensory signal coming from other parts of the brain, and UCF defines the awareness. It is believed that the conscious neuron inherently contains enough knowledge about the situation in which the problem is to be solved based on past learning and reasoning and it defines the precise role of incoming multisensory signals (amplification or attenuation) to originate a precise neural firing (exhibiting switch-like behaviour). It is shown, when implemented within an SCNN, the conscious neuron helps modelling a more precise human behaviour e.g., when exploited to model human audiovisual speech processing, the SCNN performed comparably to deep long-short-term memory (LSTM) network. We believe that the proposed theory could be applied to address a range of real-world problems including elusive neural disruptions, explainable artificial intelligence, human-like computing, low-power neuromorphic chips etc.",
"title": ""
},
{
"docid": "5a7ca2cab0162e49809723e75f9bdef5",
"text": "Gene expression is inherently stochastic; precise gene regulation by transcription factors is important for cell-fate determination. Many transcription factors regulate their own expression, suggesting that autoregulation counters intrinsic stochasticity in gene expression. Using a new strategy, cotranslational activation by cleavage (CoTrAC), we probed the stochastic expression dynamics of cI, which encodes the bacteriophage λ repressor CI, a fate-determining transcription factor. CI concentration fluctuations influence both lysogenic stability and induction of bacteriophage λ. We found that the intrinsic stochasticity in cI expression was largely determined by CI expression level irrespective of autoregulation. Furthermore, extrinsic, cell-to-cell variation was primarily responsible for CI concentration fluctuations, and negative autoregulation minimized CI concentration heterogeneity by counteracting extrinsic noise and introducing memory. This quantitative study of transcription factor expression dynamics sheds light on the mechanisms cells use to control noise in gene regulatory networks.",
"title": ""
},
{
"docid": "c5a225211a7240da086299e45bddf6e3",
"text": "This communication presents a technique to re-direct the radiation beam from a planar antenna in a specific direction with the inclusion of metamaterial loading. The beam-tilting approach described here uses the phenomenon based on phase change resulting from an EM wave entering a medium of different refractive index. The metamaterial H-shaped unit-cell structure is configured to provide a high refractive index which was used to implement beam tilting in a bow-tie antenna. The fabricated unit-cell was first characterized by measuring its S-parameters. Hence, a two dimensional array was constructed using the proposed unit-cell to create a region of high refractive index which was implemented in the vicinity bow-tie structure to realize beam-tilting. The simulation and experimental results show that the main beam of the antenna in the E-plane is tilted by 17 degrees with respect to the end-fire direction at 7.3, 7.5, and 7.7 GHz. Results also show unlike conventional beam-tilting antennas, no gain drop is observed when the beam is tilted; in fact there is a gain enhancement of 2.73 dB compared to the original bow-tie antenna at 7.5 GHz. The reflection-coeflicient of the antenna remains <; - 10 dB in the frequency range of operation.",
"title": ""
},
{
"docid": "2032b497d966119b3ba45d97ce1dfb31",
"text": "OBJECTIVES\nPatients presenting with neck mass are challenging for many otolaryngologists. If a mass on the lower lateral neck exists with swallowing and disappears after swallowing, it has been diagnosed as an omohyoid syndrome in most literature. The mechanism of sternohyoid syndrome has not been proven or investigated before. We investigated sternohyoid syndrome, commonly misdiagnosed as an omohyoid syndrome.\n\n\nMETHODS AND PATIENTS\nTwo patients were investigated. Outpatient photography, computed tomography and operating findings were reviewed. We found that the sternohyoid muscle was inserted at an abnormal site, the midportion of the clavicle. There was no abnormality of other muscles. We also reviewed all literature that previously diagnosed this condition as an omohyoid syndrome.\n\n\nRESULTS\nThere was no literature about sternohyoid syndrome. We found that the abnormal muscle is a sternohyoid muscle and not omohyoid muscle. The color of the left sternohyoid muscle was dark red, and the fascia covering the muscle was denuded. The muscle had lost elasticity and moved abnormally.\n\n\nCONCLUSION\nOur patients did not have omohyoid syndrome. The symptoms of omohyoid syndrome are the same as sternohyoid syndrome but the problematic muscle is different. This is the first known report diagnosing sternohyoid syndrome, and should be a consideration in the diagnosis of a lateral neck mass.",
"title": ""
},
{
"docid": "da5027a5edd2d4d0dc447a97534c99ba",
"text": "Many distributed services are hosted at large, shared, geographically diverse data centers, and they use replicatio n to achieve high availability despite the failure of an entire d ata center. Recent events show that non-crash faults occur in th ese services and may lead to long outages. While Byzantine Fault Tolerance (BFT) could be used to withstand these faults, cur rent BFT protocols can become unavailable if a small fraction of their replicas are unreachable. This is because exis ting BFT protocols favor strong safety guarantees (consiste ncy) over liveness (availability). This paper presents a novel BFT state machine replication protocol called Zeno, that trades consistency for high er availability. In particular, Zeno replaces linearizabili ty with eventual consistency , where clients can temporarily miss each other’s updates but when the network is stable the states fro m the individual partitions are merged by having the replicas agree on a total order for the requests. We have built a protot ype of Zeno and our evaluation using micro-benchmarks shows tha t Zeno provides better availability than traditional BFT pro tocols, and that its impact on performance is low, even when par titions occur or heal.",
"title": ""
},
{
"docid": "314ffaaf39e2345f90e85fc5c5fdf354",
"text": "With the fast development pace of deep submicron technology, the size and density of semiconductor memory grows rapidly. However, keeping a high level of yield and reliability for memory products is more and more difficult. Both the redundancy repair and ECC techniques have been widely used for enhancing the yield and reliability of memory chips. Specifically, the redundancy repair and ECC techniques are conventionally used to repair or correct the hard faults and soft errors, respectively. In this paper, we propose an integrated ECC and redundancy repair scheme for memory reliability enhancement. Our approach can identify the hard faults and soft errors during the memory normal operation mode, and repair the hard faults during the memory idle time as long as there are unused redundant elements. We also develop a method for evaluating the memory reliability. Experimental results show that the proposed approach is effective, e.g., the MTTF of a 32K /spl times/ 64 memory is improved by 1.412 hours (7.1%) with our integrated ECC and repair scheme.",
"title": ""
},
{
"docid": "fceb43462f77cf858ef9747c1c5f0728",
"text": "MapReduce has become a dominant parallel computing paradigm for big data, i.e., colossal datasets at the scale of tera-bytes or higher. Ideally, a MapReduce system should achieve a high degree of load balancing among the participating machines, and minimize the space usage, CPU and I/O time, and network transfer at each machine. Although these principles have guided the development of MapReduce algorithms, limited emphasis has been placed on enforcing serious constraints on the aforementioned metrics simultaneously. This paper presents the notion of minimal algorithm, that is, an algorithm that guarantees the best parallelization in multiple aspects at the same time, up to a small constant factor. We show the existence of elegant minimal algorithms for a set of fundamental database problems, and demonstrate their excellent performance with extensive experiments.",
"title": ""
},
{
"docid": "19e7b6c34c763952112c8492450de2b5",
"text": "Handling intellectual property involves the cognitive process of understanding the innovation described in the body of patent claims. In this paper we present an on-going project on a multi-level text simplification to assist experts in this complex task. Two levels of simplification procedure are described. The macro-level simplification results in the visualization of the hierarchy of multiple claims. The micro-level simplification includes visualization of the claim terminology, decomposition of the claim complex structure into a set of simple sentences and building a graph explicitly showing the interrelations of the invention elements. The methodology is implemented in an experimental text simplifying computer system. The motivation underlying this research is to develop tools that could increase the overall productivity of human users and machines in processing patent applications.",
"title": ""
},
{
"docid": "c5ca0bce645a6d460ca3e01e4150cce5",
"text": "The technological advancement and sophistication in cameras and gadgets prompt researchers to have focus on image analysis and text understanding. The deep learning techniques demonstrated well to assess the potential for classifying text from natural scene images as reported in recent years. There are variety of deep learning approaches that prospects the detection and recognition of text, effectively from images. In this work, we presented Arabic scene text recognition using Convolutional Neural Networks (ConvNets) as a deep learning classifier. As the scene text data is slanted and skewed, thus to deal with maximum variations, we employ five orientations with respect to single occurrence of a character. The training is formulated by keeping filter size 3 × 3 and 5 × 5 with stride value as 1 and 2. During text classification phase, we trained network with distinct learning rates. Our approach reported encouraging results on recognition of Arabic characters from segmented Arabic scene images.",
"title": ""
},
{
"docid": "7a9572c3c74f9305ac0d817b04e4399a",
"text": "Due to the limited length and freely constructed sentence structures, it is a difficult classification task for short text classification. In this paper, a short text classification framework based on Siamese CNNs and few-shot learning is proposed. The Siamese CNNs will learn the discriminative text encoding so as to help classifiers distinguish those obscure or informal sentence. The different sentence structures and different descriptions of a topic are viewed as ‘prototypes’, which will be learned by few-shot learning strategy to improve the classifier’s generalization. Our experimental results show that the proposed framework leads to better results in accuracies on twitter classifications and outperforms some popular traditional text classification methods and a few deep network approaches.",
"title": ""
},
{
"docid": "619165e7f74baf2a09271da789e724df",
"text": "MOST verbal communication occurs in contexts where the listener can see the speaker as well as hear him. However, speech perception is normally regarded as a purely auditory process. The study reported here demonstrates a previously unrecognised influence of vision upon speech perception. It stems from an observation that, on being shown a film of a young woman's talking head, in which repeated utterances of the syllable [ba] had been dubbed on to lip movements for [ga], normal adults reported hearing [da]. With the reverse dubbing process, a majority reported hearing [bagba] or [gaba]. When these subjects listened to the soundtrack from the film, without visual input, or when they watched untreated film, they reported the syllables accurately as repetitions of [ba] or [ga]. Subsequent replications confirm the reliability of these findings; they have important implications for the understanding of speech perception.",
"title": ""
},
{
"docid": "a6f2cee851d2c22d471f473caf1710a1",
"text": "One of the main reasons why Byzantine fault-tolerant (BFT) systems are currently not widely used lies in their high resource consumption: <inline-formula><tex-math notation=\"LaTeX\">$3f+1$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq1-2495213.gif\"/></alternatives></inline-formula> replicas are required to tolerate only <inline-formula><tex-math notation=\"LaTeX\">$f$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq2-2495213.gif\"/></alternatives></inline-formula> faults. Recent works have been able to reduce the minimum number of replicas to <inline-formula><tex-math notation=\"LaTeX\">$2f+1$</tex-math> <alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq3-2495213.gif\"/></alternatives></inline-formula> by relying on trusted subsystems that prevent a faulty replica from making conflicting statements to other replicas without being detected. Nevertheless, having been designed with the focus on fault handling, during normal-case operation these systems still use more resources than actually necessary to make progress in the absence of faults. This paper presents <italic>Resource-efficient Byzantine Fault Tolerance</italic> (<sc>ReBFT</sc>), an approach that minimizes the resource usage of a BFT system during normal-case operation by keeping <inline-formula> <tex-math notation=\"LaTeX\">$f$</tex-math><alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq4-2495213.gif\"/> </alternatives></inline-formula> replicas in a passive mode. In contrast to active replicas, passive replicas neither participate in the agreement protocol nor execute client requests; instead, they are brought up to speed by verified state updates provided by active replicas. In case of suspected or detected faults, passive replicas are activated in a consistent manner. To underline the flexibility of our approach, we apply <sc>ReBFT</sc> to two existing BFT systems: PBFT and MinBFT.",
"title": ""
},
{
"docid": "7eca03a9a5765ae0e234f74f9ef5cb4c",
"text": "In agile processes like Scrum, strong customer involvement demands for techniques to facilitate the requirements analysis and acceptance testing. Additionally, test automation is crucial, as incremental development and continuous integration require high efforts for testing. To cope with these challenges, we propose a modelbased technique for documenting customer’s requirements in forms of test models. These can be used by the developers as requirements specification and by the testers for acceptance testing. The modeling languages we use are light-weight and easy-to-learn. From the test models, we generate test scripts for FitNesse or Selenium which are well-established test automation tools in agile community.",
"title": ""
},
{
"docid": "cd8c1c24d4996217c8927be18c48488f",
"text": "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTMbased models. We propose the weight-dropped LSTM which uses DropConnect on hidden-tohidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.",
"title": ""
},
{
"docid": "6386231b83b779db1066ee870d26c5f8",
"text": "Video gaming is a firmly established leisure pursuit, which continues to grow in popularity. This paper is an examination of what motivates people to play computer games, and the relevance of such factors to the positive and negative aspects of computer gaming. When all of an individual’s motivations to play video games are for the pursuit of ‘fun’, it is said that an intrinsic motivation is the most prevalent motivation. However, the primary motivation for playing video games among periodic gamers is different from the primary motivation of regular gamers: periodic gamers are driven by extrinsic motivation, whereas regular gamers are driven by intrinsic motivation. The pursuit of a challenge is the prevalent motivation reported by regular gamers of both genders. The Theory of Flow Experience, and the Attribution Theory have contributed to the understanding of why games may provide a safe medium, in which to learn about the consequences of actions through experience. Computer games may facilitate the development of self-monitoring and coping mechanisms. If the avoidance or escape from other activities is the primary motivation for playing video games, there tends to be an increased risk of engaging in addiction-related behaviours. This paper reports on the findings of previous research (into the motivations for playing computer games), and on industry reports containing data relating to gamer motivations. The aim is to build a picture of what motivates people to play computer games, and how motivation is associated with the main positive and negative aspects of computer gaming.",
"title": ""
},
{
"docid": "072d187f56635ebc574f2eedb8a91d14",
"text": "With the development of location-based social networks, an increasing amount of individual mobility data accumulate over time. The more mobility data are collected, the better we can understand the mobility patterns of users. At the same time, we know a great deal about online social relationships between users, providing new opportunities for mobility prediction. This paper introduces a noveltyseeking driven predictive framework for mining location-based social networks that embraces not only a bunch of Markov-based predictors but also a series of location recommendation algorithms. The core of this predictive framework is the cooperation mechanism between these two distinct models, determining the propensity of seeking novel and interesting locations.",
"title": ""
}
] |
scidocsrr
|
4c174de690a0c5e265cac8822dabbd72
|
Analysis and Observations From the First Amazon Picking Challenge
|
[
{
"docid": "f4abfe0bb969e2a6832fa6317742f202",
"text": "We built a highly compliant, underactuated, robust and at the same time dexterous anthropomorphic hand. We evaluate its dexterous grasping capabilities by implementing the comprehensive Feix taxonomy of human grasps and by assessing the dexterity of its opposable thumb using the Kapandji test. We also illustrate the hand’s payload limits and demonstrate its grasping capabilities in real-world grasping experiments. To support our claim that compliant structures are beneficial for dexterous grasping, we compare the dimensionality of control necessary to implement the diverse grasp postures with the dimensionality of the grasp postures themselves. We find that actuation space is smaller than posture space and explain the difference with the mechanic interaction between hand and grasped object. Additional desirable properties are derived from using soft robotics technology: the hand is robust to impact and blunt collisions, inherently safe, and not affected by dirt, dust, or liquids. Furthermore, the hand is simple and inexpensive to manufacture.",
"title": ""
}
] |
[
{
"docid": "7a86d9e19930ce5af78431a52bb75728",
"text": "Mapping Relational Databases (RDB) to RDF is an active field of research. The majority of data on the current Web is stored in RDBs. Therefore, bridging the conceptual gap between the relational model and RDF is needed to make the data available on the Semantic Web. In addition, recent research has shown that Semantic Web technologies are useful beyond the Web, especially if data from different sources has to be exchanged or integrated. Many mapping languages and approaches were explored leading to the ongoing standardization effort of the World Wide Web Consortium (W3C) carried out in the RDB2RDF Working Group (WG). The goal and contribution of this paper is to provide a feature-based comparison of the state-of-the-art RDB-to-RDF mapping languages. It should act as a guide in selecting a RDB-to-RDF mapping language for a given application scenario and its requirements w.r.t. mapping features. Our comparison framework is based on use cases and requirements for mapping RDBs to RDF as identified by the RDB2RDF WG. We apply this comparison framework to the state-of-the-art RDB-to-RDF mapping languages and report the findings in this paper. As a result, our classification proposes four categories of mapping languages: direct mapping, read-only general-purpose mapping, read-write general-purpose mapping, and special-purpose mapping. We further provide recommendations for selecting a mapping language.",
"title": ""
},
{
"docid": "058a0e93ef468685a535c6e2a25434fc",
"text": "OFDM (Orthogonal Frequency Division Multiplexing) has been widely adopted for high data rate wireless communication systems due to its advantages such as extraordinary spectral efficiency, robustness to channel fading and better QoS (Quality of Service) performance for multiple users. However, some challenging issues are still unresolved in OFDM systems. One of the issues is the high PAPR (peak-toaverage power ratio), which results in nonlinearity in power amplifiers, and causes out of band radiation and in band distortion. This paper reviews some conventional PAPR reduction techniques and their modifications to achieve better PAPR performance. Advantages and disadvantages of each technique are discussed in detail. And comparisons between different techniques are also presented. Finally, this paper makes a prospect forecast about the direction for further researches in the area of PAPR reduction for OFDM signals.",
"title": ""
},
{
"docid": "01472364545392cad69b9c7e1f65f4bb",
"text": "The designing of power transmission network is a difficult task due to the complexity of power system. Due to complexity in the power system there is always a loss of the stability due to the fault. Whenever a fault is intercepted in system, the whole system goes to severe transients. These transients cause oscillation in phase angle which leads poor power quality. The nature of oscillation is increasing instead being sustained, which leads system failure in form of generator damage. To reduce and eliminate the unstable oscillations one needs to use a stabilizer which can generate a perfect compensatory signal in order to minimize the harmonics generated due to instability. This paper presents a Power System stabilizer to reduce oscillations due to small signal disturbance. Additionally, a hybrid approach is proposed using FOPID stabilizer with the PSS connected SMIB. Genetic algorithm (GA), Particle swarm optimization (PSO) and Grey Wolf Optimization (GWO) are used for the parameter tuning of the stabilizer. Reason behind the use of GA, PSO and GWO instead of conventional methods is that it search the parameter heuristically, which leads better results. The efficiency of proposed approach is observed by rotor angle and power angle deviations in the SMIB system.",
"title": ""
},
{
"docid": "a5f3b862a02fb26fa7b96ad0a10e762a",
"text": "Thesis for the degree of Doctor of Science (Technology) to be presented with due permission for the public examination and criticism in the Auditorium 1382 at High dynamic performance of an electric motor is a fundamental prerequisite in motion control applications, also known as servo drives. Recent developments in the field of microprocessors and power electronics have enabled faster and faster movements with an electric motor. In such a dynamically demanding application, the dimensioning of the motor differs substantially from the industrial motor design, where feasible characteristics of the motor are for example high efficiency, a high power factor, and a low price. In motion control instead, such characteristics as high overloading capability, high-speed operation, high torque density and low inertia are required. The thesis investigates how the dimensioning of a high-performance servomotor differs from the dimensioning of industrial motors. The two most common servomotor types are examined; an induction motor and a permanent magnet synchronous motor. The suitability of these two motor types in dynamically demanding servo applications is assessed, and the design aspects that optimize the servo characteristics of the motors are analyzed. Operating characteristics of a high performance motor are studied, and some methods for improvements are suggested. The main focus is on the induction machine, which is frequently compared to the permanent magnet synchronous motor. A 4 kW prototype induction motor was designed and manufactured for the verification of the simulation results in the laboratory conditions. Also a dynamic simulation model for estimating the thermal behaviour of the induction motor in servo applications was constructed. The accuracy of the model was improved by coupling it with the electromagnetic motor model in order to take into account the variations in the motor electromagnetic characteristics due to the temperature rise.",
"title": ""
},
{
"docid": "231287a073198d45375dae8856c36572",
"text": "We consider a setting in which two firms compete to spread rumors in a social network. Firms seed their rumors simultaneously and rumors propagate according to the linear threshold model. Consumers have (randomly drawn) heterogeneous thresholds for each product. Using the concept of cascade centrality introduced by [6], we provide a sharp characterization of networks in which games admit purestrategy Nash equilibria (PSNE). We provide tight bounds for the efficiency of these equilibria and for the inequality in firms' equilibrium payoffs. When the network is a tree, the model is particularly tractable.",
"title": ""
},
{
"docid": "53267e7e574dce749bb3d5877640e017",
"text": "After a decline in enthusiasm for national community health worker (CHW) programmes in the 1980s, these have re-emerged globally, particularly in the context of HIV. This paper examines the case of South Africa, where there has been rapid growth of a range of lay workers (home-based carers, lay counsellors, DOT supporters etc.) principally in response to an expansion in budgets and programmes for HIV, most recently the rollout of antiretroviral therapy (ART). In 2004, the term community health worker was introduced as the umbrella concept for all the community/lay workers in the health sector, and a national CHW Policy Framework was adopted. We summarize the key features of the emerging national CHW programme in South Africa, which include amongst others, their integration into a national public works programme and the use of non-governmental organizations as intermediaries. We then report on experiences in one Province, Free State. Over a period of 2 years (2004--06), we made serial visits on three occasions to the first 16 primary health care facilities in this Province providing comprehensive HIV services, including ART. At each of these visits, we did inventories of CHW numbers and training, and on two occasions conducted facility-based group interviews with CHWs (involving a total of 231 and 182 participants, respectively). We also interviewed clinic nurses tasked with supervising CHWs. From this evaluation we concluded that there is a significant CHW presence in the South African health system. This infrastructure, however, shares many of the managerial challenges (stability, recognition, volunteer vs. worker, relationships with professionals) associated with previous national CHW programmes, and we discuss prospects for sustainability in the light of the new policy context.",
"title": ""
},
{
"docid": "8c07982729ca439c8e346cbe018a7198",
"text": "The need for diversification manifests in various recommendation use cases. In this work, we propose a novel approach to diversifying a list of recommended items, which maximizes the utility of the items subject to the increase in their diversity. From a technical perspective, the problem can be viewed as maximization of a modular function on the polytope of a submodular function, which can be solved optimally by a greedy method. We evaluate our approach in an offline analysis, which incorporates a number of baselines and metrics, and in two online user studies. In all the experiments, our method outperforms the baseline methods.",
"title": ""
},
{
"docid": "7d6e19b5a6db447ab4ea5df012d43da9",
"text": "The ARPANET routing metric was revised in July 1987, resulting in substantial performance improvements, especially in terms of user delay and effective network capacity. These revisions only affect the individual link costs (or metrics) on which the PSN (packet switching node) bases its routing decisions. They do not affect the SPF (“shortest path first”) algorithm employed to compute routes (installed in May 1979). The previous link metric was packet delay averaged over a ten second interval, which performed effectively under light-to-moderate traffic conditions. However, in heavily loaded networks it led to routing instabilities and wasted link and processor bandwidth.\nThe revised metric constitutes a move away from the strict delay metric: it acts similar to a delay-based metric under lightly loads and to a capacity-based metric under heavy loads. It will not always result in shortest-delay paths. Since the delay metric produced shortest-delay paths only under conditions of light loading, the revised metric involves giving up the guarantee of shortest-delay paths under light traffic conditions for the sake of vastly improved performance under heavy traffic conditions.",
"title": ""
},
{
"docid": "9b451aa93627d7b44acc7150a1b7c2d0",
"text": "BACKGROUND\nAerobic endurance exercise has been shown to improve higher cognitive functions such as executive control in healthy subjects. We tested the hypothesis that a 30-minute individually customized endurance exercise program has the potential to enhance executive functions in patients with major depressive disorder.\n\n\nMETHOD\nIn a randomized within-subject study design, 24 patients with DSM-IV major depressive disorder and 10 healthy control subjects performed 30 minutes of aerobic endurance exercise at 2 different workload levels of 40% and 60% of their predetermined individual 4-mmol/L lactic acid exercise capacity. They were then tested with 4 standardized computerized neuropsychological paradigms measuring executive control functions: the task switch paradigm, flanker task, Stroop task, and GoNogo task. Performance was measured by reaction time. Data were gathered between fall 2000 and spring 2002.\n\n\nRESULTS\nWhile there were no significant exercise-dependent alterations in reaction time in the control group, for depressive patients we observed a significant decrease in mean reaction time for the congruent Stroop task condition at the 60% energy level (p = .016), for the incongruent Stroop task condition at the 40% energy level (p = .02), and for the GoNogo task at both energy levels (40%, p = .025; 60%, p = .048). The exercise procedures had no significant effect on reaction time in the task switch paradigm or the flanker task.\n\n\nCONCLUSION\nA single 30-minute aerobic endurance exercise program performed by depressed patients has positive effects on executive control processes that appear to be specifically subserved by the anterior cingulate.",
"title": ""
},
{
"docid": "d8badd23313c7ea4baa0231ff1b44e32",
"text": "Current state-of-the-art solutions for motion capture from a single camera are optimization driven: they optimize the parameters of a 3D human model so that its re-projection matches measurements in the video (e.g. person segmentation, optical flow, keypoint detections etc.). Optimization models are susceptible to local minima. This has been the bottleneck that forced using clean green-screen like backgrounds at capture time, manual initialization, or switching to multiple cameras as input resource. In this work, we propose a learning based motion capture model for single camera input. Instead of optimizing mesh and skeleton parameters directly, our model optimizes neural network weights that predict 3D shape and skeleton configurations given a monocular RGB video. Our model is trained using a combination of strong supervision from synthetic data, and self-supervision from differentiable rendering of (a) skeletal keypoints, (b) dense 3D mesh motion, and (c) human-background segmentation, in an end-to-end framework. Empirically we show our model combines the best of both worlds of supervised learning and test-time optimization: supervised learning initializes the model parameters in the right regime, ensuring good pose and surface initialization at test time, without manual effort. Self-supervision by back-propagating through differentiable rendering allows (unsupervised) adaptation of the model to the test data, and offers much tighter fit than a pretrained fixed model. We show that the proposed model improves with experience and converges to low-error solutions where previous optimization methods fail.",
"title": ""
},
{
"docid": "29e702298bb2daefa1c419610d0e35f1",
"text": "ISPs are increasingly selling \"tiered\" contracts, which offer Internet connectivity to wholesale customers in bundles, at rates based on the cost of the links that the traffic in the bundle is traversing. Although providers have already begun to implement and deploy tiered pricing contracts, little is known about how to structure them. While contracts that sell connectivity on finer granularities improve market efficiency, they are also more costly for ISPs to implement and more difficult for customers to understand. Our goal is to analyze whether current tiered pricing practices in the wholesale transit market yield optimal profits for ISPs and whether better bundling strategies might exist. In the process, we deliver two contributions: 1) we develop a novel way of mapping traffic and topology data to a demand and cost model, and 2) we fit this model on three large real-world networks: an European transit ISP, a content distribution network, and an academic research network, and run counterfactuals to evaluate the effects of different bundling strategies. Our results show that the common ISP practice of structuring tiered contracts according to the cost of carrying the traffic flows (e.g., offering a discount for traffic that is local) can be suboptimal and that dividing contracts based on both traffic demand and the cost of carrying it into only three or four tiers yields near-optimal profit for the ISP.",
"title": ""
},
{
"docid": "74e4d1886594ecce6d60861bec6ac3d8",
"text": "From small voltage regulators to large motor drives, power electronics play a very important role in present day technology. The power electronics market is currently dominated by silicon based devices. However due to inherent limitations of silicon material they are approaching thermal limit in terms of high power and high temperature operation. Performance can only be improved with the development of new power devices with better material properties. Silicon Carbide devices are now gaining popularity as next generation semiconductor devices. Due to its inherent material properties such as high breakdown field, wide band gap, high electron saturation velocity, and high thermal conductivity, they serve as a better alternative to the silicon counterparts. Here an attempt is made to study the unique properties of SiC MOSFET and requirements for designing a gate drive circuit for the same. The switching characteristics of SCH2080KE are analyzed using LTspice by performing double pulse test. Also driver circuit is designed for SiC MOSFET SCH2080KE and its performance is tested by implementing a buck converter.",
"title": ""
},
{
"docid": "f4616ce19907f8502fb7520da68c6852",
"text": "Mining outliers in database is to find exceptional objects that deviate from the rest of the data set. Besides classical outlier analysis algorithms, recent studies have focused on mining local outliers, i.e., the outliers that have density distribution significantly different from their neighborhood. The estimation of density distribution at the location of an object has so far been based on the density distribution of its k-nearest neighbors [2, 11]. However, when outliers are in the location where the density distributions in the neighborhood are significantly different, for example, in the case of objects from a sparse cluster close to a denser cluster, this may result in wrong estimation. To avoid this problem, here we propose a simple but effective measure on local outliers based on a symmetric neighborhood relationship. The proposed measure considers both neighbors and reverse neighbors of an object when estimating its density distribution. As a result, outliers so discovered are more meaningful. To compute such local outliers efficiently, several mining algorithms are developed that detects top-n outliers based on our definition. A comprehensive performance evaluation and analysis shows that our methods are not only efficient in the computation but also more effective in ranking outliers.",
"title": ""
},
{
"docid": "cce107dc268b2388e301f64718de1463",
"text": "The training of convolutional neural networks for image recognition usually requires large image datasets to produce favorable results. Those large datasets can be acquired by web crawlers that accumulate images based on keywords. Due to the nature of data in the web, these image sets display a broad variation of qualities across the contained items. In this work, a filtering approach for noisy datasets is proposed, utilizing a smaller trusted dataset. Hereby a convolutional neural network is trained on the trusted dataset and then used to construct a filtered subset from the noisy datasets. The methods described in this paper were applied to plant image classification and the created models have been submitted to the PlantCLEF 2017 competition.",
"title": ""
},
{
"docid": "6c8cb7160add3e0a21957b9e6ec8ecdd",
"text": "While innovation processes toward sustainable development (eco-innovations) have received increasing attention during the past years, theoretical and methodological approaches to analyze these processes are poorly developed. Against this background, the term eco-innovation is introduced in this paper addressing explicitly three kinds of changes towards sustainable development: technological, social and institutional innovation. Secondly, the potential contribution of neoclassical and (co-)evolutionary approaches from environmental and innovation economics to eco-innovation research is discussed. Three peculiarities of eco-innovation are identified: the double externality problem, the regulatory push/pull effect and the increasing importance of social and institutional innovation. While the first two are widely ignored in innovation economics, the third is at the least not elaborated appropriately. The consideration of these peculiarities may help to overcome market failure by establishing a specific eco-innovation policy and to avoid a ‘technology bias’ through a broader understanding of innovation. Finally, perspectives for a specific contribution of ecological economics to eco-innovation research are drawn. It is argued that methodological pluralism as established in ecological economics would be very beneficial for eco-innovation research. A theoretical framework integrating elements from both neoclassical and evolutionary approaches should be pursued in order to consider the complexity of factors influencing innovation decisions as well as the specific role of regulatory instruments. And the experience gathered in ecological economics integrating ecological, social and economic aspects of sustainable development is highly useful for opening up innovation research to social and institutional changes. © 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "9e804b49534bedcde2611d70c40b255d",
"text": "PURPOSE\nScreening tool of older people's prescriptions (STOPP) and screening tool to alert to right treatment (START) criteria were first published in 2008. Due to an expanding therapeutics evidence base, updating of the criteria was required.\n\n\nMETHODS\nWe reviewed the 2008 STOPP/START criteria to add new evidence-based criteria and remove any obsolete criteria. A thorough literature review was performed to reassess the evidence base of the 2008 criteria and the proposed new criteria. Nineteen experts from 13 European countries reviewed a new draft of STOPP & START criteria including proposed new criteria. These experts were also asked to propose additional criteria they considered important to include in the revised STOPP & START criteria and to highlight any criteria from the 2008 list they considered less important or lacking an evidence base. The revised list of criteria was then validated using the Delphi consensus methodology.\n\n\nRESULTS\nThe expert panel agreed a final list of 114 criteria after two Delphi validation rounds, i.e. 80 STOPP criteria and 34 START criteria. This represents an overall 31% increase in STOPP/START criteria compared with version 1. Several new STOPP categories were created in version 2, namely antiplatelet/anticoagulant drugs, drugs affecting, or affected by, renal function and drugs that increase anticholinergic burden; new START categories include urogenital system drugs, analgesics and vaccines.\n\n\nCONCLUSION\nSTOPP/START version 2 criteria have been expanded and updated for the purpose of minimizing inappropriate prescribing in older people. These criteria are based on an up-to-date literature review and consensus validation among a European panel of experts.",
"title": ""
},
{
"docid": "1bb5e01e596d09e4ff89d7cb864ff205",
"text": "A number of recent approaches have used deep convolutional neural networks (CNNs) to build texture representations. Nevertheless, it is still unclear how these models represent texture and invariances to categorical variations. This work conducts a systematic evaluation of recent CNN-based texture descriptors for recognition and attempts to understand the nature of invariances captured by these representations. First we show that the recently proposed bilinear CNN model [25] is an excellent generalpurpose texture descriptor and compares favorably to other CNN-based descriptors on various texture and scene recognition benchmarks. The model is translationally invariant and obtains better accuracy on the ImageNet dataset without requiring spatial jittering of data compared to corresponding models trained with spatial jittering. Based on recent work [13, 28] we propose a technique to visualize pre-images, providing a means for understanding categorical properties that are captured by these representations. Finally, we show preliminary results on how a unified parametric model of texture analysis and synthesis can be used for attribute-based image manipulation, e.g. to make an image more swirly, honeycombed, or knitted. The source code and additional visualizations are available at http://vis-www.cs.umass.edu/texture.",
"title": ""
},
{
"docid": "44e310ba974f371605f6b6b6cd0146aa",
"text": "This section is a collection of shorter “Issue and Opinions” pieces that address some of the critical challenges around the evolution of digital business strategy. These voices and visions are from thought leaders who, in addition to their scholarship, have a keen sense of practice. They outline through their opinion pieces a series of issues that will need attention from both research and practice. These issues have been identified through their observation of practice with the eye of a scholar. They provide fertile opportunities for scholars in information systems, strategic management, and organizational theory.",
"title": ""
},
{
"docid": "83d7e1d26b81be3284cf2feddabc3fa5",
"text": "The impact of Jean Piaget’s account of cognitive development (e.g. Piaget, 1954; Piaget & Inhelder, 2000) on developmental psychology is practically incalculable (Flavell, 1996; Miller, 1993; Slater, Hocking & Loose, 2003) but the latter part of the 20 th Century saw his theories under attack from several quarters. What follows is a discussion of challenges to his account of the development of the object concept.",
"title": ""
},
{
"docid": "a8b7d6b3a43d39c8200e7787c3d58a0e",
"text": "Being Scrum the agile software development framework most commonly used in the software industry, its applicability is attracting great attention to the academia. That is why this topic is quite often included in computer science and related university programs. In this article, we present a course design of a Software Engineering course where an educational framework and an open-source agile project management tool were used to develop real-life projects by undergraduate students. During the course, continuous guidance was given by the teaching staff to facilitate the students' learning of Scrum. Results indicate that students find it easy to use the open-source tool and helpful to apply Scrum to a real-life project. However, the unavailability of the client and conflicts among the team members have negative impact on the realization of projects. The guidance given to students along the course helped identify five common issues faced by students through the learning process.",
"title": ""
}
] |
scidocsrr
|
643ddb76194257af654dbf50a3792357
|
OBJECTIVE VIDEO QUALITY ASSESSMENT
|
[
{
"docid": "6d0f0c11710945f49cc319b25aa5e9d2",
"text": "A computational approach for analyzing visible textures is described. Textures are modeled as irradiance patterns containing a limited range of spatial frequencies, where mutually distinct textures differ significantly in their dominant characterizing frequencies. By encoding images into multiple narrow spatial frequency and orientation channels, the slowly-varying channel envelopes (amplitude and phase) are used to segregate textural regions of different spatial frequency, orientation, or phase characteristics. Thus, an interpretation of image texture as a region code, or currier of region information, is",
"title": ""
}
] |
[
{
"docid": "65f4b9a23983e3416014167e52cdf064",
"text": "A soft-switching bidirectional dc-dc converter (BDC) with a coupled-inductor and a voltage doubler cell is proposed for high step-up/step-down voltage conversion applications. A dual-active half-bridge (DAHB) converter is integrated into a conventional buck-boost BDC to extend the voltage gain dramatically and decrease switch voltage stresses effectively. The coupled inductor operates not only as a filter inductor of the buck-boost BDC, but also as a transformer of the DAHB converter. The input voltage of the DAHB converter is shared with the output of the buck-boost BDC. So, PWM control can be adopted to the buck-boost BDC to ensure that the voltage on the two sides of the DAHB converter is always matched. As a result, the circulating current and conduction losses can be lowered to improve efficiency. Phase-shift control is adopted to the DAHB converter to regulate the power flows of the proposed BDC. Moreover, zero-voltage switching (ZVS) is achieved for all the active switches to reduce the switching losses. The operational principles and characteristics of the proposed BDC are presented in detail. The analysis and performance have been fully validated experimentally on a 40-60 V/400 V 1-kW hardware prototype.",
"title": ""
},
{
"docid": "e93f4f5c5828a7e82819964bbd29f8d4",
"text": "BACKGROUND\nAlthough hyaluronic acid (HA) specifications such as molecular weight and particle size are fairly well characterized, little information about HA ultrastructural and morphologic characteristics has been reported in clinical literature.\n\n\nOBJECTIVE\nTo examine uniformity of HA structure, the effects of extrusion, and lidocaine dilution of 3 commercially available HA soft-tissue fillers.\n\n\nMATERIALS AND METHODS\nUsing scanning electron microscopy and energy-dispersive x-ray analysis, investigators examined the soft-tissue fillers at various magnifications for ultrastructural detail and elemental distributions.\n\n\nRESULTS\nAll HAs contained oxygen, carbon, and sodium, but with uneven distributions. Irregular particulate matter was present in RES but BEL and JUV were largely particle free. Spacing was more uniform in BEL than JUV and JUV was more uniform than RES. Lidocaine had no apparent effect on morphology; extrusion through a 30-G needle had no effect on ultrastructure.\n\n\nCONCLUSION\nDescriptions of the ultrastructural compositions and nature of BEL, JUV, and RES are helpful for matching the areas to be treated with the HA soft-tissue filler architecture. Lidocaine and extrusion through a 30-G needle exerted no influence on HA structure. Belotero Balance shows consistency throughout the syringe and across manufactured lots.",
"title": ""
},
{
"docid": "f044e06469bfe2bf362d04b69aa52344",
"text": "5G network is anticipated to meet the challenging requirements of mobile traffic in the 2020's, which are characterized by super high data rate, low latency, high mobility, high energy efficiency, and high traffic density. This paper provides an overview of China Mobile's 5G vision and potential solutions. Targeting a paradigm shift to user-centric network operation from the traditional cell-centric operation, 5G radio access network (RAN) design considerations are presented, including RAN restructure, Turbo charged edge, core network (CN) and RAN function repartition, and network slice as a service. Adaptive multiple connections in the user-centric operation is further investigated, where the decoupled downlink and uplink, decoupled control and data, and adaptive multiple connections provide sufficient means to achieve a 5G network with “no more cells.” Software-defined air interface (SDAI) is presented under a unified framework, in which the frame structure, waveform, multiple access, duplex mode, and antenna configuration can be adaptively configured. New paradigm of 5G network featuring user-centric network (UCN) and SDAI is needed to meet the diverse yet extremely stringent requirements across the broad scope of 5G scenarios.",
"title": ""
},
{
"docid": "fb7f079d104e81db41b01afe67cdf3b0",
"text": "In this paper, we address natural human-robot interaction (HRI) in a smart assisted living (SAIL) system for the elderly and the disabled. Two common HRI problems are studied: hand gesture recognition and daily activity recognition. For hand gesture recognition, we implemented a neural network for gesture spotting and a hierarchical hidden Markov model for context-based recognition. For daily activity recognition, a multisensor fusion scheme is developed to process motion data collected from the foot and the waist of a human subject. Experiments using a prototype wearable sensor system show the effectiveness and accuracy of our algorithms.",
"title": ""
},
{
"docid": "01cf7cb5dd78d5f7754e1c31da9a9eb9",
"text": "Today ́s Electronic Industry is changing at a high pace. The root causes are manifold. So world population is growing up to eight billions and gives new challenges in terms of urbanization, mobility and connectivity. Consequently, there will raise up a lot of new business models for the electronic industry. Connectivity will take a large influence on our lives. Concepts like Industry 4.0, internet of things, M2M communication, smart homes or communication in or to cars are growing up. All these applications are based on the same demanding requirement – a high amount of data and increased data transfer rate. These arguments bring up large challenges to the Printed Circuit Board (PCB) design and manufacturing. This paper investigates the impact of different PCB manufacturing technologies and their relation to their high frequency behavior. In the course of the paper a brief overview of PCB manufacturing capabilities is be presented. Moreover, signal losses in terms of frequency, design, manufacturing processes, and substrate materials are investigated. The aim of this paper is, to develop a concept to use materials in combination with optimized PCB manufacturing processes, which allows a significant reduction of losses and increased signal quality. First analysis demonstrate, that for increased signal frequency, demanded by growing data transfer rate, the capabilities to manufacture high frequency PCBs become a key factor in terms of losses. Base materials with particularly high speed properties like very low dielectric constants are used for efficient design of high speed data link lines. Furthermore, copper foils with very low treatment are to be used to minimize loss caused by the skin effect. In addition to the materials composition, the design of high speed circuits is optimized with the help of comprehensive simulations studies. The work on this paper focuses on requirements and main questions arising during the PCB manufacturing process in order to improve the system in terms of losses. For that matter, there are several approaches that can be used. For example, the optimization of the structuring process, the use of efficient interconnection capabilities, and dedicated surface finishing can be used to reduce losses and preserve signal integrity. In this study, a comparison of different PCB manufacturing processes by using measurement results of demonstrators that imitate real PCB applications will be discussed. Special attention has be drawn to the manufacturing capabilities which are optimized for high frequency requirements and focused to avoid signal loss. Different line structures like microstrip lines, coplanar waveguides, and surface integrated waveguides are used for this assessment. This research was carried out by Austria Technologie & Systemtechnik AG (AT&S AG), in cooperation with Vienna University of Technology, Institute of Electrodynamics, Microwave and Circuit Engineering. Introduction Several commercially available PCB fabrication processes exist for manufacturing PCBs. In this paper two methods, pattern plating and panel plating, were utilized for manufacturing the test samples. The first step in both described manufacturing processes is drilling, which allows connections in between different copper layers. The second step for pattern plating (see figure 1) is the flash copper plating process, wherein only a thin copper skin (flash copper) is plated into the drilled holes and over the entire surface. 
On top of the plated copper a layer of photosensitive etch resist is laminated which is imaged subsequently by ultraviolet (UV) light with a negative film. Negative film imaging is exposing the gaps in between the traces to the UV light. In developing process the non-exposed dry film is removed with a sodium solution. After that, the whole surrounding space is plated with copper and is eventually covered by tin. The tin layer protects the actual circuit pattern during etching. The pattern plating process shows typically a smaller line width tolerance, compared to panel plating, because of a lower copper thickness before etching. The overall process tolerance for narrow dimensions in the order of several tenths of μm is approximately ± 10%. As originally published in the IPC APEX EXPO Conference Proceedings.",
"title": ""
},
{
"docid": "04d9bc52997688b48e70e91a43a145ef",
"text": "Post-weaning social isolation (PSI) has been shown to increase aggressive behavior and alter medial prefrontal cortex (mPFC) function in social species such as rats. Here we developed a novel escapable social interaction test (ESIT) allowing for the quantification of escape and social behaviors in addition to mPFC activation in response to an aggressive or nonaggressive stimulus rat. Male rats were exposed to 3 weeks of PSI (ISO) or group (GRP) housing, and exposed to 3 trials, with either no trial, all trials, or the last trial only with a stimulus rat. Analysis of social behaviors indicated that ISO rats spent less time in the escape chamber and more time engaged in social interaction, aggressive grooming, and boxing than did GRP rats. Interestingly, during the third trial all rats engaged in more of the quantified social behaviors and spent less time escaping in response to aggressive but not nonaggressive stimulus rats. Rats exposed to nonaggressive stimulus rats on the third trial had greater c-fos and ARC immunoreactivity in the mPFC than those exposed to an aggressive stimulus rat. Conversely, a social encounter produced an increase in large PSD-95 punctae in the mPFC independently of trial number, but only in ISO rats exposed to an aggressive stimulus rat. The results presented here demonstrate that PSI increases interaction time and aggressive behaviors during escapable social interaction, and that the aggressiveness of the stimulus rat in a social encounter is an important component of behavioral and neural outcomes for both isolation and group-reared rats.",
"title": ""
},
{
"docid": "4ee641270c1679675a7b563245f41f73",
"text": "MLC STT-MRAM (Multi-level Cell Spin-Transfer Torque Magnetic RAM), an emerging non-volatile memory technology, has become a promising candidate to construct L2 caches for high-end embedded processors. However, the long write latency limits the effectiveness of MLC STT-MRAM based L2 caches. In this paper, we address this limitation with two novel designs: Line Pairing (LP) and Line Swapping (LS). LP forms fast cachelines by re-organizing MLC soft bits which are faster to write. LS dynamically stores frequently written data into these fast cachelines. Our experimental results show that LP and LS improve system performance by 15% and reduce energy consumption by 21%.",
"title": ""
},
{
"docid": "881b8f167ea9d9d943a48a9d3f6c1264",
"text": "This paper presents an application of recurrent networks for phone probability estimation in large vocabulary speech recognition. The need for efficient exploitation of context information is discussed; a role for which the recurrent net appears suitable. An overview of early developments of recurrent nets for phone recognition is given along with the more recent improvements that include their integration with Markov models. Recognition results are presented for the DARPA TIMIT and Resource Management tasks, and it is concluded that recurrent nets are competitive with traditional means for performing phone probability estimation.",
"title": ""
},
{
"docid": "252f7393393a7ef16eda8388d601ef00",
"text": "In computer vision, moving object detection and tracking methods are the most important preliminary steps for higher-level video analysis applications. In this frame, background subtraction (BS) method is a well-known method in video processing and it is based on frame differencing. The basic idea is to subtract the current frame from a background image and to classify each pixel either as foreground or background by comparing the difference with a threshold. Therefore, the moving object is detected and tracked by using frame differencing and by learning an updated background model. In addition, simulated annealing (SA) is an optimization technique for soft computing in the artificial intelligence area. The p-median problem is a basic model of discrete location theory of operational research (OR) area. It is a NP-hard combinatorial optimization problem. The main aim in the p-median problem is to find p number facility locations, minimize the total weighted distance between demand points (nodes) and the closest facilities to demand points. The SA method is used to solve the p-median problem as a probabilistic metaheuristic. In this paper, an SA-based hybrid method called entropy-based SA (EbSA) is developed for performance optimization of BS, which is used to detect and track object(s) in videos. The SA modification to the BS method (SA–BS) is proposed in this study to determine the optimal threshold for the foreground-background (i.e., bi-level) segmentation and to learn background model for object detection. At these segmentation and learning stages, all of the optimization problems considered in this study are taken as p-median problems. Performances of SA–BS and regular BS methods are measured using four videoclips. Therefore, these results are evaluated quantitatively as the overall results of the given method. The obtained performance results and statistical analysis (i.e., Wilcoxon median test) show that our proposed method is more preferable than regular BS method. Meanwhile, the contribution of this",
"title": ""
},
{
"docid": "933312292c64c916e69357c5aec42189",
"text": "Augmented reality annotations and virtual scene navigation add new dimensions to remote collaboration. In this paper, we present a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live remote collaboration. Two main focuses of this work are (1) automatically inferring depth for 2D drawings in 3D space, for which we evaluate four possible alternatives, and (2) gesture-based virtual navigation designed specifically to incorporate constraints arising from partially modeled remote scenes. We evaluate these elements via qualitative user studies, which in addition provide insights regarding the design of individual visual feedback elements and the need to visualize the direction of drawings.",
"title": ""
},
{
"docid": "e9f05136c60328f8b87cf51621c93a4b",
"text": "Accurate and timely detection of weeds between and within crop rows in the early growth stage is considered one of the main challenges in site-specific weed management (SSWM). In this context, a robust and innovative automatic object-based image analysis (OBIA) algorithm was developed on Unmanned Aerial Vehicle (UAV) images to design early post-emergence prescription maps. This novel algorithm makes the major contribution. The OBIA algorithm combined Digital Surface Models (DSMs), orthomosaics and machine learning techniques (Random Forest, RF). OBIA-based plant heights were accurately estimated and used as a feature in the automatic sample selection by the RF classifier; this was the second research contribution. RF randomly selected a class balanced training set, obtained the optimum features values and classified the image, requiring no manual training, making this procedure time-efficient and more accurate, since it removes errors due to a subjective manual task. The ability to discriminate weeds was significantly affected by the imagery spatial resolution and weed density, making the use of higher spatial resolution images more suitable. Finally, prescription maps for in-season post-emergence SSWM were created based on the weed maps—the third research contribution—which could help farmers in decision-making to optimize crop management by rationalization of the herbicide application. The short time involved in the process (image capture and analysis) would allow timely weed control during critical periods, crucial for preventing yield loss.",
"title": ""
},
{
"docid": "4cd8a9f4dbe713be59b540968b5114f7",
"text": "ConvNets and ImageNet have driven the recent success of deep learning for image classification. However, the marked slowdown in performance improvement combined with the lack of robustness of neural networks to adversarial examples and their tendency to exhibit undesirable biases question the reliability of these methods. This work investigates these questions from the perspective of the end-user by using human subject studies and explanations. The contribution of this study is threefold. We first experimentally demonstrate that the accuracy and robustness of ConvNets measured on ImageNet are vastly underestimated. Next, we show that explanations can mitigate the impact of misclassified adversarial examples from the perspective of the end-user. We finally introduce a novel tool for uncovering the undesirable biases learned by a model. These contributions also show that explanations are a valuable tool both for improving our understanding of ConvNets’ predictions and for designing more reliable models.",
"title": ""
},
{
"docid": "b1599614c7d91462d05d35808d7e2983",
"text": "Hyponatremia and hypernatremia are complex clinical problems that occur frequently in full term newborns and in preterm infants admitted to the Neonatal Intensive Care Unit (NICU) although their real frequency and etiology are incompletely known. Pathogenetic mechanisms and clinical timing of hypo-hypernatremia are well known in adult people whereas in the newborn is less clear how and when hypo-hypernatremia could alter cerebral osmotic equilibrium and after how long time brain cells adapt themselves to the new hypo-hypertonic environment. Aim of this review is to present a practical approach and management of hypo-hypernatremia in newborns, especially in preterms.",
"title": ""
},
{
"docid": "05941fa5fe1d7728d9bce44f524ff17f",
"text": "legend N2D N1D 2LPEG N2D vs. 2LPEG N1D vs. 2LPEG EFFICACY Primary analysis set, n1⁄4 275 Primary analysis set, n1⁄4 275 Primary analysis set, n1⁄4 272 Primary endpoint: Patients with successful overall bowel cleansing efficacy (HCS) [n] 253 (92.0%) 245 (89.1%) 238 (87.5%) -4.00%* [0.055] -6.91%* [0.328] Supportive secondary endpoint: Patients with successful overall bowel cleansing efficacy (BBPS) [n] 249 (90.5%) 243 (88.4%) 232 (85.3%) n.a. n.a. Primary endpoint: Excellent plus Good cleansing rate in colon ascendens (primary analysis set) [n] 87 (31.6%) 93 (33.8%) 41 (15.1%) 8.11%* [50.001] 10.32%* [50.001] Key secondary endpoint: Adenoma detection rate, colon ascendens 11.6% 11.6% 8.1% -4.80%; 12.00%** [0.106] -4.80%; 12.00%** [0.106] Key secondary endpoint: Adenoma detection rate, overall colon 26.6% 27.6% 26.8% -8.47%; 8.02%** [0.569] -7.65%; 9.11%** [0.455] Key secondary endpoint: Polyp detection rate, colon ascendens 23.3% 18.6% 16.2% -1.41%; 15.47%** [0.024] -6.12%; 10.82%** [0.268] Key secondary endpoint: Polyp detection rate, overall colon 44.0% 45.1% 44.5% -8.85%; 8.00%** [0.579] –7.78%; 9.09%** [0.478] Compliance rates (min 75% of both doses taken) [n] 235 (85.5%) 233 (84.7%) 245 (90.1%) n.a. n.a. SAFETY Safety set, n1⁄4 262 Safety set, n1⁄4 269 Safety set, n1⁄4 263 All treatment-emergent adverse events [n] 77 89 53 n.a. n.a. Patients with any related treatment-emergent adverse event [n] 30 (11.5%) 40 (14.9%) 20 (7.6%) n.a. n.a. *1⁄4 97.5% 1-sided CI; **1⁄4 95% 2-sided CI; n.a.1⁄4 not applicable. United European Gastroenterology Journal 4(5S) A219",
"title": ""
},
{
"docid": "dfe82129fd128cc2e42f9ed8b3efc9c7",
"text": "In this paper we present a new lossless image compression algorithm. To achieve the high compression speed we use a linear prediction, modified Golomb–Rice code family, and a very fast prediction error modeling method. We compare the algorithm experimentally with others for medical and natural continuous tone grayscale images of depths of up to 16 bits. Its results are especially good for big images, for natural images of high bit depths, and for noisy images. The average compression speed on Intel Xeon 3.06 GHz CPU is 47 MB/s. For big images the speed is over 60MB/s, i.e., the algorithm needs less than 50 CPU cycles per byte of image.",
"title": ""
},
{
"docid": "d9471b93ddb5cedfeebd514f9ed6f9af",
"text": "Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied on unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection as well as in the life science and medical domain. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-funded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, computational effort, the impact of parameter settings as well as the global/local anomaly detection behavior is outlined. As a conclusion, we give an advise on algorithm selection for typical real-world tasks.",
"title": ""
},
{
"docid": "44618874fe7725890fbfe9fecde65853",
"text": "Software development teams in large scale offshore enterprise development programmes are often under intense pressure to deliver high quality software within challenging time contraints. Project failures can attract adverse publicity and damage corporate reputations. Agile methods have been advocated to reduce project risks, improving both productivity and product quality. This article uses practitioner descriptions of agile method tailoring to explore large scale offshore enterprise development programmes with a focus on product owner role tailoring, where the product owner identifies and prioritises customer requirements. In globalised projects, the product owner must reconcile competing business interests, whilst generating and then prioritising large numbers of requirements for numerous development teams. The study comprises eight international companies, based in London, Bangalore and Delhi. Interviews with 46 practitioners were conducted between February 2010 and May 2012. Grounded theory was used to identify that product owners form into teams. The main contribution of this research is to describe the nine product owner team functions identified: groom, prioritiser, release master, technical architect, governor, communicator, traveller, intermediary and risk assessor. These product owner functions arbitrate between conflicting customer requirements, approve release schedules, disseminate architectural design decisions, provide technical governance and propogate information across teams. The functions identified in this research are mapped to a scrum of scrums process, and a taxonomy of the functions shows how focusing on either decision-making or information dissemination in each helps to tailor agile methods to large scale offshore enterprise development programmes.",
"title": ""
},
{
"docid": "422183692a08138189271d4d7af407c7",
"text": "Scene flow describes the motion of 3D objects in real world and potentially could be the basis of a good feature for 3D action recognition. However, its use for action recognition, especially in the context of convolutional neural networks (ConvNets), has not been previously studied. In this paper, we propose the extraction and use of scene flow for action recognition from RGB-D data. Previous works have considered the depth and RGB modalities as separate channels and extract features for later fusion. We take a different approach and consider the modalities as one entity, thus allowing feature extraction for action recognition at the beginning. Two key questions about the use of scene flow for action recognition are addressed: how to organize the scene flow vectors and how to represent the long term dynamics of videos based on scene flow. In order to calculate the scene flow correctly on the available datasets, we propose an effective self-calibration method to align the RGB and depth data spatially without knowledge of the camera parameters. Based on the scene flow vectors, we propose a new representation, namely, Scene Flow to Action Map (SFAM), that describes several long term spatio-temporal dynamics for action recognition. We adopt a channel transform kernel to transform the scene flow vectors to an optimal color space analogous to RGB. This transformation takes better advantage of the trained ConvNets models over ImageNet. Experimental results indicate that this new representation can surpass the performance of state-of-the-art methods on two large public datasets.",
"title": ""
}
] |
scidocsrr
|
288bb9b51e2d6cf4ee6c7fbcffc650e8
|
Research Note - Gamification of Technology-Mediated Training: Not All Competitions Are the Same
|
[
{
"docid": "f4641f1aa8c2553bb41e55973be19811",
"text": "this paper focuses on employees’ e-learning processes during online job training. A new categorization of self-regulated learning strategies, that is, personal versus social learning strategies, is proposed, and measurement scales are developed. the new measures were tested using data collected from employees in a large company. Our approach provides context-relevant insights into online training providers and employees themselves. the results suggest that learners adopt different self-regulated learning strategies resulting in different e-learning outcomes. Furthermore, the use of self-regulated learning strategies is influenced by individual factors such as virtual competence and goal orientation, and job and contextual factors such as intellectual demand and cooperative norms. the findings can (1) help e-learners obtain better learning outcomes through their active use of varied learning strategies, (2) provide useful information for organizations that are currently using or plan to use e-learning 308 WAN, COMPEAu, AND hAggErty for training, and (3) inform software designers to integrate self-regulated learning strategy support in e-learning system design and development. Key WorDs anD phrases: e-learning, job training, learning outcomes, learning processes, self-regulated learning strategies, social cognitive theory. employee training has beCome an effeCtive Way to enhance organizational productivity. It is even more important today given the fast-changing nature of current work practices. research has shown that 50 percent of all employee skills become outdated within three to five years [67]. the cycle is even shorter for information technology (It) professionals because of the high rate of technology innovation. On the one hand, this phenomenon requires organizations to focus more on building internal capabilities by providing different kinds of job preparation and training. On the other hand, it suggests that a growing number of employees are seeking learning opportunities to regularly upgrade their skills and competencies. Consequently, demand is growing for ongoing research to determine optimal training approaches with real performance impact. unlike traditional courses provided by educational institutions that are focused on fundamental and relatively stable knowledge, corporate training programs must be developed within short time frames because their content quickly becomes outdated. Furthermore, for many large organizations, especially multinationals with constantly growing and changing global workforces, the management of training and learning has become increasingly complex. Difficulties arise due to the wide range of courses, the high volume of course materials, the coordination of training among distributed work locations with the potential for duplicated training services, the need to satisfy varied individual learning requests and competency levels, and above all, the need to contain costs while deriving value from training expenditures. the development of information systems (IS) has contributed immensely to solving workplace training problems. E-learning has emerged as a cost-effective way to deliver training at convenient times to a large number of employees in different locations. E-learning, defined as a virtual learning environment in which learners’ interactions with learning materials, peers, and instructors are mediated through Its, has become the fastest-growing form of education [4]. 
the American Society for training and Development found that even with the challenges of the recent economic crisis, u.S. organizations spent $134.07 billion on employee learning and development in 2008 [74], and earlier evidence suggested that close to 40 percent of training was delivered using e-learning technologies [73]. E-learning has been extended from its original application in It skill training to common business skill training, including management, leadership, communication, customer service, quality management, and human resource skills. Despite heavy investments in e-learning technologies, however, recent research suggests that organizations have not received the level of benefit from e-learning that was E-lEArNINg OutCOMES IN OrgANIZAtIONAl SEttINgS 309 originally anticipated [62]. One credible explanation has emerged from educational psychology showing that learners are neither motivated nor well prepared for the new e-learning environment [14]. Early IS research on e-learning focused on the technology design aspects of e-learning but has subsequently broadened to include all aspects of e-learning inputs (participant characteristics, technology design, instructional strategies), processes (psychological processes, learning behaviors), and outcomes (learning outcomes) [4, 55, 76]. however, less IS research has focused on the psychological processes users engage in that improve or limit their e-learning outcomes [76]. In this research, we contribute to the understanding of e-learning processes by bridging two bodies of literature, that is, self-regulated learning (Srl) in educational psychology and e-learning in IS research. More specifically, we focus on two research questions: RQ1: How do learners’ different e‐learning processes (e.g., using different SRL strategies) influence their learning outcomes? RQ2: How is a learner’s use of SRL strategies influenced by individual and con‐ textual factors salient within a business context? to address the first question, we extend prior research on Srl and propose a new conceptualization that distinguishes two types of Srl strategies: personal Srl strategies, such as self‐evaluation and goal setting and planning, for managing personally directed forms of learning; and social Srl strategies, such as seeking peer assistance and social comparison, for managing social-oriented forms of learning. Prior research (e.g., [64, 88]) suggests that the use of Srl strategies in general can improve learning outcomes. We propose to explore, describe, and measure a new type of Srl strategy—social Srl strategy—and to determine if it has an equally important influence on learning outcomes as the more widely studied personal Srl strategy. We theorize that both types of Srl strategies are influential during the learning process and expect they have different effects on e-learning outcomes. to examine the role of Srl strategies in e-learning, we situated the new constructs in a nomological network based on prior research [76]. this led to our second research question, which also deals more specifically with e-learning in business organizations. While research conducted in educational institutions can definitely inform business training practices, differences in the business context such as job requirements and competitive pressures may affect e-learning outcomes. From prior research we selected four antecedent factors that we hypothesize to be important influences on individual use of Srl strategies (both personal and our newly proposed social strategies). 
the first two are individual factors. learners’ goal orientation refers to the individual’s framing of the activity as either a performance or a mastery activity, where the former is associated with flawless performance and the latter is associated with developing capability [28]. Virtual competence, the second factor, reflects the individual’s capability to function in a virtual environment [78]. We also include two contextual factors that are particularly applicable to organizational settings: the intellectual demands of learners’ jobs and the group norms perceived by learners about cooperation among work group members. 310 WAN, COMPEAu, AND hAggErty In summary, this study contributes to e-learning research by focusing on adult learners’ Srl processes in job training contexts. It expands the nomological network of e-learning by identifying and elaborating social Srl strategy as an additional form of Srl strategy that is distinct from personal Srl strategy. We further test how different types of Srl strategies applied by learners during the e-learning process affect three types of e-learning outcomes. Our results suggest that learners using different Srl strategies achieve different learning outcomes and learners’ attributes and contextual factors do matter. theoretical background Social Cognitive theory and Self-regulation learning is the proCess of aCquiring, enhanCing, or moDifying an individual’s knowledge, skills, and values [39]. In this study, we apply social cognitive theory to investigate e-learning processes in organizational settings. Self-regulation is a distinctive feature of social cognitive theory and plays a central role in the theory’s application [56]. It refers to a set of principles and practices by which people monitor their own behaviors and consciously adjust those behaviors in pursuit of personal goals [8]. Srl is thus a proactive way of learning in which people manage their own learning processes. research has shown that self-regulated learners (i.e., individuals who intentionally manage their learning processes) can learn better than non-selfregulated learners in traditional academic and organizational training settings because they view learning as a systematic and controllable process and are willing to take greater responsibility for their learning [30, 64, 88, 92, 93]. the definition of Srl as the degree to which individuals are metacognitively, motivationally, and behaviorally active participants in their own learning process is an integration of previous research on learning strategies, metacognitive monitoring, self-concept perceptions, volitional strategies, and self-control [86, 89]. According to this conceptualization, Srl is a combination of three subprocesses: metacognitive processes, which include planning and organizing during learning; motivational processes, which include self-evaluation and self-consequences at various stages; and behavioral processes, which include sele",
"title": ""
}
] |
[
{
"docid": "eea49870d2ddd24a42b8b245edbb1fc0",
"text": "In this paper, we propose a novel encoder-decoder neural network model referred to as DeepBinaryMask for video compressive sensing. In video compressive sensing one frame is acquired using a set of coded masks (sensing matrix) from which a number of video frames, equal to the number of coded masks, is reconstructed. The proposed framework is an endto-end model where the sensing matrix is trained along with the video reconstruction. The encoder maps a video block to compressive measurements by learning the binary elements of the sensing matrix. The decoder is trained to map the measurements from a video patch back to a video block via several hidden layers of a Multi-Layer Perceptron network. The predicted video blocks are stacked together to recover the unknown video sequence. The reconstruction performance is found to improve when using the trained sensing mask from the network as compared to other mask designs such as random, across a wide variety of compressive sensing reconstruction algorithms. Finally, our analysis and discussion offers insights into understanding the characteristics of the trained mask designs that lead to the improved reconstruction quality.",
"title": ""
},
{
"docid": "e769f52b6e10ea1cf218deb8c95f4803",
"text": "To facilitate the task of reading and searching information, it became necessary to find a way to reduce the size of documents without affecting the content. The solution is in Automatic text summarization system, it allows, from an input text to produce another smaller and more condensed without losing relevant data and meaning conveyed by the original text. The research works carried out on this area have experienced lately strong progress especially in English language. However, researches in Arabic text summarization are very few and are still in their beginning. In this paper we expose a literature review of recent techniques and works on automatic text summarization field research, and then we focus our discussion on some works concerning automatic text summarization in some languages. We will discuss also some of the main problems that affect the quality of automatic text summarization systems. © 2015 AESS Publications. All Rights Reserved.",
"title": ""
},
{
"docid": "22a5c41441519d259d3be70a9413f1f5",
"text": "In this paper, a 3-degrees-of-freedom parallel manipulator developed by Tsai and Stamper known as the Maryland manipulator is considered. In order to provide dynamic analysis, three different sequential trajectories are taken into account. Two different control approaches such as the classical proportional-integral-derivative (PID) and fractional-order PID control are used to improve the tracking performance of the examined manipulator. Parameters of the controllers are determined by using pattern search algorithm and mathematical methods for the classical PID and fractional-order PID controllers, respectively. Design procedures for both controllers are given in detail. Finally, the corresponding results are compared. Performance analysis for both of the proposed controllers is confirmed by simulation results. It is observed that not only transient but also steady-state error values have been reduced with the aid of the PIλDμ controller for tracking control purpose. According to the obtained results, the fractional-order PIλDμ controller is more powerful than the optimally tuned PID for the Maryland manipulator tracking control. The main contribution of this paper is to determine the control action with the aid of the fractional-order PI λDμ controller different from previously defined controller structures. The determination of correct and accurate control action has great importance when high speed, high acceleration, and high accuracy needed for the trajectory tracking control of parallel mechanisms present unique challenges.",
"title": ""
},
{
"docid": "1c058d6a648b2190500340f762eeff78",
"text": "An ever-increasing number of computer vision and image/video processing challenges are being approached using deep convolutional neural networks, obtaining state-of-the-art results in object recognition and detection, semantic segmentation, action recognition, optical flow, and super resolution. Hardware acceleration of these algorithms is essential to adopt these improvements in embedded and mobile computer vision systems. We present a new architecture, design, and implementation, as well as the first reported silicon measurements of such an accelerator, outperforming previous work in terms of power, area, and I/O efficiency. The manufactured device provides up to 196 GOp/s on 3.09 $\\text {mm}^{2}$ of silicon in UMC 65-nm technology and can achieve a power efficiency of 803 GOp/s/W. The massively reduced bandwidth requirements make it the first architecture scalable to TOp/s performance.",
"title": ""
},
{
"docid": "9a842e6c42c1fdd6af3885370d50005f",
"text": "Text classification is a fundamental problem in natural language processing. As a popular deep learning model, convolutional neural network(CNN) has demonstrated great success in this task. However, most existing CNN models apply convolution filters of fixed window size, thereby unable to learn variable n-gram features flexibly. In this paper, we present a densely connected CNN with multi-scale feature attention for text classification. The dense connections build short-cut paths between upstream and downstream convolutional blocks, which enable the model to compose features of larger scale from those of smaller scale, and thus produce variable n-gram features. Furthermore, a multi-scale feature attention is developed to adaptively select multi-scale features for classification. Extensive experiments demonstrate that our model obtains competitive performance against state-of-the-art baselines on six benchmark datasets. Attention visualization further reveals the model’s ability to select proper n-gram features for text classification. Our code is available at: https://github.com/wangshy31/DenselyConnected-CNN-with-Multiscale-FeatureAttention.git.",
"title": ""
},
{
"docid": "db9887ea5f96cd4439ca95ad3419407c",
"text": "Light-field cameras have now become available in both consumer and industrial applications, and recent papers have demonstrated practical algorithms for depth recovery from a passive single-shot capture. However, current light-field depth estimation methods are designed for Lambertian objects and fail or degrade for glossy or specular surfaces. The standard Lambertian photo-consistency measure considers the variance of different views, effectively enforcing point-consistency, i.e., that all views map to the same point in RGB space. This variance or point-consistency condition is a poor metric for glossy surfaces. In this paper, we present a novel theory of the relationship between light-field data and reflectance from the dichromatic model. We present a physically-based and practical method to estimate the light source color and separate specularity. We present a new photo consistency metric, line-consistency, which represents how viewpoint changes affect specular points. We then show how the new metric can be used in combination with the standard Lambertian variance or point-consistency measure to give us results that are robust against scenes with glossy surfaces. With our analysis, we can also robustly estimate multiple light source colors and remove the specular component from glossy objects. We show that our method outperforms current state-of-the-art specular removal and depth estimation algorithms in multiple real world scenarios using the consumer Lytro and Lytro Illum light field cameras.",
"title": ""
},
{
"docid": "26c259c7b6964483d13a85938a11cf53",
"text": "In Natural Language Processing (NLP), research results from software engineering and software technology have often been neglected. This paper describes some factors that add complexity to the task of engineering reusable NLP systems (beyond conventional software systems). Current work in the area of design patterns and composition languages is described and claimed relevant for natural language processing. The benefits of NLP componentware and barriers to reuse are outlined, and the dichotomies “system versus experiment” and “toolkit versus framework” are discussed. It is argued that in order to live up to its name language engineering must not neglect component quality and architectural evaluation when reporting new NLP research.",
"title": ""
},
{
"docid": "ef1f5eaa9c6f38bbe791e512a7d89dab",
"text": "Lexical-semantic verb classifications have proved useful in supporting various natural language processing (NLP) tasks. The largest and the most widely deployed classification in English is Levin’s (1993) taxonomy of verbs and their classes. While this resource is attractive in being extensive enough for some NLP use, it is not comprehensive. In this paper, we present a substantial extension to Levin’s taxonomy which incorporates 57 novel classes for verbs not covered (comprehensively) by Levin. We also introduce 106 novel diathesis alternations, created as a side product of constructing the new classes. We demonstrate the utility of our novel classes by using them to support automatic subcategorization acquisition and show that the resulting extended classification has extensive coverage over the English verb lexicon.",
"title": ""
},
{
"docid": "7cff04976bf78c5d8a1b4338b2107482",
"text": "Classifiers trained on given databases perform poorly when tested on data acquired in different settings. This is explained in domain adaptation through a shift among distributions of the source and target domains. Attempts to align them have traditionally resulted in works reducing the domain shift by introducing appropriate loss terms, measuring the discrepancies between source and target distributions, in the objective function. Here we take a different route, proposing to align the learned representations by embedding in any given network specific Domain Alignment Layers, designed to match the source and target feature distributions to a reference one. Opposite to previous works which define a priori in which layers adaptation should be performed, our method is able to automatically learn the degree of feature alignment required at different levels of the deep network. Thorough experiments on different public benchmarks, in the unsupervised setting, confirm the power of our approach.",
"title": ""
},
{
"docid": "13db8fe917d303f942fcfb544440ec24",
"text": "In many types of information systems, users face an implicit tradeoff between disclosing personal information and receiving benefits, such as discounts by an electronic commerce service that requires users to divulge some personal information. While these benefits are relatively measurable, the value of privacy involved in disclosing the information is much less tangible, making it hard to design and evaluate information systems that manage personal information. Meanwhile, existing methods to assess and measure the value of privacy, such as self-reported questionnaires, are notoriously unrelated of real eworld behavior. To overcome this obstacle, we propose a methodology called VOPE (Value of Privacy Estimator), which relies on behavioral economics' Prospect Theory (Kahneman & Tversky, 1979) and valuates people's privacy preferences in information disclosure scenarios. VOPE is based on an iterative and responsive methodology in which users take or leave a transaction that includes a component of information disclosure. To evaluate the method, we conduct an empirical experiment (n 1⁄4 195), estimating people's privacy valuations in electronic commerce transactions. We report on the convergence of estimations and validate our results by comparing the values to theoretical projections of existing results (Tsai, Egelman, Cranor, & Acquisti, 2011), and to another independent experiment that required participants to rank the sensitivity of information disclosure transactions. Finally, we discuss how information systems designers and regulators can use VOPE to create and to oversee systems that balance privacy and utility. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "92008a84a80924ec8c0ad1538da2e893",
"text": "Large-scale deep learning requires huge computational resources to train a multi-layer neural network. Recent systems propose using 100s to 1000s of machines to train networks with tens of layers and billions of connections. While the computation involved can be done more efficiently on GPUs than on more traditional CPU cores, training such networks on a single GPU is too slow and training on distributed GPUs can be inefficient, due to data movement overheads, GPU stalls, and limited GPU memory. This paper describes a new parameter server, called GeePS, that supports scalable deep learning across GPUs distributed among multiple machines, overcoming these obstacles. We show that GeePS enables a state-of-the-art single-node GPU implementation to scale well, such as to 13 times the number of training images processed per second on 16 machines (relative to the original optimized single-node code). Moreover, GeePS achieves a higher training throughput with just four GPU machines than that a state-of-the-art CPU-only system achieves with 108 machines.",
"title": ""
},
{
"docid": "4f81901c2269cd4561dd04f59a04a473",
"text": "The advent of powerful acid-suppressive drugs, such as proton pump inhibitors (PPIs), has revolutionized the management of acid-related diseases and has minimized the role of surgery. The major and universally recognized indications for their use are represented by treatment of gastro-esophageal reflux disease, eradication of Helicobacter pylori infection in combination with antibiotics, therapy of H. pylori-negative peptic ulcers, healing and prophylaxis of non-steroidal anti-inflammatory drug-associated gastric ulcers and control of several acid hypersecretory conditions. However, in the last decade, we have witnessed an almost continuous growth of their use and this phenomenon cannot be only explained by the simple substitution of the previous H2-receptor antagonists, but also by an inappropriate prescription of these drugs. This endless increase of PPI utilization has created an important problem for many regulatory authorities in terms of increased costs and greater potential risk of adverse events. The main reasons for this overuse of PPIs are the prevention of gastro-duodenal ulcers in low-risk patients or the stress ulcer prophylaxis in non-intensive care units, steroid therapy alone, anticoagulant treatment without risk factors for gastro-duodenal injury, the overtreatment of functional dyspepsia and a wrong diagnosis of acid-related disorder. The cost for this inappropriate use of PPIs has become alarming and requires to be controlled. We believe that gastroenterologists together with the scientific societies and the regulatory authorities should plan educational initiatives to guide both primary care physicians and specialists to the correct use of PPIs in their daily clinical practice, according to the worldwide published guidelines.",
"title": ""
},
{
"docid": "f5f56d680fbecb94a08d9b8e5925228f",
"text": "Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods. Many use nonlinear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. This paper proposes a new generative model, a dynamic version of the log-linear topic model of Mnih and Hinton (2007). The methodological novelty is to use the prior to compute closed form expressions for word statistics. This provides a theoretical justification for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. It also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. (2013a) and many subsequent papers. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed in space.",
"title": ""
},
{
"docid": "497d6e0bf6f582924745c7aa192579e7",
"text": "The versatility of humanoid robots in locomotion, full-body motion, interaction with unmodified human environments, and intuitive human-robot interaction led to increased research interest. Multiple smaller platforms are available for research, but these require a miniaturized environment to interact with–and often the small scale of the robot diminishes the influence of factors which would have affected larger robots. Unfortunately, many research platforms in the larger size range are less affordable, more difficult to operate, maintain and modify, and very often closed-source. In this work, we introduce NimbRo-OP2, an affordable, fully open-source platform in terms of both hardware and software. Being almost 135 cm tall and only 18 kg in weight, the robot is not only capable of interacting in an environment meant for humans, but also easy and safe to operate and does not require a gantry when doing so. The exoskeleton of the robot is 3D printed, which produces a lightweight and visually appealing design. We present all mechanical and electrical aspects of the robot, as well as some of the software features of our well-established open-source ROS software. The NimbRo-OP2 performed at RoboCup 2017 in Nagoya, Japan, where it won the Humanoid League AdultSize Soccer competition and Technical Challenge.",
"title": ""
},
{
"docid": "54af3c39dba9aafd5b638d284fd04345",
"text": "In this paper, Principal Component Analysis (PCA), Most Discriminant Features (MDF), and Regularized-Direct Linear Discriminant Analysis (RD-LDA) - based feature extraction approaches are tested and compared in an experimental personal recognition system. The system is multimodal and bases on features extracted from nine regions of an image of the palmar surface of the hand. For testing purposes 10 gray-scale images of right hand of 184 people were acquired. The experiments have shown that the best results are obtained with the RD-LDA - based features extraction approach (100% correctness for 920 identification tests and EER = 0.01% for 64170 verification tests).",
"title": ""
},
{
"docid": "318a4af201ed3563443dcbe89c90b6b4",
"text": "Clouds are distributed Internet-based platforms that provide highly resilient and scalable environments to be used by enterprises in a multitude of ways. Cloud computing offers enterprises technology innovation that business leaders and IT infrastructure managers can choose to apply based on how and to what extent it helps them fulfil their business requirements. It is crucial that all technical consultants have a rigorous understanding of the ramifications of cloud computing as its influence is likely to spread the complete IT landscape. Security is one of the major concerns that is of practical interest to decision makers when they are making critical strategic operational decisions. Distributed Denial of Service (DDoS) attacks are becoming more frequent and effective over the past few years, since the widely publicised DDoS attacks on the financial services industry that came to light in September and October 2012 and resurfaced in the past two years. In this paper, we introduce advanced cloud security technologies and practices as a series of concepts and technology architectures, from an industry-centric point of view. This is followed by classification of intrusion detection and prevention mechanisms that can be part of an overall strategy to help understand identify and mitigate potential DDoS attacks on business networks. The paper establishes solid coverage of security issues related to DDoS and virtualisation with a focus on structure, clarity, and well-defined blocks for mainstream cloud computing security solutions and platforms. In doing so, we aim to provide industry technologists, who may not be necessarily cloud or security experts, with an effective tool to help them understand the security implications associated with cloud adoption in their transition towards more knowledge-based systems. Keywords—Cloud Computing Security; Distributed Denial of Service; Intrusion Detection; Intrusion Prevention; Virtualisation",
"title": ""
},
{
"docid": "f1cfb30b328725121ed232381d43ac3a",
"text": "High-performance object detection relies on expensive convolutional networks to compute features, often leading to significant challenges in applications, e.g. those that require detecting objects from video streams in real time. The key to this problem is to trade accuracy for efficiency in an effective way, i.e. reducing the computing cost while maintaining competitive performance. To seek a good balance, previous efforts usually focus on optimizing the model architectures. This paper explores an alternative approach, that is, to reallocate the computation over a scale-time space. The basic idea is to perform expensive detection sparsely and propagate the results across both scales and time with substantially cheaper networks, by exploiting the strong correlations among them. Specifically, we present a unified framework that integrates detection, temporal propagation, and across-scale refinement on a Scale-Time Lattice. On this framework, one can explore various strategies to balance performance and cost. Taking advantage of this flexibility, we further develop an adaptive scheme with the detector invoked on demand and thus obtain improved tradeoff. On ImageNet VID dataset, the proposed method can achieve a competitive mAP 79.6% at 20 fps, or 79.0% at 62 fps as a performance/speed tradeoff.1",
"title": ""
},
{
"docid": "47faebfa7d65ebf277e57436cf7c2ca4",
"text": "Steganography is a method which can put data into a media without a tangible impact on the cover media. In addition, the hidden data can be extracted with minimal differences. In this paper, twodimensional discrete wavelet transform is used for steganography in 24-bit color images. This steganography is of blind type that has no need for original images to extract the secret image. In this algorithm, by the help of a structural similarity and a two-dimensional correlation coefficient, it is tried to select part of sub-band cover image instead of embedding location. These sub-bands are obtained by 3levels of applying the DWT. Also to increase the steganography resistance against cropping or insert visible watermark, two channels of color image is used simultaneously. In order to raise the security, an encryption algorithm based on Arnold transform was also added to the steganography operation. Because diversity of chaos scenarios is limited in Arnold transform, it could be improved by its mirror in order to increase the diversity of key. Additionally, an ability is added to encryption algorithm that can still maintain its efficiency against image crop. Transparency of steganography image is measured by the peak signalto-noise ratio that indicates the adequate transparency of steganography process. Extracted image similarity is also measured by two-dimensional correlation coefficient with more than 99% similarity. Moreover, steganography resistance against increasing and decreasing brightness and contrast, lossy compression, cropping image, changing scale and adding noise is acceptable",
"title": ""
},
{
"docid": "0edc89fbf770bbab2fb4d882a589c161",
"text": "A calculus is developed in this paper (Part I) and the sequel (Part 11) for obtaining bounds on delay and buffering requirements in a communication network operating in a packet switched mode under a fixed routing strategy. The theory we develop is different from traditional approaches to analyzing delay because the model we use to describe the entry of data into the network is nonprobabilistic: We suppose that the data stream entered intq the network by any given user satisfies “burstiness constraints.” A data stream is said to satisfy a burstiness constraint if the quantity of data from the stream contained in any interval of time is less than a value that depends on the length of the interval. Several network elements are defined that can be used as building blocks to model a wide variety of communication networks. Each type of network element is analyzed by assuming that the traffic entering it satisfies burstiness constraints. Under this assumption bounds are obtained on delay and buffering requirements for the network element, burstiness constraints satisfied by the traffic that exits the element are derived. Index Terms -Queueing networks, burstiness, flow control, packet switching, high speed networks.",
"title": ""
},
{
"docid": "8d7a7bc2b186d819b36a0a8a8ba70e39",
"text": "Recent stereo algorithms have achieved impressive results by modelling the disparity image as a Markov Random Field (MRF). An important component of an MRF-based approach is the inference algorithm used to find the most likely setting of each node in the MRF. Algorithms have been proposed which use Graph Cuts or Belief Propagation for inference. These stereo algorithms differ in both the inference algorithm used and the formulation of the MRF. It is unknown whether to attribute the responsibility for differences in performance to the MRF or the inference algorithm. We address this through controlled experiments by comparing the Belief Propagation algorithm and the Graph Cuts algorithm on the same MRF’s, which have been created for calculating stereo disparities. We find that the labellings produced by the two algorithms are comparable. The solutions produced by Graph Cuts have a lower energy than those produced with Belief Propagation, but this does not necessarily lead to increased performance relative to the ground-truth.",
"title": ""
}
] |
scidocsrr
|
a67dd6f5d3c53ff3f4d03be551c4df47
|
Self-regulated learning strategies predict learner behavior and goal attainment in Massive Open Online Courses
|
[
{
"docid": "8851824732fff7b160c7479b41cc423f",
"text": "The current generation of Massive Open Online Courses (MOOCs) attract a diverse student audience from all age groups and over 196 countries around the world. Researchers, educators, and the general public have recently become interested in how the learning experience in MOOCs differs from that in traditional courses. A major component of the learning experience is how students navigate through course content.\n This paper presents an empirical study of how students navigate through MOOCs, and is, to our knowledge, the first to investigate how navigation strategies differ by demographics such as age and country of origin. We performed data analysis on the activities of 140,546 students in four edX MOOCs and found that certificate earners skip on average 22% of the course content, that they frequently employ non-linear navigation by jumping backward to earlier lecture sequences, and that older students and those from countries with lower student-teacher ratios are more comprehensive and non-linear when navigating through the course.\n From these findings, we suggest design recommendations such as for MOOC platforms to develop more detailed forms of certification that incentivize students to deeply engage with the content rather than just doing the minimum necessary to earn a passing grade. Finally, to enable other researchers to reproduce and build upon our findings, we have made our data set and analysis scripts publicly available.",
"title": ""
},
{
"docid": "090cc7f7e5dbf925e0ded1ca5514c76e",
"text": "A general framework is presented to help understand the relationship between motivation and self-regulated learning. According to the framework, self-regulated learning can be facilitated by the adoption of mastery and relative ability goals and hindered by the adoption of extrinsic goals. In addition, positive self-e$cacy and task value beliefs can promote selfregulated behavior. Self-regulated learning is de\"ned as the strategies that students use to regulate their cognition (i.e., use of various cognitive and metacognitive strategies) as well as the use of resource management strategies that students use to control their learning. ( 1999 Published by Elsevier Science Ltd. All rights reserved. Recent models of self-regulated learning stress the importance of integrating both motivational and cognitive components of learning (Garcia & Pintrich, 1994; Pintrich, 1994; Pintrich & Schrauben, 1992). The purpose of this chapter is to describe how di!erent motivational beliefs may help to promote and sustain di!erent aspects of self-regulated learning. In order to accomplish this purpose, a model of self-regulated learning is brie#y sketched and three general motivational beliefs related to a model of self-regulated learning in our research program at the University of Michigan are discussed. Finally, suggestions for future research are o!ered. 1. A model of self-regulated learning Self-regulated learning o!ers an important perspective on academic learning in current research in educational psychology (Schunk & Zimmerman, 1994). Although there are a number of di!erent models derived from a variety of di!erent theoretical perspectives (see Schunk & Zimmerman, 1994; Zimmerman & Schunk, 1989), most models assume that an important aspect of self-regulated learning is the 0883-0355/99/$ see front matter ( 1999 Published by Elsevier Science Ltd. All rights reserved. PII: S 0 8 8 3 0 3 5 5 ( 9 9 ) 0 0 0 1 5 4 students' use of various cognitive and metacognitive strategies to control and regulate their learning. The model of self-regulated learning described here includes three general categories of strategies: (1) cognitive learning strategies, (2) self-regulatory strategies to control cognition, and (3) resource management strategies (see Garcia & Pintrich, 1994; Pintrich, 1988a,b; Pintrich, 1989; Pintrich & De Groot, 1990; Pintrich & Garcia, 1991; Pintrich, Smith, Garcia, & McKeachie, 1993). 1.1. Cognitive learning strategies In terms of cognitive learning strategies, following the work of Weinstein and Mayer (1986), rehearsal, elaboration, and organizational strategies were identi\"ed as important cognitive strategies related to academic performance in the classroom (McKeachie, Pintrich, Lin & Smith, 1986; Pintrich, 1989; Pintrich & De Groot, 1990). These strategies can be applied to simple memory tasks (e.g., recall of information, words, or lists) or to more complex tasks that require comprehension of the information (e.g., understanding a piece of text or a lecture) (Weinstein & Mayer, 1986). Rehearsal strategies involve the recitation of items to be learned or the saying of words aloud as one reads a piece of text. Highlighting or underlining text in a rather passive and unre#ective manner also can be more like a rehearsal strategy than an elaborative strategy. 
These rehearsal strategies are assumed to help the student attend to and select important information from lists or texts and keep this information active in working memory, albeit they may not re#ect a very deep level of processing. Elaboration strategies include paraphrasing or summarizing the material to be learned, creating analogies, generative note-taking (where the student actually reorganizes and connects ideas in their notes in contrast to passive, linear note-taking), explaining the ideas in the material to be learned to someone else, and question asking and answering (Weinstein & Mayer, 1986). The other general type of deeper processing strategy, organizational, includes behaviors such as selecting the main idea from text, outlining the text or material to be learned, and using a variety of speci\"c techniques for selecting and organizing the ideas in the material (e.g., sketching a network or map of the important ideas, identifying the prose or expository structures of texts). (See Weinstein & Mayer, 1986.) All of these organizational strategies have been shown to result in a deeper understanding of the material to be learned in contrast to rehearsal strategies (Weinstein & Mayer, 1986). 1.2. Metacognitive and self-regulatory strategies Besides cognitive strategies, students' metacognitive knowledge and use of metacognitive strategies can have an important in#uence upon their achievement. There are two general aspects of metacognition, knowledge about cognition and selfregulation of cognition (Brown, Bransford, Ferrara & Campione, 1983; Flavell, 1979). Some of the theoretical and empirical confusion over the status of metacognition as a psychological construct has been fostered by the confounding of issues of metacognitive knowledge and awareness with metacognitive control and self-regulation 460 P.R. Pintrich / Int. J. Educ. Res. 31 (1999) 459}470",
"title": ""
}
] |
[
{
"docid": "289502f02cf7ef236bb7752b4ca80601",
"text": "We examined variation in leaf size and specific leaf area (SLA) in relation to the distribution of 22 chaparral shrub species on small-scale gradients of aspect and elevation. Potential incident solar radiation (insolation) was estimated from a geographic information system to quantify microclimate affinities of these species across north- and south-facing slopes. At the community level, leaf size and SLA both declined with increasing insolation, based on average trait values for the species found in plots along the gradient. However, leaf size and SLA were not significantly correlated across species, suggesting that these two traits are decoupled and associated with different aspects of performance along this environmental gradient. For individual species, SLA was negatively correlated with species distributions along the insolation gradient, and was significantly lower in evergreen versus deciduous species. Leaf size exhibited a negative but non-significant trend in relation to insolation distribution of individual species. At the community level, variance in leaf size increased with increasing insolation. For individual species, there was a greater range of leaf size on south-facing slopes, while there was an absence of small-leaved species on north-facing slopes. These results demonstrate that analyses of plant functional traits along environmental gradients based on community level averages may obscure important aspects of trait variation and distribution among the constituent species.",
"title": ""
},
{
"docid": "04d190daef0abb78f3c4d85e23297fbc",
"text": "Blind image deconvolution is an ill-posed problem that requires regularization to solve. However, many common forms of image prior used in this setting have a major drawback in that the minimum of the resulting cost function does not correspond to the true sharp solution. Accordingly, a range of additional methods are needed to yield good results (Bayesian methods, adaptive cost functions, alpha-matte extraction and edge localization). In this paper we introduce a new type of image regularization which gives lowest cost for the true sharp image. This allows a very simple cost formulation to be used for the blind deconvolution model, obviating the need for additional methods. Due to its simplicity the algorithm is fast and very robust. We demonstrate our method on real images with both spatially invariant and spatially varying blur.",
"title": ""
},
{
"docid": "dffb192cda5fd68fbea2eb15a6b00434",
"text": "For AI systems to reason about real world situations, they need to recognize which processes are at play and which entities play key roles in them. Our goal is to extract this kind of rolebased knowledge about processes, from multiple sentence-level descriptions. This knowledge is hard to acquire; while semantic role labeling (SRL) systems can extract sentence level role information about individual mentions of a process, their results are often noisy and they do not attempt create a globally consistent characterization of a process. To overcome this, we extend standard within sentence joint inference to inference across multiple sentences. This cross sentence inference promotes role assignments that are compatible across different descriptions of the same process. When formulated as an Integer Linear Program, this leads to improvements over within-sentence inference by nearly 3% in F1. The resulting role-based knowledge is of high quality (with a F1 of nearly 82).",
"title": ""
},
{
"docid": "f78e430994e9eeccd034df76d2b5316a",
"text": "An externally leveraged circular resonant piezoelectric actuator with haptic natural frequency and fast response time was developed within the volume of 10 mm diameter and 3.4 mm thickness for application in mobile phones. An efficient displacement-amplifying mechanism was developed using a piezoelectric bimorph, a lever system, and a mass-spring system. The proposed displacement-amplifying mechanism utilizes both internally and externally leveraged structures. The former generates bending by means of bending deformation of the piezoelectric bimorph, and the latter transforms the bending to radial displacement of the lever system, which is transformed to a large axial displacement of the spring. The piezoelectric bimorph, lever system, and spring were designed to maximize static displacement and the mass-spring system was designed to have a haptic natural frequency. The static displacement, natural frequency, maximum output displacement, and response time of the resonant piezoelectric actuator were calculated by means of finite-element analyses. The proposed resonant piezoelectric actuator was prototyped and the simulated results were verified experimentally. The prototyped piezoelectric actuator generated the maximum output displacement of 290 μm at the haptic natural frequency of 242 Hz. Owing to the proposed efficient displacement-amplifying mechanism, the proposed resonant piezoelectric actuator had the fast response time of 14 ms, approximately one-fifth of a conventional resonant piezoelectric actuator of the same size.",
"title": ""
},
{
"docid": "caf6537362b79cad5f631c0227e7d141",
"text": "In this paper, we present POSTECH Situation-Based Dialogue Manager (POSSDM) for a spoken dialogue system using both example- and rule-based dialogue management techniques for effective generation of appropriate system responses. A spoken dialogue system should generate cooperative responses to smoothly control dialogue flow with the users. We introduce a new dialogue management technique incorporating dialogue examples and situation-based rules for the electronic program guide (EPG) domain. For the system response generation, we automatically construct and index a dialogue example database from the dialogue corpus, and the proper system response is determined by retrieving the best dialogue example for the current dialogue situation, which includes a current user utterance, dialogue act, semantic frame and discourse history. When the dialogue corpus is not enough to cover the domain, we also apply manually constructed situation-based rules mainly for meta-level dialogue management. Experiments show that our example-based dialogue modeling is very useful and effective in domain-oriented dialogue processing",
"title": ""
},
{
"docid": "23ffed5fcb708ad4f95a70f5b0fe4793",
"text": "INTRODUCTION\nHead tremor is a common feature in cervical dystonia (CD) and often less responsive to botulinum neurotoxin (BoNT) treatment than dystonic posturing. Ultrasound allows accurate targeting of deeper neck muscles.\n\n\nMETHODS\nIn 35 CD patients with dystonic head tremor the depth and thickness of the splenius capitis (SPL), semispinalis capitis and obliquus capitis inferior muscles (OCI) were assessed using ultrasound. Ultrasound guided EMG recordings were performed from the SPL and OCI.\n\n\nRESULTS\nBurst-like tremor activity was present in both OCI in 25 and in one in 10 patients. In 18 patients, tremor activity was present in one SPL and in 2 in both SPL. Depth and thickness of OCI, SPL and semispinalis capitis muscles were very variable.\n\n\nCONCLUSION\nMuscular activity underlying tremulous CD is most commonly present in OCI. Due to the variability of muscle thickness, we suggest ultrasound guided BoNT injections into OCI.",
"title": ""
},
{
"docid": "dfb3af39b0cf47540c1eda10eb4b35d9",
"text": "Activation likelihood estimation (ALE) meta-analyses were used to examine the neural correlates of prediction error in reinforcement learning. The findings are interpreted in the light of current computational models of learning and action selection. In this context, particular consideration is given to the comparison of activation patterns from studies using instrumental and Pavlovian conditioning, and where reinforcement involved rewarding or punishing feedback. The striatum was the key brain area encoding for prediction error, with activity encompassing dorsal and ventral regions for instrumental and Pavlovian reinforcement alike, a finding which challenges the functional separation of the striatum into a dorsal 'actor' and a ventral 'critic'. Prediction error activity was further observed in diverse areas of predominantly anterior cerebral cortex including medial prefrontal cortex and anterior cingulate cortex. Distinct patterns of prediction error activity were found for studies using rewarding and aversive reinforcers; reward prediction errors were observed primarily in the striatum while aversive prediction errors were found more widely including insula and habenula.",
"title": ""
},
{
"docid": "92c6e4ec2497c467eaa31546e2e2be0e",
"text": "The subjective sense of future time plays an essential role in human motivation. Gradually, time left becomes a better predictor than chronological age for a range of cognitive, emotional, and motivational variables. Socioemotional selectivity theory maintains that constraints on time horizons shift motivational priorities in such a way that the regulation of emotional states becomes more important than other types of goals. This motivational shift occurs with age but also appears in other contexts (for example, geographical relocations, illnesses, and war) that limit subjective future time.",
"title": ""
},
{
"docid": "f5960b6997d1b481353b50a15e80c844",
"text": "In this paper we introduce a dynamic GUI test generator that incorporates ant colony optimization. We created two ant systems for generating tests. Our first ant system implements the normal ant colony optimization algorithm in order to traverse the GUI and find good event sequences. Our second ant system, called AntQ, implements the antq algorithm that incorporates Q-Learning, which is a behavioral reinforcement learning technique. Both systems use the same fitness function in order to determine good paths through the GUI. Our fitness function looks at the amount of change in the GUI state that each event causes. Events that have a larger impact on the GUI state will be favored in future tests. We compared our two ant systems to random selection. We ran experiments on six subject applications and report on the code coverage and fault finding abilities of all three algorithms.",
"title": ""
},
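The passage above describes pheromone-guided test generation with an Ant-Q variant; the following is a minimal illustrative sketch of that idea, not the paper's tool. The event set, the fitness stub (standing in for "amount of GUI state change"), and all numeric parameters are hypothetical placeholders.

```python
# Pheromone/Q-value guided construction of GUI event sequences (illustrative sketch only).
import random
from collections import defaultdict

EVENTS = ["click_ok", "click_cancel", "type_text", "open_menu"]   # hypothetical event set
ALPHA, GAMMA, EVAPORATION, SEQ_LEN, N_ANTS, N_ITER = 0.1, 0.3, 0.1, 5, 10, 20

q_values = defaultdict(lambda: 1.0)   # Q(prev_event, next_event), shared by all ants

def fitness(sequence):
    """Hypothetical stand-in for the amount of GUI state change caused by the sequence."""
    return random.random()

def choose_next(prev):
    weights = [q_values[(prev, e)] for e in EVENTS]
    total = sum(weights)
    return random.choices(EVENTS, weights=[w / total for w in weights])[0]

best_seq, best_fit = None, -1.0
for _ in range(N_ITER):
    for _ant in range(N_ANTS):
        seq, prev = [], "START"
        for _ in range(SEQ_LEN):
            nxt = choose_next(prev)
            # Ant-Q style local update: blend in the discounted best value of the next state.
            best_next = max(q_values[(nxt, e)] for e in EVENTS)
            q_values[(prev, nxt)] = (1 - ALPHA) * q_values[(prev, nxt)] + ALPHA * GAMMA * best_next
            seq.append(nxt)
            prev = nxt
        f = fitness(seq)
        if f > best_fit:
            best_seq, best_fit = seq, f
    # Global reinforcement of the best sequence found so far, with evaporation.
    prev = "START"
    for e in best_seq:
        q_values[(prev, e)] = (1 - EVAPORATION) * q_values[(prev, e)] + EVAPORATION * best_fit
        prev = e

print(best_seq, best_fit)
```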
{
"docid": "3f53b5e2143364506c4f2de4c8d98979",
"text": "In this paper, a different method for designing an ultra-wideband (UWB) microstrip monopole antenna with dual band-notched characteristic has been presented. The main novelty of the proposed structure is the using of protruded strips as resonators to design an UWB antenna with dual band-stop property. In the proposed design, by cutting the rectangular slot with a pair of protruded T-shaped strips in the ground plane, additional resonance is excited and much wider impedance bandwidth can be produced. To generate a single band-notched function, we convert the square radiating patch to the square-ring structure with a pair of protruded step-shaped strips. By cutting a rectangular slot with the protruded Γ-shaped strip at the feed line, a dual band-notched function is achieved. The measured results reveal that the presented dual band-notched antenna offers a very wide bandwidth from 2.8 to 11.6 GHz, with two notched bands, around of 3.3-3.7 GHz and 5-6 GHz covering all WiMAX and WLAN bands.",
"title": ""
},
{
"docid": "822a1487cbdeba5b8b3b35dd3593c4eb",
"text": "Microsoft's series of Windows operating systems represents some of the most commonly encountered technologies in the field of digital forensics. It is then fair to say that Microsoft's design decisions greatly affect forensic efforts. Because of this, it is exceptionally important for the forensics community to keep abreast of new developments in the Windows product line. With each new release, the Windows operating system may present investigators with significant new artifacts to explore. Described by some as the heart of the Windows operating system, the Windows registry has been proven to contain many of these forensically interesting artifacts. Given the weight of Microsoft's influence on digital forensics and the role of the registry within Windows operating systems, this thesis delves into the Windows 8 registry in the hopes of developing new Windows forensics utilities.",
"title": ""
},
{
"docid": "3378680ac3eddfde464e1be5ee6986e6",
"text": "Boundaries between formal and informal learning settings are shaped by influences beyond learners’ control. This can lead to the proscription of some familiar technologies that learners may like to use from some learning settings. This contested demarcation is not well documented. In this paper, we introduce the term ‘digital dissonance’ to describe this tension with respect to learners’ appropriation of Web 2.0 technologies in formal contexts. We present the results of a study that explores learners’ inand out-of-school use of Web 2.0 and related technologies. The study comprises two data sources: a questionnaire and a mapping activity. The contexts within which learners felt their technologies were appropriate or able to be used are also explored. Results of the study show that a sense of ‘digital dissonance’ occurs around learners’ experience of Web 2.0 activity in and out of school. Many learners routinely cross institutionally demarcated boundaries, but the implications of this activity are not well understood by institutions or indeed by learners themselves. More needs to be understood about the transferability of Web 2.0 skill sets and ways in which these can be used to support formal learning.",
"title": ""
},
{
"docid": "fdd4c5fc773aa001da927ab3776559ae",
"text": "We treated a 65-year-old Japanese man with a giant penile lymphedema due to chronic penile strangulation with a rubber band. He was referred to our hospital with progressive penile swelling that had developed over a period of 2 years from chronic use of a rubber band placed around the penile base for prevention of urinary incontinence. Under a diagnosis of giant penile lymphedema, we performed resection of abnormal penile skin weighing 4.8 kg, followed by a penile plasty procedure. To the best of our knowledge, this is only the seventh report of such a case worldwide, with the present giant penile lymphedema the most reported.",
"title": ""
},
{
"docid": "237ae26179780269fd814f0e2406f2c0",
"text": "There is a growing trend of applying machine learning techniques in time series prediction tasks. In the meanwhile, the classic autoregression models has been widely used in time series prediction for decades. In this paper, experiments are conducted to compare the performances of multiple popular machine learning algorithms including two major types of deep learning approaches, with the classic autoregression with exogenous inputs (ARX) model on this particular Blood Glucose Level Prediction (BGLP) Challenge. We tried two types of methods to perform multi-step prediction: recursive method and direct method. The recursive method needs future input feature information. The results show there is no significant difference between the machine learning models and the classic ARX model. In fact, the ARX model achieved the lowest average Root Mean Square Error (RMSE) across subjects in the test data when recursive method was used for multi-step prediction.",
"title": ""
},
{
"docid": "6aab23ee181e8db06cc4ca3f7f7367be",
"text": "In their original article, Ericsson, Krampe, and Tesch-Römer (1993) reviewed the evidence concerning the conditions of optimal learning and found that individualized practice with training tasks (selected by a supervising teacher) with a clear performance goal and immediate informative feedback was associated with marked improvement. We found that this type of deliberate practice was prevalent when advanced musicians practice alone and found its accumulated duration related to attained music performance. In contrast, Macnamara, Moreau, and Hambrick's (2016, this issue) main meta-analysis examines the use of the term deliberate practice to refer to a much broader and less defined concept including virtually any type of sport-specific activity, such as group activities, watching games on television, and even play and competitions. Summing up every hour of any type of practice during an individual's career implies that the impact of all types of practice activity on performance is equal-an assumption that I show is inconsistent with the evidence. Future research should collect objective measures of representative performance with a longitudinal description of all the changes in different aspects of the performance so that any proximal conditions of deliberate practice related to effective improvements can be identified and analyzed experimentally.",
"title": ""
},
{
"docid": "a677c1d46b9d2ad2588841eea8e3856c",
"text": "In evolutionary multiobjective optimization, maintaining a good balance between convergence and diversity is particularly crucial to the performance of the evolutionary algorithms (EAs). In addition, it becomes increasingly important to incorporate user preferences because it will be less likely to achieve a representative subset of the Pareto-optimal solutions using a limited population size as the number of objectives increases. This paper proposes a reference vector-guided EA for many-objective optimization. The reference vectors can be used not only to decompose the original multiobjective optimization problem into a number of single-objective subproblems, but also to elucidate user preferences to target a preferred subset of the whole Pareto front (PF). In the proposed algorithm, a scalarization approach, termed angle-penalized distance, is adopted to balance convergence and diversity of the solutions in the high-dimensional objective space. An adaptation strategy is proposed to dynamically adjust the distribution of the reference vectors according to the scales of the objective functions. Our experimental results on a variety of benchmark test problems show that the proposed algorithm is highly competitive in comparison with five state-of-the-art EAs for many-objective optimization. In addition, we show that reference vectors are effective and cost-efficient for preference articulation, which is particularly desirable for many-objective optimization. Furthermore, a reference vector regeneration strategy is proposed for handling irregular PFs. Finally, the proposed algorithm is extended for solving constrained many-objective optimization problems.",
"title": ""
},
{
"docid": "38ec5d33e0a24c9dc16854086bb069d7",
"text": "The management of the medium and small scale industries feel burden to treat waste if the cost involvement is high. Hence there is a board scope for cheaper and compact unit processes or ideal solutions for such issues. Rotating biological contactor is most popular due to its simplicity, low energy less land requirement. The rotating biological contactors are fixed film moving bed aerobic treatment processes, which able to sustain shock loadings. Unlike activated sludge processes (ASP), trickling filter etc. Rotating biological contactor does not require recirculation of secondary sludge and also hydraulic retention time is low. This review paper focuses on works done by various investigators at different operating parameters using various kinds of industrial wastewater.",
"title": ""
},
{
"docid": "c0f5abdba3aa843f4419f59c92ed14ea",
"text": "ROC and DET curves are often used in the field of person authentication to assess the quality of a model or even to compare several models. We argue in this paper that this measure can be misleading as it compares performance measures that cannot be reached simultaneously by all systems. We propose instead new curves, called Expected Performance Curves (EPC). These curves enable the comparison between several systems according to a criterion, decided by the application, which is used to set thresholds according to a separate validation set. A free sofware is available to compute these curves. A real case study is used throughout the paper to illustrate it. Finally, note that while this study was done on an authentication problem, it also applies to most 2-class classification tasks.",
"title": ""
},
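A minimal sketch of the expected-performance-curve idea described in the passage above: for each trade-off weight alpha, the decision threshold is chosen on a separate validation set and the resulting error is reported on the test set. This is not the authors' released software, and the score distributions below are synthetic placeholders.

```python
import numpy as np

def far_frr(scores, labels, thr):
    """False acceptance rate (impostors above thr) and false rejection rate (genuine below thr)."""
    far = np.mean(scores[labels == 0] >= thr)
    frr = np.mean(scores[labels == 1] < thr)
    return far, frr

def epc(dev_scores, dev_labels, test_scores, test_labels, n_points=11):
    """For each alpha, pick the threshold minimising the weighted error on the validation
    (dev) set, then report the half total error rate (HTER) on the test set."""
    curve = []
    candidate_thrs = np.unique(dev_scores)
    for alpha in np.linspace(0.0, 1.0, n_points):
        costs = []
        for t in candidate_thrs:
            far, frr = far_frr(dev_scores, dev_labels, t)
            costs.append(alpha * far + (1 - alpha) * frr)
        thr = candidate_thrs[int(np.argmin(costs))]
        far, frr = far_frr(test_scores, test_labels, thr)
        curve.append((alpha, (far + frr) / 2.0))   # HTER on the test set
    return curve

# Synthetic scores: label 1 = genuine accesses, label 0 = impostors (placeholder data).
rng = np.random.default_rng(0)
dev_scores  = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])
dev_labels  = np.concatenate([np.zeros(500), np.ones(500)])
test_scores = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])
test_labels = np.concatenate([np.zeros(500), np.ones(500)])
for alpha, hter in epc(dev_scores, dev_labels, test_scores, test_labels):
    print(f"alpha={alpha:.1f}  test HTER={hter:.3f}")
```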
{
"docid": "b229aa8b39b3df3fec941ce4791a2fe9",
"text": "Translating information between text and image is a fundamental problem in artificial intelligence that connects natural language processing and computer vision. In the past few years, performance in image caption generation has seen significant improvement through the adoption of recurrent neural networks (RNN). Meanwhile, text-to-image generation begun to generate plausible images using datasets of specific categories like birds and flowers. We've even seen image generation from multi-category datasets such as the Microsoft Common Objects in Context (MSCOCO) through the use of generative adversarial networks (GANs). Synthesizing objects with a complex shape, however, is still challenging. For example, animals and humans have many degrees of freedom, which means that they can take on many complex shapes. We propose a new training method called Image-Text-Image (I2T2I) which integrates text-to-image and image-to-text (image captioning) synthesis to improve the performance of text-to-image synthesis. We demonstrate that I2T2I can generate better multi-categories images using MSCOCO than the state-of-the-art. We also demonstrate that I2T2I can achieve transfer learning by using a pre-trained image captioning module to generate human images on the MPII Human Pose dataset (MHP) without using sentence annotation.",
"title": ""
}
] |
scidocsrr
|
aecacf022b621cd60dc51cd6b351686b
|
A Survey of Uncertain Data Algorithms and Applications
|
[
{
"docid": "5f1f7847600207d1216384f8507be63b",
"text": "This paper introduces U-relations, a succinct and purely relational representation system for uncertain databases. U-relations support attribute-level uncertainty using vertical partitioning. If we consider positive relational algebra extended by an operation for computing possible answers, a query on the logical level can be translated into, and evaluated as, a single relational algebra query on the U-relational representation. The translation scheme essentially preserves the size of the query in terms of number of operations and, in particular, number of joins. Standard techniques employed in off-the-shelf relational database management systems are effective for optimizing and processing queries on U-relations. In our experiments we show that query evaluation on U-relations scales to large amounts of data with high degrees of uncertainty.",
"title": ""
}
] |
[
{
"docid": "b39d393c8fd817f487e8bdfd59d03a55",
"text": "This paper gives an overview of the upcoming IEEE Gigabit Wireless LAN amendments, i.e. IEEE 802.11ac and 802.11ad. Both standard amendments advance wireless networking throughput beyond gigabit rates. 802.11ac adds multi-user access techniques in the form of downlink multi-user (DL MU) multiple input multiple output (MIMO)and 80 and 160 MHz channels in the 5 GHz band for applications such as multiple simultaneous video streams throughout the home. 802.11ad takes advantage of the large swath of available spectrum in the 60 GHz band and defines protocols to enable throughput intensive applications such as wireless I/O or uncompressed video. New waveforms for 60 GHz include single carrier and orthogonal frequency division multiplex (OFDM). Enhancements beyond the new 60 GHz PHY include Personal Basic Service Set (PBSS) operation, directional medium access, and beamforming. We describe 802.11ac channelization, PHY design, MAC modifications, and DL MU MIMO. For 802.11ad, the new PHY layer, MAC enhancements, and beamforming are presented.",
"title": ""
},
{
"docid": "afa31fe73b190845f65a5e163b062acf",
"text": "Spatial variability in a crop field creates a need for precision agriculture. Economical and rapid means of identifying spatial variability is obtained through the use of geotechnology (remotely sensed images of the crop field, image processing, GIS modeling approach, and GPS usage) and data mining techniques for model development. Higher-end image processing techniques are followed to establish more precision. The goal of this paper was to investigate the strength of key spectral vegetation indices for agricultural crop yield prediction using neural network techniques. Four widely used spectral indices were investigated in a study of irrigated corn crop yields in the Oakes Irrigation Test Area research site of North Dakota, USA. These indices were: (a) red and near-infrared (NIR) based normalized difference vegetation index (NDVI), (b) green and NIR based green vegetation index (GVI), (c) red and NIR based soil adjusted vegetation index (SAVI), and (d) red and NIR based perpendicular vegetation index (PVI). These four indices were investigated for corn yield during 3 years (1998, 1999, and 2001) and for the pooled data of these 3 years. Initially, Back-propagation Neural Network (BPNN) models were developed, including 16 models (4 indices * 4 years including the data from the pooled years) to test for the efficiency determination of those four vegetation indices in corn crop yield prediction. The corn yield was best predicted using BPNN models that used the means and standard deviations of PVI grid images. In all three years, it provided higher prediction accuracies, OPEN ACCESS Remote Sensing 2010, 2 674 coefficient of determination (r), and lower standard error of prediction than the models involving GVI, NDVI, and SAVI image information. The GVI, NDVI, and SAVI models for all three years provided average testing prediction accuracies of 24.26% to 94.85%, 19.36% to 95.04%, and 19.24% to 95.04%, respectively while the PVI models for all three years provided average testing prediction accuracies of 83.50% to 96.04%. The PVI pool model provided better average testing prediction accuracy of 94% with respect to other vegetation models, for which it ranged from 89–93%. Similarly, the PVI pool model provided coefficient of determination (r) value of 0.45 as compared to 0.31–0.37 for other index models. Log10 data transformation technique was used to enhance the prediction ability of the PVI models of years 1998, 1999, and 2001 as it was chosen as the preferred index. Another model (Transformed PVI (Pool)) was developed using the log10 transformed PVI image information to show its global application. The transformed PVI models provided average corn yield prediction accuracies of 90%, 97%, and 98% for years 1998, 1999, and 2001, respectively. The pool PVI transformed model provided as average testing accuracy of 93% along with r value of 0.72 and standard error of prediction of 0.05 t/ha.",
"title": ""
},
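The four indices named in the passage above can be computed per pixel from the red, green, and NIR bands; a minimal NumPy sketch follows. The SAVI soil factor and the PVI soil-line slope and intercept are assumed demonstration values, and the BPNN modelling itself is not shown.

```python
import numpy as np

red, nir, green = (np.random.rand(64, 64) for _ in range(3))   # placeholder reflectance bands

ndvi = (nir - red) / (nir + red + 1e-9)              # normalized difference vegetation index
gvi  = (nir - green) / (nir + green + 1e-9)          # green/NIR index (GNDVI-style form)
L = 0.5                                              # assumed soil adjustment factor
savi = (1 + L) * (nir - red) / (nir + red + L)       # soil adjusted vegetation index
a, b = 1.0, 0.0                                      # assumed soil-line slope and intercept
pvi  = (nir - a * red - b) / np.sqrt(1 + a ** 2)     # perpendicular vegetation index

# Per-image features of the kind the abstract feeds to the BPNN: mean and std of the PVI grid.
features = [pvi.mean(), pvi.std()]
print(features)
```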
{
"docid": "39d6a07bc7065499eb4cb0d8adb8338a",
"text": "This paper proposes a DNS Name Autoconfiguration (called DNSNA) for not only the global DNS names, but also the local DNS names of Internet of Things (IoT) devices. Since there exist so many devices in the IoT environments, it is inefficient to manually configure the Domain Name System (DNS) names of such IoT devices. By this scheme, the DNS names of IoT devices can be autoconfigured with the device's category and model in IPv6-based IoT environments. This DNS name lets user easily identify each IoT device for monitoring and remote-controlling in IoT environments. In the procedure to generate and register an IoT device's DNS name, the standard protocols of Internet Engineering Task Force (IETF) are used. Since the proposed scheme resolves an IoT device's DNS name into an IPv6 address in unicast through an authoritative DNS server, it generates less traffic than Multicast DNS (mDNS), which is a legacy DNS application for the DNS name service in IoT environments. Thus, the proposed scheme is more appropriate in global IoT networks than mDNS. This paper explains the design of the proposed scheme and its service scenario, such as smart road and smart home. The results of the simulation prove that our proposal outperforms the legacy scheme in terms of energy consumption.",
"title": ""
},
{
"docid": "c78c7b867a74d81afea11456b793cb52",
"text": "The problem of finding conflict-free trajectories for multiple agents of identical circular shape, operating in shared 2D workspace, is addressed in the paper and decoupled, e.g., prioritized, approach is used to solve this problem. Agents’ workspace is tessellated into the square grid on which anyangle moves are allowed, e.g. each agent can move into an arbitrary direction as long as this move follows the straight line segment whose endpoints are tied to the distinct grid elements. A novel any-angle planner based on Safe Interval Path Planning (SIPP) algorithm is proposed to find trajectories for an agent moving amidst dynamic obstacles (other agents) on a grid. This algorithm is then used as part of a prioritized multi-agent planner AA-SIPP(m). On the theoretical side, we show that AA-SIPP(m) is complete under well-defined conditions. On the experimental side, in simulation tests with up to 250 agents involved, we show that our planner finds much better solutions in terms of cost (up to 20%) compared to the planners relying on cardinal moves only.",
"title": ""
},
{
"docid": "39d15901cd5fbd1629d64a165a94c5f5",
"text": "This paper shows how to use modular Marx multilevel converter diode (M3CD) modules to apply unipolar or bipolar high-voltage pulses for pulsed power applications. The M3CD cells allow the assembly of a multilevel converter without needing complex algorithms and parameter measurement to balance the capacitor voltages. This paper also explains how to supply all the modular cells in order to ensure galvanic isolation between control circuits and power circuits. The experimental results for a generator with seven levels, and unipolar and bipolar pulses into resistive, inductive, and capacitive loads are presented.",
"title": ""
},
{
"docid": "a478928c303153172133d805ac35c6cc",
"text": "Chest X-ray is one of the most accessible medical imaging technique for diagnosis of multiple diseases. With the availability of ChestX-ray14, which is a massive dataset of chest X-ray images and provides annotations for 14 thoracic diseases; it is possible to train Deep Convolutional Neural Networks (DCNN) to build Computer Aided Diagnosis (CAD) systems. In this work, we experiment a set of deep learning models and present a cascaded deep neural network that can diagnose all 14 pathologies better than the baseline and is competitive with other published methods. Our work provides the quantitative results to answer following research questions for the dataset: 1) What loss functions to use for training DCNN from scratch on ChestXray14 dataset that demonstrates high class imbalance and label co occurrence? 2) How to use cascading to model label dependency and to improve accuracy of the deep learning model?",
"title": ""
},
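One common answer to the class-imbalance question raised in the passage above is a positive-class-weighted binary cross-entropy for multi-label classification; the sketch below illustrates that idea only and is not the paper's model. The label matrix and logits are random placeholders.

```python
import torch
import torch.nn as nn

num_classes = 14
labels = (torch.rand(1000, num_classes) < 0.1).float()    # placeholder multi-label targets
pos_counts = labels.sum(dim=0).clamp(min=1)
neg_counts = labels.shape[0] - pos_counts
pos_weight = neg_counts / pos_counts                       # up-weight rare positive findings

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(32, num_classes, requires_grad=True)  # stand-in for DCNN outputs
loss = criterion(logits, labels[:32])
loss.backward()
print(float(loss))
```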
{
"docid": "799517016245ffa33a06795b26e308cc",
"text": "The goal of this ”proyecto fin de carrera” was to produce a review of the face detection and face recognition literature as comprehensive as possible. Face detection was included as a unavoidable preprocessing step for face recogntion, and as an issue by itself, because it presents its own difficulties and challenges, sometimes quite different from face recognition. We have soon recognized that the amount of published information is unmanageable for a short term effort, such as required of a PFC, so in agreement with the supervisor we have stopped at a reasonable time, having reviewed most conventional face detection and face recognition approaches, leaving advanced issues, such as video face recognition or expression invariances, for the future work in the framework of a doctoral research. I have tried to gather much of the mathematical foundations of the approaches reviewed aiming for a self contained work, which is, of course, rather difficult to produce. My supervisor encouraged me to follow formalism as close as possible, preparing this PFC report more like an academic report than an engineering project report.",
"title": ""
},
{
"docid": "d18fc16268e6853cef5002c147ae9827",
"text": "Ant Colony Extended (ACE) is a novel algorithm belonging to the general Ant Colony Optimisation (ACO) framework. Two specific features of ACE are: The division of tasks between two kinds of ants, namely patrollers and foragers, and the implementation of a regulation policy to control the number of each kind of ant during the searching process. This paper explores the performance of ACE in the context of the Travelling Salesman Problem (TSP), a classical combinatorial optimisation problem. The results are compared with the results of two well known ACO algorithms: ACS and MMAS.",
"title": ""
},
{
"docid": "4c004745828100f6ccc6fd660ee93125",
"text": "Steganography has been proposed as a new alternative technique to enforce data security. Lately, novel and versatile audio steganographic methods have been proposed. A perfect audio Steganographic technique aim at embedding data in an imperceptible, robust and secure way and then extracting it by authorized people. Hence, up to date the main challenge in digital audio steganography is to obtain robust high capacity steganographic systems. Leaning towards designing a system that ensures high capacity or robustness and security of embedded data has led to great diversity in the existing steganographic techniques. In this paper, we present a current state of art literature in digital audio steganographic techniques. We explore their potentials and limitations to ensure secure communication. A comparison and an evaluation for the reviewed techniques is also presented in this paper.",
"title": ""
},
{
"docid": "7259530c42f4ba91155284ce909d25a6",
"text": "We investigate how information leakage reduces computational entropy of a random variable X. Recall that HILL and metric computational entropy are parameterized by quality (how distinguishable is X from a variable Z that has true entropy) and quantity (how much true entropy is there in Z). We prove an intuitively natural result: conditioning on an event of probability p reduces the quality of metric entropy by a factor of p and the quantity of metric entropy by log2 1/p (note that this means that the reduction in quantity and quality is the same, because the quantity of entropy is measured on logarithmic scale). Our result improves previous bounds of Dziembowski and Pietrzak (FOCS 2008), where the loss in the quantity of entropy was related to its original quality. The use of metric entropy simplifies the analogous the result of Reingold et. al. (FOCS 2008) for HILL entropy. Further, we simplify dealing with information leakage by investigating conditional metric entropy. We show that, conditioned on leakage of λ bits, metric entropy gets reduced by a factor 2 in quality and λ in quantity. Our formulation allow us to formulate a “chain rule” for leakage on computational entropy. We show that conditioning on λ bits of leakage reduces conditional metric entropy by λ bits. This is the same loss as leaking from unconditional metric entropy. This result makes it easy to measure entropy even after several rounds of information leakage.",
"title": ""
},
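A schematic LaTeX rendering of the two quantitative statements in the passage above; the shorthand H^metric_{ε,s} (metric entropy of quality ε against size-s distinguishers) is an assumed notation for illustration, not necessarily the paper's.

```latex
% Conditioning on an event E of probability p: quality degrades by a factor p,
% quantity drops by log2(1/p).
H^{\mathrm{metric}}_{\varepsilon/p,\, s}(X \mid E) \;\ge\; H^{\mathrm{metric}}_{\varepsilon,\, s}(X) - \log_2\tfrac{1}{p}

% Chain rule for \lambda bits of leakage Z: quality degrades by a factor 2^{\lambda},
% quantity drops by \lambda bits.
H^{\mathrm{metric}}_{2^{\lambda}\varepsilon,\, s}(X \mid Z) \;\ge\; H^{\mathrm{metric}}_{\varepsilon,\, s}(X) - \lambda
```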
{
"docid": "3d93c45e2374a7545c6dff7de0714352",
"text": "Building an interest model is the key to realize personalized text recommendation. Previous interest models neglect the fact that a user may have multiple angles of interest. Different angles of interest provide different requests and criteria for text recommendation. This paper proposes an interest model that consists of two kinds of angles: persistence and pattern, which can be combined to form complex angles. The model uses a new method to represent the long-term interest and the short-term interest, and distinguishes the interest in object and the interest in the link structure of objects. Experiments with news-scale text data show that the interest in object and the interest in link structure have real requirements, and it is effective to recommend texts according to the angles. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f741eb8ca9fb9798fb89674a0e045de9",
"text": "We investigate the issue of model uncertainty in cross-country growth regressions using Bayesian Model Averaging (BMA). We find that the posterior probability is very spread among many models suggesting the superiority of BMA over choosing any single model. Out-of-sample predictive results support this claim. In contrast with Levine and Renelt (1992), our results broadly support the more “optimistic” conclusion of Sala-i-Martin (1997b), namely that some variables are important regressors for explaining cross-country growth patterns. However, care should be taken in the methodology employed. The approach proposed here is firmly grounded in statistical theory and immediately leads to posterior and predictive inference.",
"title": ""
},
{
"docid": "9489ca5b460842d5a8a65504965f0bd5",
"text": "This article, based on a tutorial the author presented at ITC 2008, is an overview and introduction to mixed-signal production test. The article focuses on the fundamental techniques and procedures in production test and explores key issues confronting the industry.",
"title": ""
},
{
"docid": "ba1b3fb5f147b5af173e5f643a2794e0",
"text": "The objective of this study is to examine how personal factors such as lifestyle, personality, and economic situations affect the consumer behavior of Malaysian university students. A quantitative approach was adopted and a self-administered questionnaire was distributed to collect data from university students. Findings illustrate that ‘personality’ influences the consumer behavior among Malaysian university student. This study also noted that the economic situation had a negative relationship with consumer behavior. Findings of this study improve our understanding of consumer behavior of Malaysian University Students. The findings of this study provide valuable insights in identifying and taking steps to improve on the services, ambience, and needs of the student segment of the Malaysian market.",
"title": ""
},
{
"docid": "e1a4e8b8c892f1e26b698cd9fd37c3db",
"text": "Social networks such as Facebook, MySpace, and Twitter have become increasingly important for reaching millions of users. Consequently, spammers are increasing using such networks for propagating spam. Existing filtering techniques such as collaborative filters and behavioral analysis filters are able to significantly reduce spam, each social network needs to build its own independent spam filter and support a spam team to keep spam prevention techniques current. We propose a framework for spam detection which can be used across all social network sites. There are numerous benefits of the framework including: 1) new spam detected on one social network, can quickly be identified across social networks; 2) accuracy of spam detection will improve with a large amount of data from across social networks; 3) other techniques (such as blacklists and message shingling) can be integrated and centralized; 4) new social networks can plug into the system easily, preventing spam at an early stage. We provide an experimental study of real datasets from social networks to demonstrate the flexibility and feasibility of our framework.",
"title": ""
},
{
"docid": "830588b6ff02a05b4d76b58a3e4e7c44",
"text": "The integration of GIS and multicriteria decision analysis has attracted significant interest over the last 15 years or so. This paper surveys the GISbased multicriteria decision analysis (GIS-MCDA) approaches using a literature review and classification of articles from 1990 to 2004. An electronic search indicated that over 300 articles appeared in refereed journals. The paper provides taxonomy of those articles and identifies trends and developments in GISMCDA.",
"title": ""
},
{
"docid": "8d43d25619bd80d564c7c32d2592c4ac",
"text": "Feature selection and dimensionality reduction are important steps in pattern recognition. In this paper, we propose a scheme for feature selection using linear independent component analysis and mutual information maximization method. The method is theoretically motivated by the fact that the classification error rate is related to the mutual information between the feature vectors and the class labels. The feasibility of the principle is illustrated on a synthetic dataset and its performance is demonstrated using EEG signal classification. Experimental results show that this method works well for feature selection.",
"title": ""
},
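A minimal scikit-learn sketch of the scheme described in the passage above: extract independent components with linear ICA, then rank them by mutual information with the class labels and keep the top-k. The dataset, component count, and k are placeholders, not the paper's EEG setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import FastICA
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

ica = FastICA(n_components=10, random_state=0)
S = ica.fit_transform(X)                       # estimated independent components (sources)

mi = mutual_info_classif(S, y, random_state=0) # mutual information of each component with labels
k = 4                                          # assumed number of components to keep
selected = np.argsort(mi)[::-1][:k]
X_selected = S[:, selected]
print("selected components:", selected, "MI:", mi[selected])
```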
{
"docid": "33c89872c2a1e5b1b2417c58af616560",
"text": "We provide a new proof of the linear convergence of the alternating direction method of multipliers (ADMM) when one of the objective terms is strongly convex. Our proof is based on a framework for analyzing optimization algorithms introduced in Lessard et al. [21], reducing algorithm convergence to verifying the stability of a dynamical system. This approach generalizes a number of existing results and obviates any assumptions about specific choices of algorithm parameters. On a numerical example, we demonstrate that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances. We complement our upper bound by constructing a nearly-matching lower bound on the worst-case rate of convergence.",
"title": ""
},
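For readers unfamiliar with the iteration being analyzed in the passage above, here is a generic textbook ADMM sketch for the lasso, minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x - z = 0, with A chosen tall so the quadratic term is strongly convex as in the paper's setting. It illustrates the algorithm only, not the paper's parameter-selection procedure; sizes and penalties are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 150, 60
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam, rho = 0.1, 1.0

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
Atb = A.T @ b
M = A.T @ A + rho * np.eye(n)    # x-update solves (A^T A + rho I) x = A^T b + rho (z - u)
for _ in range(200):
    x = np.linalg.solve(M, Atb + rho * (z - u))
    z = soft_threshold(x + u, lam / rho)   # proximal step for the l1 term
    u = u + x - z                          # scaled dual update

objective = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(z).sum()
print("objective:", objective, "nonzeros in z:", int((np.abs(z) > 1e-6).sum()))
```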
{
"docid": "35a2d7f4b48ffa57951f4c32175dd521",
"text": "This paper introduces the settlement generation competition for Minecraft, the first part of the Generative Design in Minecraft challenge. The settlement generation competition is about creating Artificial Intelligence (AI) agents that can produce functional, aesthetically appealing and believable settlements adapted to a given Minecraft map---ideally at a level that can compete with human created designs. The aim of the competition is to advance procedural content generation for games, especially in overcoming the challenges of adaptive and holistic PCG. The paper introduces the technical details of the challenge, but mostly focuses on what challenges this competition provides and why they are scientifically relevant.",
"title": ""
},
{
"docid": "4958f4a85b531a2d5a846d1f6eb1a5a3",
"text": "The n-channel lateral double-diffused metal-oxide- semiconductor (nLDMOS) devices in high-voltage (HV) technologies are known to have poor electrostatic discharge (ESD) robustness. To improve the ESD robustness of nLDMOS, a co-design method combining a new waffle layout structure and a trigger circuit is proposed to fulfill the body current injection technique in this work. The proposed layout and circuit co-design method on HV nLDMOS has successfully been verified in a 0.5-¿m 16-V bipolar-CMOS-DMOS (BCD) process and a 0.35- ¿m 24-V BCD process without using additional process modification. Experimental results through transmission line pulse measurement and failure analyses have shown that the proposed body current injection technique can significantly improve the ESD robustness of HV nLDMOS.",
"title": ""
}
] |
scidocsrr
|
5464eda4baec792897d13e706bc05479
|
Barzilai-Borwein Step Size for Stochastic Gradient Descent
|
[
{
"docid": "34459005eaf3a5e5bc9e467ecdf9421c",
"text": "for recovering sparse solutions to an undetermined system of linear equations Ax = b. The algorithm is divided into two stages that are performed repeatedly. In the first stage a first-order iterative method called “shrinkage” yields an estimate of the subset of components of x likely to be nonzero in an optimal solution. Restricting the decision variables x to this subset and fixing their signs at their current values reduces the l1-norm ‖x‖1 to a linear function of x. The resulting subspace problem, which involves the minimization of a smaller and smooth quadratic function, is solved in the second phase. Our code FPC AS embeds this basic two-stage algorithm in a continuation (homotopy) approach by assigning a decreasing sequence of values to μ. This code exhibits state-of-the-art performance both in terms of its speed and its ability to recover sparse signals. It can even recover signals that are not as sparse as required by current compressive sensing theory.",
"title": ""
},
{
"docid": "01835769f2dc9391051869374e200a6a",
"text": "Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (lscr 2) error term added to a sparsity-inducing (usually lscr1) regularizater. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard lscr2-lscr1 case, our framework yields efficient solution techniques for other regularizers, such as an lscrinfin norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard lscr2-lscr1 problem, as well as being efficient on problems with other separable regularization terms.",
"title": ""
}
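Both passages above build on the basic iterative shrinkage/thresholding step for the ℓ2-ℓ1 problem; the sketch below shows a plain ISTA iteration for 0.5*||Ax - b||^2 + mu*||x||_1 only, omitting the subspace, continuation, and adaptive step-size machinery the papers actually propose. Problem sizes and the penalty mu are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, sparsity = 80, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, sparsity, replace=False)] = rng.standard_normal(sparsity)
b = A @ x_true
mu = 0.01

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b)
    x = soft(x - step * grad, step * mu)  # gradient step followed by shrinkage

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```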
] |
[
{
"docid": "0666baa7be39ef1887c7f8ce04aaa957",
"text": "BACKGROUND\nEnsuring health worker job satisfaction and motivation are important if health workers are to be retained and effectively deliver health services in many developing countries, whether they work in the public or private sector. The objectives of the paper are to identify important aspects of health worker satisfaction and motivation in two Indian states working in public and private sectors.\n\n\nMETHODS\nCross-sectional surveys of 1916 public and private sector health workers in Andhra Pradesh and Uttar Pradesh, India, were conducted using a standardized instrument to identify health workers' satisfaction with key work factors related to motivation. Ratings were compared with how important health workers consider these factors.\n\n\nRESULTS\nThere was high variability in the ratings for areas of satisfaction and motivation across the different practice settings, but there were also commonalities. Four groups of factors were identified, with those relating to job content and work environment viewed as the most important characteristics of the ideal job, and rated higher than a good income. In both states, public sector health workers rated \"good employment benefits\" as significantly more important than private sector workers, as well as a \"superior who recognizes work\". There were large differences in whether these factors were considered present on the job, particularly between public and private sector health workers in Uttar Pradesh, where the public sector fared consistently lower (P < 0.01). Discordance between what motivational factors health workers considered important and their perceptions of actual presence of these factors were also highest in Uttar Pradesh in the public sector, where all 17 items had greater discordance for public sector workers than for workers in the private sector (P < 0.001).\n\n\nCONCLUSION\nThere are common areas of health worker motivation that should be considered by managers and policy makers, particularly the importance of non-financial motivators such as working environment and skill development opportunities. But managers also need to focus on the importance of locally assessing conditions and managing incentives to ensure health workers are motivated in their work.",
"title": ""
},
{
"docid": "126b62a0ae62c76b43b4fb49f1bf05cd",
"text": "OBJECTIVE\nThe aim of the study was to evaluate efficacy of fractional CO2 vaginal laser treatment (Laser, L) and compare it to local estrogen therapy (Estriol, E) and the combination of both treatments (Laser + Estriol, LE) in the treatment of vulvovaginal atrophy (VVA).\n\n\nMETHODS\nA total of 45 postmenopausal women meeting inclusion criteria were randomized in L, E, or LE groups. Assessments at baseline, 8 and 20 weeks, were conducted using Vaginal Health Index (VHI), Visual Analog Scale for VVA symptoms (dyspareunia, dryness, and burning), Female Sexual Function Index, and maturation value (MV) of Meisels.\n\n\nRESULTS\nForty-five women were included and 3 women were lost to follow-up. VHI average score was significantly higher at weeks 8 and 20 in all study arms. At week 20, the LE arm also showed incremental improvement of VHI score (P = 0.01). L and LE groups showed a significant improvement of dyspareunia, burning, and dryness, and the E arm only of dryness (P < 0.001). LE group presented significant improvement of total Female Sex Function Index (FSFI) score (P = 0.02) and individual domains of pain, desire, and lubrication. In contrast, the L group showed significant worsening of pain domain in FSFI (P = 0.04), but FSFI total scores were comparable in all treatment arms at week 20.\n\n\nCONCLUSIONS\nCO2 vaginal laser alone or in combination with topical estriol is a good treatment option for VVA symptoms. Sexual-related pain with vaginal laser treatment might be of concern.",
"title": ""
},
{
"docid": "7c593a9fc4de5beb89022f7d438ffcb8",
"text": "The design of a low power low drop out voltage regulator with no off-chip capacitor and fast transient responses is presented in this paper. The LDO regulator uses a combination of a low power operational trans-conductance amplifier and comparators to drive the gate of the PMOS pass element. The amplifier ensures stability and accurate setting of the output voltage in addition to power supply rejection. The comparators ensure fast response of the regulator to any load or line transients. A settling time of less than 200ns is achieved in response to a load transient step of 50mA with a rise time of 100ns with an output voltage spike of less than 200mV at an output voltage of 3.25 V. A line transient step of 1V with a rise time of 100ns results also in a settling time of less than 400ns with a voltage spike of less than 100mV when the output voltage is 2.6V. The regulator is fabricated using a standard 0.35μm CMOS process and consumes a quiescent current of only 26 μA.",
"title": ""
},
{
"docid": "5d1059849fccf79d87be7df722475d8f",
"text": "This study provides operational guidance for using naïve Bayes Bayesian network (BN) models in bankruptcy prediction. First, we suggest a heuristic method that guides the selection of bankruptcy predictors from a pool of potential variables. The method is based upon the assumption that the joint distribution of the variables is multivariate normal. Variables are selected based upon correlations and partial correlations information. A naïve Bayes model is developed using the proposed heuristic method and is found to perform well based upon a tenfold analysis, for both samples with complete information and samples with incomplete information. Second, we analyze whether the number of states into which continuous variables are discretized has an impact on a naïve Bayes model performance in bankruptcy prediction. We compare the model’s performance when continuous variables are discretized into two, three, ..., ten, fifteen, and twenty states. Based upon a relatively large training sample, our results show that the naïve Bayes model’s performance increases when the number of states for discretization increases from two to three, and from three to four. Surprisingly, when the number of states increases to more than four, the model’s overall performance neither increases nor decreases. It is possible that the relative large size of training sample used by this study prevents the phenomenon of over fitting from occurring. Finally, we experiment whether modeling continuous variables with continuous distributions instead of discretizing them can improve the naïve Bayes model’s performance. Our finding suggests that this is not true. One possible reason is that continuous distributions tested by this study do not represent well the underlying distributions of empirical data. More importantly, some results of this study could also benefit the implementation of naïve Bayes models in business decision contexts other than bankruptcy prediction.",
"title": ""
},
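The discretization question discussed in the passage above can be illustrated with a small scikit-learn sketch: continuous financial ratios are binned into a chosen number of states and a categorical naive Bayes model is fit, with accuracy compared across bin counts under ten-fold cross-validation. The data is synthetic and the paper's variable-selection heuristic is not shown.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import CategoricalNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

X, y = make_classification(n_samples=1000, n_features=8, n_informative=4, random_state=0)

for n_states in (2, 3, 4, 5, 10):
    model = make_pipeline(
        KBinsDiscretizer(n_bins=n_states, encode="ordinal", strategy="quantile"),
        CategoricalNB(min_categories=n_states),   # guard against unseen bins in CV folds
    )
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{n_states:>2} states per variable: mean accuracy {acc:.3f}")
```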
{
"docid": "19361b2d5e096f26e650b25b745e5483",
"text": "Multispectral pedestrian detection has attracted increasing attention from the research community due to its crucial competence for many around-the-clock applications (e.g., video surveillance and autonomous driving), especially under insufficient illumination conditions. We create a human baseline over the KAIST dataset and reveal that there is still a large gap between current top detectors and human performance. To narrow this gap, we propose a network fusion architecture, which consists of a multispectral proposal network to generate pedestrian proposals, and a subsequent multispectral classification network to distinguish pedestrian instances from hard negatives. The unified network is learned by jointly optimizing pedestrian detection and semantic segmentation tasks. The final detections are obtained by integrating the outputs from different modalities as well as the two stages. The approach significantly outperforms state-of-the-art methods on the KAIST dataset while remain fast. Additionally, we contribute a sanitized version of training annotations for the KAIST dataset, and examine the effects caused by different kinds of annotation errors. Future research of this problem will benefit from the sanitized version which eliminates the interference of annotation errors.",
"title": ""
},
{
"docid": "1593fd6f9492adc851c709e3dd9b3c5f",
"text": "This paper addresses the problem of extracting keyphrases from scientific articles and categorizing them as corresponding to a task, process, or material. We cast the problem as sequence tagging and introduce semi-supervised methods to a neural tagging model, which builds on recent advances in named entity recognition. Since annotated training data is scarce in this domain, we introduce a graph-based semi-supervised algorithm together with a data selection scheme to leverage unannotated articles. Both inductive and transductive semi-supervised learning strategies outperform state-of-the-art information extraction performance on the 2017 SemEval Task 10 ScienceIE task.",
"title": ""
},
{
"docid": "0fc5441a3e8589b1bd15d56830c4ef79",
"text": "DevOps is an emerging paradigm to actively foster the collaboration between system developers and operations in order to enable efficient end-to-end automation of software deployment and management processes. DevOps is typically combined with Cloud computing, which enables rapid, on-demand provisioning of underlying resources such as virtual servers, storage, or database instances using APIs in a self-service manner. Today, an ever-growing amount of DevOps tools, reusable artifacts such as scripts, and Cloud services are available to implement DevOps automation. Thus, informed decision making on the appropriate approach (es) for the needs of an application is hard. In this work we present a collaborative and holistic approach to capture DevOps knowledge in a knowledgebase. Beside the ability to capture expert knowledge and utilize crowd sourcing approaches, we implemented a crawling framework to automatically discover and capture DevOps knowledge. Moreover, we show how this knowledge is utilized to deploy and operate Cloud applications.",
"title": ""
},
{
"docid": "8c853251e0fb408c829e6f99a581d4cf",
"text": "We consider a simple and overarching representation for permutation-invariant functions of sequences (or set functions). Our approach, which we call Janossy pooling, expresses a permutation-invariant function as the average of a permutation-sensitive function applied to all reorderings of the input sequence. This allows us to leverage the rich and mature literature on permutation-sensitive functions to construct novel and flexible permutation-invariant functions. If carried out naively, Janossy pooling can be computationally prohibitive. To allow computational tractability, we consider three kinds of approximations: canonical orderings of sequences, functions with k-order interactions, and stochastic optimization algorithms with random permutations. Our framework unifies a variety of existing work in the literature, and suggests possible modeling and algorithmic extensions. We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.",
"title": ""
},
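The preceding abstract defines Janossy pooling as the average of a permutation-sensitive function over all reorderings of the input. A minimal sketch of that definition, feasible only for tiny inputs, is given below; the choice of the permutation-sensitive function is arbitrary, and none of the paper's tractable approximations (canonical orderings, k-order interactions, random permutations) is implemented here.

```python
from itertools import permutations

def janossy_pool(f, sequence):
    """Average a permutation-sensitive function f over all reorderings of the input.

    Exhaustive enumeration is only feasible for tiny inputs; the paper discusses
    canonical orderings, k-order interactions, and random permutations as
    tractable approximations.
    """
    perms = list(permutations(sequence))
    return sum(f(p) for p in perms) / len(perms)

# A deliberately order-sensitive function: a position-weighted sum.
def weighted_sum(seq):
    return sum((i + 1) * x for i, x in enumerate(seq))

print(janossy_pool(weighted_sum, (1.0, 2.0, 3.0)))
# The pooled value is identical for any ordering of the same multiset:
print(janossy_pool(weighted_sum, (3.0, 1.0, 2.0)))
```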
{
"docid": "c65050bb98a071fa8b60fa262536a476",
"text": "Proliferative periostitis is a pathologic lesion that displays an osteo-productive and proliferative inflammatory response of the periosteum to infection or other irritation. This lesion is a form of chronic osteomyelitis that is often asymptomatic, occurring primarily in children, and found only in the mandible. The lesion can be odontogenic or non-odontogenic in nature. A 12 year-old boy presented with an unusual odontogenic proliferative periostitis that originated from the lower left first molar, however, the radiographic radiolucent area and proliferative response were discovered at the apices of the lower left second molar. The periostitis was treated by single-visit non-surgical endodontic treatment of lower left first molar without antibiotic therapy. The patient has been recalled regularly; the lesion had significantly reduced in size 3-months postoperatively. Extraoral symmetry occurred at approximately one year recall. At the last visit, 2 years after initial treatment, no problems or signs of complications have occurred; the radiographic examination revealed complete resolution of the apical lesion and apical closure of the lower left second molar. Odontogenic proliferative periostitis can be observed at the adjacent normal tooth. Besides, this case demonstrates that non-surgical endodontics is a viable treatment option for management of odontogenic proliferative periostitis.",
"title": ""
},
{
"docid": "9c799b4d771c724969be7b392697ebee",
"text": "Search engines need to model user satisfaction to improve their services. Since it is not practical to request feedback on searchers' perceptions and search outcomes directly from users, search engines must estimate satisfaction from behavioral signals such as query refinement, result clicks, and dwell times. This analysis of behavior in the aggregate leads to the development of global metrics such as satisfied result clickthrough (typically operationalized as result-page clicks with dwell time exceeding a particular threshold) that are then applied to all searchers' behavior to estimate satisfac-tion levels. However, satisfaction is a personal belief and how users behave when they are satisfied can also differ. In this paper we verify that searcher behavior when satisfied and dissatisfied is indeed different among individual searchers along a number of dimensions. As a result, we introduce and evaluate learned models of satisfaction for individual searchers and searcher cohorts. Through experimentation via logs from a large commercial Web search engine, we show that our proposed models can predict search satisfaction more accurately than a global baseline that applies the same satisfaction model across all users. Our findings have implications for the study and application of user satisfaction in search systems.",
"title": ""
},
{
"docid": "ca3a0e7bca08fc943d432179766f4ccf",
"text": "BACKGROUND\nMost errors in a clinical chemistry laboratory are due to preanalytical errors. Preanalytical variability of biospecimens can have significant effects on downstream analyses, and controlling such variables is therefore fundamental for the future use of biospecimens in personalized medicine for diagnostic or prognostic purposes.\n\n\nCONTENT\nThe focus of this review is to examine the preanalytical variables that affect human biospecimen integrity in biobanking, with a special focus on blood, saliva, and urine. Cost efficiency is discussed in relation to these issues.\n\n\nSUMMARY\nThe quality of a study will depend on the integrity of the biospecimens. Preanalytical preparations should be planned with consideration of the effect on downstream analyses. Currently such preanalytical variables are not routinely documented in the biospecimen research literature. Future studies using biobanked biospecimens should describe in detail the preanalytical handling of biospecimens and analyze and interpret the results with regard to the effects of these variables.",
"title": ""
},
{
"docid": "28a69b2e02ca56c6ca867749b2129295",
"text": "The popular view of software engineering focuses on managing teams of people to produce large systems. This paper addresses a different angle of software engineering, that of development for re-use and portability. We consider how an essential part of most software products - the user interface - can be successfully engineered so that it can be portable across multiple platforms and on multiple devices. Our research has identified the structure of the problem domain, and we have filled in some of the answers. We investigate promising solutions from the model-driven frameworks of the 1990s, to modern XML-based specification notations (Views, XUL, XIML, XAML), multi-platform toolkits (Qt and Gtk), and our new work, Mirrors which pioneers reflective libraries. The methodology on which Views and Mirrors is based enables existing GUI libraries to be transported to new operating systems. The paper also identifies cross-cutting challenges related to education, standardization and the impact of mobile and tangible devices on the future design of UIs. This paper seeks to position user interface construction as an important challenge in software engineering, worthy of ongoing research.",
"title": ""
},
{
"docid": "ac2e1a27ae05819d213efe7d51d1b988",
"text": "Gigantic rates of data production in the era of Big Data, Internet of Thing (IoT) / Internet of Everything (IoE), and Cyber Physical Systems (CSP) pose incessantly escalating demands for massive data processing, storage, and transmission while continuously interacting with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, such systems need to support not only the high performance capabilities at tight power/energy envelop, but also need to be intelligent/cognitive, self-learning, and robust. As a result, a hype in the artificial intelligence research (e.g., deep learning and other machine learning techniques) has surfaced in numerous communities. This paper discusses the challenges and opportunities for building energy-efficient and adaptive architectures for machine learning. In particular, we focus on brain-inspired emerging computing paradigms, such as approximate computing; that can further reduce the energy requirements of the system. First, we guide through an approximate computing based methodology for development of energy-efficient accelerators, specifically for convolutional Deep Neural Networks (DNNs). We show that in-depth analysis of datapaths of a DNN allows better selection of Approximate Computing modules for energy-efficient accelerators. Further, we show that a multi-objective evolutionary algorithm can be used to develop an adaptive machine learning system in hardware. At the end, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.",
"title": ""
},
{
"docid": "8c221ad31eda07f1628c3003a8c12724",
"text": "This paper presents a novel unsupervised domain adaptation method for cross-domain visual recognition. We propose a unified framework that reduces the shift between domains both statistically and geometrically, referred to as Joint Geometrical and Statistical Alignment (JGSA). Specifically, we learn two coupled projections that project the source domain and target domain data into low-dimensional subspaces where the geometrical shift and distribution shift are reduced simultaneously. The objective function can be solved efficiently in a closed form. Extensive experiments have verified that the proposed method significantly outperforms several state-of-the-art domain adaptation methods on a synthetic dataset and three different real world cross-domain visual recognition tasks.",
"title": ""
},
{
"docid": "748b470bfbd62b5ddf747e3ef989e66d",
"text": "Purpose – This paper sets out to integrate research on knowledge management with the dynamic capabilities approach. This paper will add to the understanding of dynamic capabilities by demonstrating that dynamic capabilities can be seen as composed of concrete and well-known knowledge management activities. Design/methodology/approach – This paper is based on a literature review focusing on key knowledge management processes and activities as well as the concept of dynamic capabilities, the paper connects these two approaches. The analysis is centered on knowledge management activities which then are compiled into dynamic capabilities. Findings – In the paper eight knowledge management activities are identified; knowledge creation, acquisition, capture, assembly, sharing, integration, leverage, and exploitation. These activities are assembled into the three dynamic capabilities of knowledge development, knowledge (re)combination, and knowledge use. The dynamic capabilities and the associated knowledge management activities create flows to and from the firm’s stock of knowledge and they support the creation and use of organizational capabilities. Practical implications – The findings in the paper demonstrate that the somewhat elusive concept of dynamic capabilities can be untangled through the use of knowledge management activities. Practicing managers struggling with the operationalization of dynamic capabilities should instead focus on the contributing knowledge management activities in order to operationalize and utilize the concept of dynamic capabilities. Originality/value – The paper demonstrates that the existing research on knowledge management can be a key contributor to increasing our understanding of dynamic capabilities. This finding is valuable for both researchers and practitioners.",
"title": ""
},
{
"docid": "3550dbe913466a675b621d476baba219",
"text": "Successful implementing and managing of change is urgently necessary for each adult educational organization. During the process, leading of the staff is becoming a key condition and the most significant factor. Beside certain personal traits of the leader, change management demands also certain leadership knowledges, skills, versatilities and behaviour which may even border on changing the organizational culture. The paper finds the significance of certain values and of organizational climate and above all the significance of leadership style which a leader will adjust to the staff and to the circumstances. The author presents a multiple qualitative case study of managing change in three adult educational organizations. The paper finds that factors of successful leading of change exist which represent an adequate approach to leading the staff during the introduction of changes in educational organizations. Its originality/value is in providing information on the important relationship between culture, leadership styles and leader’s behaviour as preconditions for successful implementing and managing of strategic change.",
"title": ""
},
{
"docid": "891bf46e2ad56387c4cf250ad3f0af08",
"text": "r 200 3 lmaden. Summary The creation of value is the core purpose and central process of economic exchange. Traditional models of value creation focus on the firm’s output and price. We present an alternative perspective, one representing the intersection of two growing streams of thought, service science and service-dominant (S-D) logic. We take the view that (1) service, the application of competences (such as knowledge and skills) by one party for the benefit of another, is the underlying basis of exchange; (2) the proper unit of analysis for service-for-service exchange is the service system, which is a configuration of resources (including people, information, and technology) connected to other systems by value propositions; and (3) service science is the study of service systems and of the cocreation of value within complex configurations of resources. We argue that value is fundamentally derived and determined in use – the integration and application of resources in a specific context – rather than in exchange – embedded in firm output and captured by price. Service systems interact through mutual service exchange relationships, improving the adaptability and survivability of all service systems engaged in exchange, by allowing integration of resources that are mutually beneficial. This argument has implications for advancing service science by identifying research questions regarding configurations and processes of value co-creation and measurements of value-in-use, and by developing its ties with economics and other service-oriented disciplines. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c0d722d72955dd1ec6df3cc24289979f",
"text": "Citing classic psychological research and a smattering of recent studies, Kassin, Dror, and Kukucka (2013) proposed the operation of a forensic confirmation bias, whereby preexisting expectations guide the evaluation of forensic evidence in a self-verifying manner. In a series of studies, we tested the hypothesis that knowing that a defendant had confessed would taint people's evaluations of handwriting evidence relative to those not so informed. In Study 1, participants who read a case summary in which the defendant had previously confessed were more likely to erroneously conclude that handwriting samples from the defendant and perpetrator were authored by the same person, and were more likely to judge the defendant guilty, compared with those in a no-confession control group. Study 2 replicated and extended these findings using a within-subjects design in which participants rated the same samples both before and after reading a case summary. These findings underscore recent critiques of the forensic sciences as subject to bias, and suggest the value of insulating forensic examiners from contextual information.",
"title": ""
},
{
"docid": "2802db74e062103d45143e8e9ad71890",
"text": "Maritime traffic monitoring is an important aspect of safety and security, particularly in close to port operations. While there is a large amount of data with variable quality, decision makers need reliable information about possible situations or threats. To address this requirement, we propose extraction of normal ship trajectory patterns that builds clusters using, besides ship tracing data, the publicly available International Maritime Organization (IMO) rules. The main result of clustering is a set of generated lanes that can be mapped to those defined in the IMO directives. Since the model also takes non-spatial attributes (speed and direction) into account, the results allow decision makers to detect abnormal patterns - vessels that do not obey the normal lanes or sail with higher or lower speeds.",
"title": ""
},
{
"docid": "bfa87a59940f6848d8d5b53b89c16735",
"text": "The over-segmentation of images into atomic regions has become a standard and powerful tool in Vision. Traditional superpixel methods, that operate at the pixel level, cannot directly capture the geometric information disseminated into the images. We propose an alternative to these methods by operating at the level of geometric shapes. Our algorithm partitions images into convex polygons. It presents several interesting properties in terms of geometric guarantees, region compactness and scalability. The overall strategy consists in building a Voronoi diagram that conforms to preliminarily detected line-segments, before homogenizing the partition by spatial point process distributed over the image gradient. Our method is particularly adapted to images with strong geometric signatures, typically man-made objects and environments. We show the potential of our approach with experiments on large-scale images and comparisons with state-of-the-art superpixel methods.",
"title": ""
}
] |
scidocsrr
|
12a0d321afbdbe6c5dac5f676d9ea587
|
Multi-objective Architecture Search for CNNs
|
[
{
"docid": "af25bc1266003202d3448c098628aee8",
"text": "Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR10, CIFAR-100, and SVHN datasets, yielding new state-ofthe-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code available at https://github.com/ uoguelph-mlrg/Cutout.",
"title": ""
}
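The cutout augmentation described in the preceding abstract simply masks out a square region of the input at a random location during training. A minimal NumPy sketch follows; the 16-pixel patch size and zero fill value are assumptions here (the paper tunes the patch size per dataset), and the authors' reference implementation is the linked repository, not this snippet.

```python
import numpy as np

def cutout(image, size=16, rng=None):
    """Zero out a size x size square at a random location of an HxWxC image.

    The square is clipped at the borders, so regions near the edge are
    only partially masked.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
    x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = image.copy()
    out[y1:y2, x1:x2] = 0
    return out

# Apply to a random CIFAR-sized image (32x32x3).
img = np.random.rand(32, 32, 3).astype(np.float32)
augmented = cutout(img, size=16)
print((augmented == 0).any(), augmented.shape)
```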
] |
[
{
"docid": "0b56f9c9ec0ce1db8dcbfd2830b2536b",
"text": "In many statistical problems, a more coarse-grained model may be suitable for population-level behaviour, whereas a more detailed model is appropriate for accurate modelling of individual behaviour. This raises the question of how to integrate both types of models. Methods such as posterior regularization follow the idea of generalized moment matching, in that they allow matching expectations between two models, but sometimes both models are most conveniently expressed as latent variable models. We propose latent Bayesian melding, which is motivated by averaging the distributions over populations statistics of both the individual-level and the population-level models under a logarithmic opinion pool framework. In a case study on electricity disaggregation, which is a type of singlechannel blind source separation problem, we show that latent Bayesian melding leads to significantly more accurate predictions than an approach based solely on generalized moment matching.",
"title": ""
},
{
"docid": "2ec0db3840965993e857b75bd87a43b7",
"text": "Light field cameras capture full spatio-angular information of the light field, and enable many novel photographic and scientific applications. It is often stated that there is a fundamental trade-off between spatial and angular resolution, but there has been limited understanding of this trade-off theoretically or numerically. Moreover, it is very difficult to evaluate the design of a light field camera because a new design is usually reported with its prototype and rendering algorithm, both of which affect resolution.\n In this article, we develop a light transport framework for understanding the fundamental limits of light field camera resolution. We first derive the prefiltering model of lenslet-based light field cameras. The main novelty of our model is in considering the full space-angle sensitivity profile of the photosensor—in particular, real pixels have nonuniform angular sensitivity, responding more to light along the optical axis rather than at grazing angles. We show that the full sensor profile plays an important role in defining the performance of a light field camera. The proposed method can model all existing lenslet-based light field cameras and allows to compare them in a unified way in simulation, independent of the practical differences between particular prototypes. We further extend our framework to analyze the performance of two rendering methods: the simple projection-based method and the inverse light transport process. We validate our framework with both flatland simulation and real data from the Lytro light field camera.",
"title": ""
},
{
"docid": "3c14ce0d697c69f554a842c1dc997d66",
"text": "We propose a novel segmentation approach based on deep convolutional encoder networks and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that has both convolutional and deconvolutional layers, and combines feature extraction and segmentation prediction in a single model. The joint training of the feature extraction and prediction layers allows the model to automatically learn features that are optimized for accuracy for any given combination of image types. In contrast to existing automatic feature learning approaches, which are typically patch-based, our model learns features from entire images, which eliminates patch selection and redundant calculations at the overlap of neighboring patches and thereby speeds up the training. Our network also uses a novel objective function that works well for segmenting underrepresented classes, such as MS lesions. We have evaluated our method on the publicly available labeled cases from the MS lesion segmentation challenge 2008 data set, showing that our method performs comparably to the state-of-theart. In addition, we have evaluated our method on the images of 500 subjects from an MS clinical trial and varied the number of training samples from 5 to 250 to show that the segmentation performance can be greatly improved by having a representative data set.",
"title": ""
},
{
"docid": "6ffbb212bec4c90c6b37a9fde3fd0b4c",
"text": "In this paper, we address a new research problem on active learning from data streams where data volumes grow continuously and labeling all data is considered expensive and impractical. The objective is to label a small portion of stream data from which a model is derived to predict newly arrived instances as accurate as possible. In order to tackle the challenges raised by data streams' dynamic nature, we propose a classifier ensembling based active learning framework which selectively labels instances from data streams to build an accurate classifier. A minimal variance principle is introduced to guide instance labeling from data streams. In addition, a weight updating rule is derived to ensure that our instance labeling process can adaptively adjust to dynamic drifting concepts in the data. Experimental results on synthetic and real-world data demonstrate the performances of the proposed efforts in comparison with other simple approaches.",
"title": ""
},
{
"docid": "cb25c3d33e6a4544ec1e938919566caa",
"text": "Context: Systematic Review (SR) is a methodology used to find and aggregate relevant existing evidence about a specific research topic of interest. It can be very time-consuming depending on the number of gathered studies that need to be analyzed by researchers. One of the relevant tools found in the literature and preliminarily evaluated by researchers of SRs is StArt, which supports the whole SR process. It has been downloaded by users from more than twenty countries. Objective: To present new features available in StArt to support SR activities. Method: Based on users' feedback and the literature, new features were implemented and are available in the tool, like the SCAS strategy, snowballing techniques, the frequency of keywords and a word cloud for search string refining, collaboration among reviewers, and the StArt online community. Results: The new features, according to users' positive feedback, make the tool more robust to support the conduct of SRs. Conclusion: StArt is a tool that has been continuously developed such that new features are often available to improve the support for the SR process. The StArt online community can improve the interaction among users, facilitating the identification of improvements and new useful features.",
"title": ""
},
{
"docid": "50840b0308e1f884b61c9f824b1bf17f",
"text": "The StreamIt programming model has been proposed to exploit parallelism in streaming applications on general purpose multi-core architectures. This model allows programmers to specify the structure of a program as a set of filters that act upon data, and a set of communication channels between them. The StreamIt graphs describe task, data and pipeline parallelism which can be exploited on modern Graphics Processing Units (GPUs), as they support abundant parallelism in hardware. In this paper, we describe the challenges in mapping StreamIt to GPUs and propose an efficient technique to software pipeline the execution of stream programs on GPUs. We formulate this problem --- both scheduling and assignment of filters to processors --- as an efficient Integer Linear Program (ILP), which is then solved using ILP solvers. We also describe a novel buffer layout technique for GPUs which facilitates exploiting the high memory bandwidth available in GPUs. The proposed scheduling utilizes both the scalar units in GPU, to exploit data parallelism, and multiprocessors, to exploit task and pipeline parallelism. Further it takes into consideration the synchronization and bandwidth limitations of GPUs, and yields speedups between 1.87X and 36.83X over a single threaded CPU.",
"title": ""
},
{
"docid": "3ae8865602c53847a0eec298c698a743",
"text": "BACKGROUND\nA low ratio of utilization of healthcare services in postpartum women may contribute to maternal deaths during the postpartum period. The maternal mortality ratio is high in the Philippines. The aim of this study was to examine the current utilization of healthcare services and the effects on the health of women in the Philippines who delivered at home.\n\n\nMETHODS\nThis was a cross-sectional analytical study, based on a self-administrated questionnaire, conducted from March 2015 to February 2016 in Muntinlupa, Philippines. Sixty-three postpartum women who delivered at home or at a facility were enrolled for this study. A questionnaire containing questions regarding characteristics, utilization of healthcare services, and abnormal symptoms during postpartum period was administered. To analyze the questionnaire data, the sample was divided into delivery at home and delivery at a facility. Chi-square test, Fisher's exact test, and Mann-Whitney U test were used.\n\n\nRESULTS\nThere were significant differences in the type of birth attendant, area of residence, monthly income, and maternal and child health book usage between women who delivered at home and those who delivered at a facility (P<0.01). There was significant difference in the utilization of antenatal checkup (P<0.01) during pregnancy, whilst there was no significant difference in utilization of healthcare services during the postpartum period. Women who delivered at home were more likely to experience feeling of irritated eyes and headaches, and continuous abdominal pain (P<0.05).\n\n\nCONCLUSION\nFinancial and environmental barriers might hinder the utilization of healthcare services by women who deliver at home in the Philippines. Low utilization of healthcare services in women who deliver at home might result in more frequent abnormal symptoms during postpartum.",
"title": ""
},
{
"docid": "1e8e4364427d18406594af9ad3a73a28",
"text": "The Internet Addiction Scale (IAS) is a self-report instrument based on the 7 Diagnostic and Statistical Manual of Mental Disorders (4th ed.; American Psychiatric Association, 1994) substance dependence criteria and 2 additional criteria recommended by Griffiths (1998). The IAS was administered to 233 undergraduates along with 4 measures pertaining to loneliness and boredom proneness. An item reliability analysis reduced the initial scale from 36 to 31 items (with a Cronbach's alpha of .95). A principal-components analysis indicated that the IAS consisted mainly of one factor. Multiple regression analyses revealed that Family and Social Loneliness and Boredom Proneness were significantly correlated with the IAS; Family and Social Loneliness uniquely predicted IAS scores. No evidence for widespread Internet addiction was found.",
"title": ""
},
{
"docid": "5093e3d152d053a9f3322b34096d3e4e",
"text": "To create conversational systems working in actual situations, it is crucial to assume that they interact with multiple agents. In this work, we tackle addressee and response selection for multi-party conversation, in which systems are expected to select whom they address as well as what they say. The key challenge of this task is to jointly model who is talking about what in a previous context. For the joint modeling, we propose two modeling frameworks: 1) static modeling and 2) dynamic modeling. To show benchmark results of our frameworks, we created a multi-party conversation corpus. Our experiments on the dataset show that the recurrent neural network based models of our frameworks robustly predict addressees and responses in conversations with a large number of agents.",
"title": ""
},
{
"docid": "18b7c2a57ab593810574a6975d6dc72e",
"text": "Explored the factors that influence knowledge and attitudes toward anemia in pregnancy (AIP) in southeastern Nigeria. We surveyed 1500 randomly selected women who delivered babies within 6 months of the survey using a questionnaire. Twelve focus group discussions were held with the grandmothers and fathers of the new babies, respectively. Six in-depth interviews were held with health workers in the study communities. Awareness of AIP was high. Knowledge of its prevention and management was poor with a median score of 10 points on a 50-point scale. Living close to a health facility (p = 0.031), having post-secondary education (p <0.001), being in paid employment (p = 0.017) and being older (p = 0.027) influenced knowledge of AIP. Practices for the prevention and management of AIP were affected by a high level of education (p = 0.034) and having good knowledge of AIP issues (p <0.001). The qualitative data revealed that unorthodox means were employed in response to anemia in pregnancy. This is often delayed until complications set in. Many viewed anemia as a normal phenomenon among pregnant women. AIP awareness is high among the populations. However, management is poor because of poor knowledge of signs and timely appropriate treatment. Prompt and appropriate management of AIP is germane for positive pregnancy outcomes. Anemia-related public education is an urgent need in Southeast Nigeria. Extra consideration of the diverse social development levels of the populations should be taken into account when designing new and improving current prevention and management programs for anemia in pregnancy.",
"title": ""
},
{
"docid": "ec4dcce4f53e38909be438beeb62b1df",
"text": " A very efficient protocol for plant regeneration from two commercial Humulus lupulus L. (hop) cultivars, Brewers Gold and Nugget has been established, and the morphogenetic potential of explants cultured on Adams modified medium supplemented with several concentrations of cytokinins and auxins studied. Zeatin at 4.56 μm produced direct caulogenesis and caulogenic calli in both cultivars. Subculture of these calli on Adams modified medium supplemented with benzylaminopurine (4.4 μm) and indolebutyric acid (0.49 μm) promoted shoot regeneration which gradually increased up to the third subculture. Regeneration rates of 60 and 29% were achieved for Nugget and Brewers Gold, respectively. By selection of callus lines, it has been possible to maintain caulogenic potential for 14 months. Regenerated plants were successfully transferred to field conditions.",
"title": ""
},
{
"docid": "4d5d43c8f8d9bc5753f39e7978b23a0b",
"text": "The future of high-performance computing is likely to rely on the ability to efficiently exploit huge amounts of parallelism. One way of taking advantage of this parallelism is to formulate problems as \"embarrassingly parallel\" Monte-Carlo simulations, which allow applications to achieve a linear speedup over multiple computational nodes, without requiring a super-linear increase in inter-node communication. However, such applications are reliant on a cheap supply of high quality random numbers, particularly for the three main maximum entropy distributions: uniform, used as a general source of randomness; Gaussian, for discrete-time simulations; and exponential, for discrete-event simulations. In this paper we look at four different types of platform: conventional multi-core CPUs (Intel Core2); GPUs (NVidia GTX 200); FPGAs (Xilinx Virtex-5); and Massively Parallel Processor Arrays (Ambric AM2000). For each platform we determine the most appropriate algorithm for generating each type of number, then calculate the peak generation rate and estimated power efficiency for each device.",
"title": ""
},
{
"docid": "ca2cc9e21fd1aacc345238c1d609bedf",
"text": "The aim of the present study was to evaluate the long-term effect of implants installed in different dental areas in adolescents. The sample consisted of 18 subjects with missing teeth (congenital absence or trauma). The patients were of different chronological ages (between 13 and 17 years) and of different skeletal maturation. In all subjects, the existing permanent teeth were fully erupted. In 15 patients, 29 single implants (using the Brånemark technique) were installed to replace premolars, canines, and upper incisors. In three patients with extensive aplasia, 18 implants were placed in various regions. The patients were followed during a 10-year period, the first four years annually and then every second year. Photographs, study casts, peri-apical radiographs, lateral cephalograms, and body height measurements were recorded at each control. The results show that dental implants are a good treatment option for replacing missing teeth in adolescents, provided that the subject's dental and skeletal development is complete. However, different problems are related to the premolar and the incisor regions, which have to be considered in the total treatment planning. Disadvantages may be related to the upper incisor region, especially for lateral incisors, due to slight continuous eruption of adjacent teeth and craniofacial changes post-adolescence. Periodontal problems may arise, with marginal bone loss around the adjacent teeth and bone loss buccally to the implants. The shorter the distance between the implant and the adjacent teeth, the larger the reduction of marginal bone level. Before placement of the implant sufficient space must be gained in the implant area, and the adjacent teeth uprighted and paralleled, even in the apical area, using non-intrusive movements. In the premolar area, excess space is needed, not only in the mesio-distal, but above all in the bucco-lingual direction. Thus, an infraoccluded lower deciduous molar should be extracted shortly before placement of the implant to avoid reduction of the bucco-lingual bone volume. Oral rehabilitation with implant-supported prosthetic constructions seems to be a good alternative in adolescents with extensive aplasia, provided that craniofacial growth has ceased or is almost complete.",
"title": ""
},
{
"docid": "d9df73b22013f7055fe8ff28f3590daa",
"text": "The iterations of many sparse estimation algorithms are comprised of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It is therefore quite natural to examine the degree to which a learned network model might act as a viable surrogate for traditional sparse estimation in domains where ample training data is available. While the possibility of a reduced computational budget is readily apparent when a ceiling is imposed on the number of layers, our work primarily focuses on estimation accuracy. In particular, it is well-known that when a signal dictionary has coherent columns, as quantified by a large RIP constant, then most tractable iterative algorithms are unable to find maximally sparse representations. In contrast, we demonstrate both theoretically and empirically the potential for a trained deep network to recover minimal `0-norm representations in regimes where existing methods fail. The resulting system is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers that can disrupt the estimation of surface normals from a 3D scene.",
"title": ""
},
{
"docid": "2c5e8e4025572925e72e9f51db2b3d95",
"text": "This article reveals our work on refactoring plug-ins for Eclipse's C++ Development Tooling (CDT).\n With CDT a reliable open source IDE exists for C/C++ developers. Unfortunately it has been lacking of overarching refactoring support. There used to be just one single refactoring - Rename. But our plug-in provides several new refactorings which support a C++ developer in his everyday work.",
"title": ""
},
{
"docid": "f9b110890c90d48b6d2f84aa419c1598",
"text": "Surprise describes a range of phenomena from unexpected events to behavioral responses. We propose a novel measure of surprise and use it for surprise-driven learning. Our surprise measure takes into account data likelihood as well as the degree of commitment to a belief via the entropy of the belief distribution. We find that surprise-minimizing learning dynamically adjusts the balance between new and old information without the need of knowledge about the temporal statistics of the environment. We apply our framework to a dynamic decision-making task and a maze exploration task. Our surprise-minimizing framework is suitable for learning in complex environments, even if the environment undergoes gradual or sudden changes, and it could eventually provide a framework to study the behavior of humans and animals as they encounter surprising events.",
"title": ""
},
{
"docid": "e43cc845368e69ef1278e7109d4d8d6f",
"text": "Estimating six degrees of freedom poses of a planar object from images is an important problem with numerous applications ranging from robotics to augmented reality. While the state-of-the-art Perspective-n-Point algorithms perform well in pose estimation, the success hinges on whether feature points can be extracted and matched correctly on target objects with rich texture. In this work, we propose a two-step robust direct method for six-dimensional pose estimation that performs accurately on both textured and textureless planar target objects. First, the pose of a planar target object with respect to a calibrated camera is approximately estimated by posing it as a template matching problem. Second, each object pose is refined and disambiguated using a dense alignment scheme. Extensive experiments on both synthetic and real datasets demonstrate that the proposed direct pose estimation algorithm performs favorably against state-of-the-art feature-based approaches in terms of robustness and accuracy under varying conditions. Furthermore, we show that the proposed dense alignment scheme can also be used for accurate pose tracking in video sequences.",
"title": ""
},
{
"docid": "27f1f3791b7a381f92833d4983620b7e",
"text": "Current action recognition methods heavily rely on trimmed videos for model training. However, it is expensive and time-consuming to acquire a large-scale trimmed video dataset. This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without the requirement of temporal annotations of action instances. Our UntrimmedNet couples two important components, the classification module and the selection module, to learn the action models and reason about the temporal duration of action instances, respectively. These two components are implemented with feed-forward networks, and UntrimmedNet is therefore an end-to-end trainable architecture. We exploit the learned models for action recognition (WSR) and detection (WSD) on the untrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet only employs weak supervision, our method achieves performance superior or comparable to that of those strongly supervised approaches on these two datasets.",
"title": ""
},
{
"docid": "a29d666fe1135bb60a75f1cecf85e31c",
"text": "Approximate computing aims for efficient execution of workflows where an approximate output is sufficient instead of the exact output. The idea behind approximate computing is to compute over a representative sample instead of the entire input dataset. Thus, approximate computing — based on the chosen sample size — can make a systematic trade-off between the output accuracy and computation efficiency. Unfortunately, the state-of-the-art systems for approximate computing primarily target batch analytics, where the input data remains unchanged during the course of sampling. Thus, they are not well-suited for stream analytics. This motivated the design of StreamApprox— a stream analytics system for approximate computing. To realize this idea, we designed an online stratified reservoir sampling algorithm to produce approximate outputwith rigorous error bounds. Importantly, our proposed algorithm is generic and can be applied to two prominent types of stream processing systems: (1) batched stream processing such asApache Spark Streaming, and (2) pipelined stream processing such as Apache Flink. To showcase the effectiveness of our algorithm,we implemented StreamApprox as a fully functional prototype based on Apache Spark Streaming and Apache Flink. We evaluated StreamApprox using a set of microbenchmarks and real-world case studies. Our results show that Sparkand Flink-based StreamApprox systems achieve a speedup of 1.15×—3× compared to the respective native Spark Streaming and Flink executions, with varying sampling fraction of 80% to 10%. Furthermore, we have also implemented an improved baseline in addition to the native execution baseline — a Spark-based approximate computing system leveraging the existing sampling modules in Apache Spark. Compared to the improved baseline, our results show that StreamApprox achieves a speedup 1.1×—2.4× while maintaining the same accuracy level. This technical report is an extended version of our conference publication [39].",
"title": ""
},
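The StreamApprox abstract above builds on reservoir sampling. To make the underlying idea concrete, the sketch below shows the classic single-reservoir algorithm (Vitter's Algorithm R); it is not the paper's stratified, error-bounded variant.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # Item i is kept with probability k / (i + 1).
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(1_000_000), k=10)
print(sample)
```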
{
"docid": "9bc1d596de6471e23bd678febe7d962d",
"text": "Identifying paraphrase in Malayalam language is difficult task because it is a highly agglutinative language and the linguistic structure in Malayalam language is complex compared to other languages. Here we use individual words synonyms to find the similarity between two sentences. In this paper, cosine similarity method is used to find the paraphrases in Malayalam language. In this paper we present the observations on sentence similarity between two Malayalam sentences using cosine similarity method, we used test data of 900 and 1400 sentence pairs of FIRE 2016 Malayalam corpus that used in two iterations to present and obtained an accuracy of 0.8 and 0.59.",
"title": ""
}
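The cosine-similarity step described in the preceding abstract can be illustrated with a small bag-of-words sketch. The whitespace tokenization, the English example sentences, and the absence of synonym expansion are simplifications; the study works on Malayalam sentence pairs and additionally uses word synonyms.

```python
import math
from collections import Counter

def cosine_similarity(sentence_a, sentence_b):
    """Cosine similarity between bag-of-words term-frequency vectors."""
    a, b = Counter(sentence_a.lower().split()), Counter(sentence_b.lower().split())
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cosine_similarity("the cat sat on the mat", "a cat sat on a mat"))  # 0.5
```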
] |
scidocsrr
|
c60fb0a942c51ee8af163e87d5cd7965
|
"Breaking" Disasters: Predicting and Characterizing the Global News Value of Natural and Man-made Disasters
|
[
{
"docid": "2116414a3e7996d4701b9003a6ccfd15",
"text": "Informal genres such as tweets provide large quantities of data in real time, which can be exploited to obtain, through ranking and classification, a succinct summary of the events that occurred. Previous work on tweet ranking and classification mainly focused on salience and social network features or rely on web documents such as online news articles. In this paper, we exploit language independent journalism and content based features to identify news from tweets. We propose a novel newsworthiness classifier trained through active learning and investigate human assessment and automatic methods to encode it on both the tweet and trending topic levels. Our findings show that content and journalism based features proved to be effective for ranking and classifying content on Twitter.",
"title": ""
},
{
"docid": "1274ab286b1e3c5701ebb73adc77109f",
"text": "In this paper, we propose the first real time rumor debunking algorithm for Twitter. We use cues from 'wisdom of the crowds', that is, the aggregate 'common sense' and investigative journalism of Twitter users. We concentrate on identification of a rumor as an event that may comprise of one or more conflicting microblogs. We continue monitoring the rumor event and generate real time updates dynamically based on any additional information received. We show using real streaming data that it is possible, using our approach, to debunk rumors accurately and efficiently, often much faster than manual verification by professionals.",
"title": ""
}
] |
[
{
"docid": "e9a66ce7077baf347d325bca7b008d6b",
"text": "Recent research have shown that the Wavelet Transform (WT) can potentially be used to extract Partial Discharge (PD) signals from severe noise like White noise, Random noise and Discrete Spectral Interferences (DSI). It is important to define that noise is a significant problem in PD detection. Accordingly, the paper mainly deals with denoising of PD signals, based on improved WT techniques namely Translation Invariant Wavelet Transform (TIWT). The improved WT method is distinct from other traditional method called as Fast Fourier Transform (FFT). The TIWT not only remain the edge of the original signal efficiently but also reduce impulsive noise to some extent. Additionally Translation Invariant (TI) Wavelet Transform denoising is used to suppress Pseudo Gibbs phenomenon. In this paper an attempt has been made to review the methodology of denoising the partial discharge signals and shows that the proposed denoising method results are better when compared to other wavelet-based approaches like FFT, wavelet hard thresholding, wavelet soft thresholding, by evaluating five different parameters like, Signal to noise ratio, Cross correlation coefficient, Pulse amplitude distortion, Mean square error, Reduction in noise level.",
"title": ""
},
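As a rough illustration of the wavelet-shrinkage family of methods discussed in the preceding abstract, the sketch below applies generic soft thresholding with PyWavelets. It is not the paper's translation-invariant (TIWT) scheme: the cycle-spinning step that suppresses pseudo-Gibbs artifacts is omitted, and the wavelet, decomposition level, and universal threshold are assumed defaults.

```python
import numpy as np
import pywt

def wavelet_soft_denoise(signal, wavelet="db4", level=4):
    """Generic wavelet shrinkage: decompose, soft-threshold details, reconstruct.

    Uses the common universal threshold sigma * sqrt(2 * log n); the paper's
    translation-invariant scheme additionally averages over circular shifts,
    which is omitted here.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # noise estimate from finest scale
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Noisy synthetic pulse standing in for a PD measurement.
t = np.linspace(0, 1, 1024)
clean = np.exp(-((t - 0.5) ** 2) / 0.001)
noisy = clean + 0.2 * np.random.randn(t.size)
print(np.abs(wavelet_soft_denoise(noisy) - clean).mean())
```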
{
"docid": "bacb761bc173a07bf13558e2e5419c2b",
"text": "Rejection sensitivity is the disposition to anxiously expect, readily perceive, and intensely react to rejection. In response to perceived social exclusion, highly rejection sensitive people react with increased hostile feelings toward others and are more likely to show reactive aggression than less rejection sensitive people in the same situation. This paper summarizes work on rejection sensitivity that has provided evidence for the link between anxious expectations of rejection and hostility after rejection. We review evidence that rejection sensitivity functions as a defensive motivational system. Thus, we link rejection sensitivity to attentional and perceptual processes that underlie the processing of social information. A range of experimental and diary studies shows that perceiving rejection triggers hostility and aggressive behavior in rejection sensitive people. We review studies that show that this hostility and reactive aggression can perpetuate a vicious cycle by eliciting rejection from those who rejection sensitive people value most. Finally, we summarize recent work suggesting that this cycle can be interrupted with generalized self-regulatory skills and the experience of positive, supportive relationships.",
"title": ""
},
{
"docid": "6bfc3d00fe6e9fcdb09ad8993b733dfd",
"text": "This article presents the upper-torso design issue of Affeto who can physically interact with humans, which biases the perception of affinity beyond the uncanny valley effect. First, we review the effect and hypothesize that the experience of physical interaction with Affetto decreases the effect. Then, the reality of physical existence is argued with existing platforms. Next, the design concept and a very preliminary experiment are shown. Finally, future issues are given. I. THE UNCANNY VALLEY REVISITED The term “Uncanny” is a translation of Freud’s term “Der Unheimliche” and applied to a phenomenon noted by Masahiro Mori who mentioned that the presence of movement steepens the slopes of the uncanny valley (Figure 2 in [1]). Several studies on this effect can be summarised as follows1. 1) Multimodal impressions such as visual appearance, body motion, sounds (speech and others), and tactile sensation should be congruent to decrease the valley steepness. 2) Antipathetic expressions may exaggerate the valley effect. The current technologies enable us to minimize the gap caused by mismatch among cross-modal factors. Therefore, the valley effect is expected to be reduced gradually. For example, facial expressions and tactile sensations of Affetto [2] are realistic and congruent due to baby-like face skin mask of urethane elastomer gel (See Figure 1). Generated facial expressions almost conquered the uncanny valley. Further, baby-like facial expressions may contribute to the reduction of the valley effect due to 2). In addition to these, we suppose that the motor experience of physical interactions with robots biases the perception of affinity as motor experiences biases the perception of movements [3]. To verify this hypothesis, Affetto needs its body which realizes physical interactions naturally. The rest of this article is organized as follows. The next section argues about the reality of physical existence with existing platforms. Then, the design concept and a very preliminary experiment are shown, and the future issues are given.",
"title": ""
},
{
"docid": "5527521d567290192ea26faeb6e7908c",
"text": "With the rapid development of spectral imaging techniques, classification of hyperspectral images (HSIs) has attracted great attention in various applications such as land survey and resource monitoring in the field of remote sensing. A key challenge in HSI classification is how to explore effective approaches to fully use the spatial–spectral information provided by the data cube. Multiple kernel learning (MKL) has been successfully applied to HSI classification due to its capacity to handle heterogeneous fusion of both spectral and spatial features. This approach can generate an adaptive kernel as an optimally weighted sum of a few fixed kernels to model a nonlinear data structure. In this way, the difficulty of kernel selection and the limitation of a fixed kernel can be alleviated. Various MKL algorithms have been developed in recent years, such as the general MKL, the subspace MKL, the nonlinear MKL, the sparse MKL, and the ensemble MKL. The goal of this paper is to provide a systematic review of MKL methods, which have been applied to HSI classification. We also analyze and evaluate different MKL algorithms and their respective characteristics in different cases of HSI classification cases. Finally, we discuss the future direction and trends of research in this area.",
"title": ""
},
{
"docid": "8c34f43e7d3f760173257fbbc58c22ca",
"text": "High voltage pulse generators can be used effectively in water treatment applications, as applying a pulsed electric field on the infected sample guarantees killing of harmful germs and bacteria. In this paper, a new high voltage pulse generator with closed loop control on its output voltage is proposed. The proposed generator is based on DC-to-DC boost converter in conjunction with capacitor-diode voltage multiplier (CDVM), and can be fed from low-voltage low-frequency AC supply, i.e. utility mains. The proposed topology provides transformer-less operation which reduces size and enhances the overall efficiency. A Detailed design of the proposed pulse generator has been presented as well. The proposed approach is validated by simulation as well as experimental results.",
"title": ""
},
{
"docid": "9b2291ef3e605d85b6d0dba326aa10ef",
"text": "We propose a multi-objective method for avoiding premature convergence in evolutionary algorithms, and demonstrate a three-fold performance improvement over comparable methods. Previous research has shown that partitioning an evolving population into age groups can greatly improve the ability to identify global optima and avoid converging to local optima. Here, we propose that treating age as an explicit optimization criterion can increase performance even further, with fewer algorithm implementation parameters. The proposed method evolves a population on the two-dimensional Pareto front comprising (a) how long the genotype has been in the population (age); and (b) its performance (fitness). We compare this approach with previous approaches on the Symbolic Regression problem, sweeping the problem difficulty over a range of solution complexities and number of variables. Our results indicate that the multi-objective approach identifies the exact target solution more often that the age-layered population and standard population methods. The multi-objective method also performs better on higher complexity problems and higher dimensional datasets -- finding global optima with less computational effort.",
"title": ""
},
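The two-dimensional age-fitness Pareto front in the preceding abstract rests on a standard dominance test over the two objectives. The sketch below shows that test and a naive front extraction; it is a generic multi-objective utility with invented toy individuals, not the authors' evolutionary algorithm.

```python
def dominates(p, q):
    """p dominates q if p is no worse in every objective and strictly better in one.

    Objectives here: (age, error), both treated as minimized, following the idea
    of using genotype age as an explicit criterion alongside fitness.
    """
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(population):
    """Return the non-dominated individuals (naive O(n^2) scan)."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Hypothetical (age, error) pairs for a small population.
pop = [(1, 0.9), (3, 0.4), (5, 0.35), (7, 0.1), (6, 0.5)]
print(pareto_front(pop))  # [(1, 0.9), (3, 0.4), (5, 0.35), (7, 0.1)]
```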
{
"docid": "a57b2e8b24cced6f8bfad942dd530499",
"text": "With the tremendous growth of network-based services and sensitive information on networks, network security is getting more and more importance than ever. Intrusion poses a serious security risk in a network environment. The ever growing new intrusion types posses a serious problem for their detection. The human labelling of the available network audit data instances is usually tedious, time consuming and expensive. In this paper, we apply one of the efficient data mining algorithms called naïve bayes for anomaly based network intrusion detection. Experimental results on the KDD cup’99 data set show the novelty of our approach in detecting network intrusion. It is observed that the proposed technique performs better in terms of false positive rate, cost, and computational time when applied to KDD’99 data sets compared to a back propagation neural network based approach.",
"title": ""
},
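As a much-simplified counterpart to the approach in the preceding abstract, the snippet below trains a Gaussian naïve Bayes classifier with scikit-learn on synthetic two-class data standing in for normal versus anomalous connection records. The KDD Cup '99 features, preprocessing, and evaluation protocol from the paper are not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Synthetic stand-in for connection records: two feature clusters,
# label 0 = normal traffic, label 1 = intrusion.
rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 5))
attack = rng.normal(loc=2.5, scale=1.0, size=(500, 5))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```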
{
"docid": "72c0cef98023dd5b6c78e9c347798545",
"text": "Several works have shown that Convolutional Neural Networks (CNNs) can be easily adapted to different datasets and tasks. However, for extracting the deep features from these pre-trained deep CNNs a fixedsize (e.g., 227×227) input image is mandatory. Now the state-of-the-art datasets like MIT-67 and SUN-397 come with images of different sizes. Usage of CNNs for these datasets enforces the user to bring different sized images to a fixed size either by reducing or enlarging the images. The curiosity is obvious that “Isn’t the conversion to fixed size image is lossy ?”. In this work, we provide a mechanism to keep these lossy fixed size images aloof and process the images in its original form to get set of varying size deep feature maps, hence being lossless. We also propose deep spatial pyramid match kernel (DSPMK) which amalgamates set of varying size deep feature maps and computes a matching score between the samples. Proposed DSPMK act as a dynamic kernel in the classification framework of scene dataset using support vector machine. We demonstrated the effectiveness of combining the power of varying size CNN-based set of deep feature maps with dynamic kernel by achieving state-of-the-art results for high-level visual recognition tasks such as scene classification on standard datasets like MIT67 and SUN397.",
"title": ""
},
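The record above builds a match kernel from deep feature maps whose spatial size varies with the input image. The snippet below illustrates only the spatial-pyramid part of that idea: pooling an arbitrary-size feature map over 1×1, 2×2, and 4×4 grids so that differently sized images become comparable. It is a generic NumPy sketch, not the authors' DSPMK code; the pyramid levels and max-pooling choice are assumptions.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Pool a feature map of shape (channels, H, W), with arbitrary H and W,
    into a fixed-length vector by max-pooling over an n x n grid per level."""
    c, h, w = fmap.shape
    pooled = []
    for n in levels:
        hs = np.linspace(0, h, n + 1).astype(int)
        ws = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)  # length = channels * sum(n^2 over levels)

fmap_small = np.random.rand(8, 13, 17)   # two feature maps of different sizes
fmap_large = np.random.rand(8, 40, 31)
print(spatial_pyramid_pool(fmap_small).shape, spatial_pyramid_pool(fmap_large).shape)
```

Both inputs yield vectors of the same length, which is the property a pyramid-based match kernel exploits when comparing images of different resolutions.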
{
"docid": "5da804fa4c1474e27a1c91fcf5682e20",
"text": "We present an overview of Candide, a system for automatic translat ion of French text to English text. Candide uses methods of information theory and statistics to develop a probabili ty model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. I n t r o d u c t i o n Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to Enghsh text. Our goal is to perform fuRy-automatic, high-quality text totext translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator 's-assistant modes. Our approach is founded upon the statistical analysis of language. Our chief tools axe the source-channel model of communication, parametric probabili ty models of language and translation, and an assortment of numerical algorithms for training such models from examples. This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Caadide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probabili ty theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the AB.PA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2 . Stat is t ical Trans la t ion Consider the problem of translating French text to English text. Given a French sentence f , we imagine that it was originally rendered as an equivalent Enghsh sentence e. To obtain the French, the Enghsh was t ransmit ted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. *Current address: Renaissance Technologies, Stony Brook, NY ~ English-to-French I f e Channel \" _[ French-to-English -] Decoder 6 Figure 1: The Source-Channel Formalism of Translation. Here f is the French text to be translated, e is the putat ive original English rendering, and 6 is the English translation. This formalism can be exploited to yield French-to-English translations as follows. Let us write P r (e I f ) for the probability that e was the original English rendering of the French f. Given a French sentence f, the problem of automatic translation reduces to finding the English sentence tha t maximizes P.r(e I f) . That is, we seek 6 = argmsx e Pr (e I f) . By virtue of Bayes' Theorem, we have = argmax Pr(e If ) = argmax Pr(f I e)Pr(e) (1) e e The term P r ( f l e ) models the probabili ty that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr (e ) models the a priori probability that e was supp led as the channel input. We call this function the language model. 
Each of these fac tors the translation model and the language model independent ly produces a score for a candidate English translat ion e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. Candide sehcts as its translat ion the e that maximizes their product. This discussion begs two impor tant questions. First , where do the models P r ( f [ e) and Pr (e ) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find 6? These questions are addressed in the next two sections. 2.1. P robab i l i ty Models We begin with a brief detour into probabili ty theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model bet ter match some body of data. Let us write c for a body of da ta to be modeled, and 0 for a vector of parameters. The quanti ty Prs (c ) , computed according to some formula involving c and 0, is called the hkelihood 157 [Human Language Technology, Plainsboro, 1994]",
"title": ""
},
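The record above rests on the noisy-channel decomposition ê = argmax_e Pr(f | e)Pr(e). The toy decoder below scores a hand-made candidate set with made-up probabilities, just to show how the translation model and the language model combine; neither the numbers nor the candidate enumeration reflect the real Candide system.

```python
import math

# Toy language model Pr(e) and translation model Pr(f | e) over a tiny
# hand-made candidate set; the probabilities are invented for illustration.
language_model = {
    "the cat sleeps": 0.020,
    "cat the sleeps": 0.0001,
    "the cat eats":   0.015,
}
translation_model = {  # Pr(f | e) for the French sentence below
    "the cat sleeps": 0.30,
    "cat the sleeps": 0.30,
    "the cat eats":   0.02,
}

def decode(candidates):
    """Return argmax_e Pr(f | e) * Pr(e), scored in log space."""
    return max(
        candidates,
        key=lambda e: math.log(translation_model[e]) + math.log(language_model[e]),
    )

french = "le chat dort"
print(french, "->", decode(language_model.keys()))  # picks 'the cat sleeps'
```

Note how the word-salad candidate is rejected by the language model even though the translation model scores it as highly as the correct sentence, which is exactly the division of labour the abstract describes.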
{
"docid": "5c0f2bcde310b7b76ed2ca282fde9276",
"text": "With the increasing prevalence of Alzheimer's disease, research focuses on the early computer-aided diagnosis of dementia with the goal to understand the disease process, determine risk and preserving factors, and explore preventive therapies. By now, large amounts of data from multi-site studies have been made available for developing, training, and evaluating automated classifiers. Yet, their translation to the clinic remains challenging, in part due to their limited generalizability across different datasets. In this work, we describe a compact classification approach that mitigates overfitting by regularizing the multinomial regression with the mixed ℓ1/ℓ2 norm. We combine volume, thickness, and anatomical shape features from MRI scans to characterize neuroanatomy for the three-class classification of Alzheimer's disease, mild cognitive impairment and healthy controls. We demonstrate high classification accuracy via independent evaluation within the scope of the CADDementia challenge. We, furthermore, demonstrate that variations between source and target datasets can substantially influence classification accuracy. The main contribution of this work addresses this problem by proposing an approach for supervised domain adaptation based on instance weighting. Integration of this method into our classifier allows us to assess different strategies for domain adaptation. Our results demonstrate (i) that training on only the target training set yields better results than the naïve combination (union) of source and target training sets, and (ii) that domain adaptation with instance weighting yields the best classification results, especially if only a small training component of the target dataset is available. These insights imply that successful deployment of systems for computer-aided diagnostics to the clinic depends not only on accurate classifiers that avoid overfitting, but also on a dedicated domain adaptation strategy.",
"title": ""
},
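The record above reports that instance weighting helps when source and target datasets differ. A common way to obtain such weights, sketched below on simple synthetic features, is to train a source-vs-target domain classifier and weight each source sample by an estimated density ratio. This is an assumed setup: the paper's regularized multinomial model with a mixed ℓ1/ℓ2 penalty is replaced here by a plain scikit-learn logistic regression for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_src = rng.normal(0.0, 1.0, size=(300, 5))       # large source dataset
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(0.5, 1.2, size=(40, 5))        # small, shifted target set
y_tgt = (X_tgt[:, 0] > 0).astype(int)

# Weight each source sample by an estimated density ratio p_target(x)/p_source(x),
# obtained from a source-vs-target domain classifier.
domain_clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_src, X_tgt]),
    np.array([0] * len(X_src) + [1] * len(X_tgt)),
)
proba = domain_clf.predict_proba(X_src)
w_src = proba[:, 1] / (proba[:, 0] + 1e-9)

X = np.vstack([X_src, X_tgt])
y = np.concatenate([y_src, y_tgt])
w = np.concatenate([w_src, np.ones(len(X_tgt))])
model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
# Toy evaluation on the same small target set, only to show the mechanics.
print("target accuracy:", model.score(X_tgt, y_tgt))
```

The weighting down-plays source samples that look unlike the target distribution, which is the intuition behind the instance-weighting strategy discussed in the record.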
{
"docid": "c8305675ba4bb16f26abf820db4b8a38",
"text": "Microbes are dominant drivers of biogeochemical processes, yet drawing a global picture of functional diversity, microbial community structure, and their ecological determinants remains a grand challenge. We analyzed 7.2 terabases of metagenomic data from 243 Tara Oceans samples from 68 locations in epipelagic and mesopelagic waters across the globe to generate an ocean microbial reference gene catalog with >40 million nonredundant, mostly novel sequences from viruses, prokaryotes, and picoeukaryotes. Using 139 prokaryote-enriched samples, containing >35,000 species, we show vertical stratification with epipelagic community composition mostly driven by temperature rather than other environmental factors or geography. We identify ocean microbial core functionality and reveal that >73% of its abundance is shared with the human gut microbiome despite the physicochemical differences between these two ecosystems.",
"title": ""
},
{
"docid": "29236d00bde843ff06e0f1a3e0ab88e4",
"text": "■ The advent of the modern cruise missile, with reduced radar observables and the capability to fly at low altitudes with accurate navigation, placed an enormous burden on all defense weapon systems. Every element of the engagement process, referred to as the kill chain, from detection to target kill assessment, was affected. While the United States held the low-observabletechnology advantage in the late 1970s, that early lead was quickly challenged by advancements in foreign technology and proliferation of cruise missiles to unfriendly nations. Lincoln Laboratory’s response to the various offense/defense trade-offs has taken the form of two programs, the Air Vehicle Survivability Evaluation program and the Radar Surveillance Technology program. The radar developments produced by these two programs, which became national assets with many notable firsts, is the subject of this article.",
"title": ""
},
{
"docid": "5cdb981566dfd741c9211902c0c59d50",
"text": "Since parental personality traits are assumed to play a role in parenting behaviors, the current study examined the relation between parental personality and parenting style among 688 Dutch parents of adolescents in the SMILE study. The study assessed Big Five personality traits and derived parenting styles (authoritative, authoritarian, indulgent, and uninvolved) from scores on the underlying dimensions of support and strict control. Regression analyses were used to determine which personality traits were associated with parenting dimensions and styles. As regards dimensions, the two aspects of personality reflecting interpersonal interactions (extraversion and agreeableness) were related to supportiveness. Emotional stability was associated with lower strict control. As regards parenting styles, extraverted, agreeable, and less emotionally stable individuals were most likely to be authoritative parents. Conscientiousness and openness did not relate to general parenting, but might be associated with more content-specific acts of parenting.",
"title": ""
},
{
"docid": "ac1d1bf198a178cb5655768392c3d224",
"text": "-This paper discusses the two major query evaluation strategies used in large text retrieval systems and analyzes the performance of these strategies. We then discuss several optimization techniques that can be used to reduce evaluation costs and present simulation results to compare the performance of these optimization techniques when evaluating natural language queries with a collection of full text legal materials.",
"title": ""
},
{
"docid": "7167964274b05da06beddb1aef119b2c",
"text": "A great variety of systems in nature, society and technology—from the web of sexual contacts to the Internet, from the nervous system to power grids—can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via email, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names—temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. This review covers different fields where temporal graphs are considered, but does not attempt to unify related terminology—rather, we want to make papers readable across disciplines.",
"title": ""
},
{
"docid": "71576ab1edd5eadbda1f34baba91b687",
"text": "Visualization can make a wide range of mobile applications more intuitive and productive. The mobility context and technical limitations such as small screen size make it impossible to simply port visualization applications from desktop computers to mobile devices, but researchers are starting to address these challenges. From a purely technical point of view, building more sophisticated mobile visualizations become easier due to new, possibly standard, software APIs such as OpenGLES and increasingly powerful devices. Although ongoing improvements would not eliminate most device limitations or alter the mobility context, they make it easier to create and experiment with alternative approaches.",
"title": ""
},
{
"docid": "1e8f25674dc66a298c277d80dd031c20",
"text": "DeepQ Arrhythmia Database, the first generally available large-scale dataset for arrhythmia detector evaluation, contains 897 annotated single-lead ECG recordings from 299 unique patients. DeepQ includes beat-by-beat, rhythm episodes, and heartbeats fiducial points annotations. Each patient was engaged in a sequence of lying down, sitting, and walking activities during the ECG measurement and contributed three five-minute records to the database. Annotations were manually labeled by a group of certified cardiographic technicians and audited by a cardiologist at Taipei Veteran General Hospital, Taiwan. The aim of this database is in three folds. First, from the scale perspective, we build this database to be the largest representative reference set with greater number of unique patients and more variety of arrhythmic heartbeats. Second, from the diversity perspective, our database contains fully annotated ECG measures from three different activity modes and facilitates the arrhythmia classifier training for wearable ECG patches and AAMI assessment. Thirdly, from the quality point of view, it serves as a complement to the MIT-BIH Arrhythmia Database in the development and evaluation of the arrhythmia detector. The addition of this dataset can help facilitate the exhaustive studies using machine learning models and deep neural networks, and address the inter-patient variability. Further, we describe the development and annotation procedure of this database, as well as our on-going enhancement. We plan to make DeepQ database publicly available to advance medical research in developing outpatient, mobile arrhythmia detectors.",
"title": ""
},
{
"docid": "844116dc8302aac5076c95ac2218b5bd",
"text": "Virtual reality and augmented reality technology has existed in various forms for over two decades. However, high cost proved to be one of the main barriers to its adoption in education, outside of experimental studies. The creation and widespread sale of low-cost virtual reality devices using smart phones has made virtual reality technology available to the common person. This paper reviews how virtual reality and augmented reality has been used in education, discusses the advantages and disadvantages of using these technologies in the classroom, and describes how virtual reality and augmented reality technologies can be used to enhance teaching at the United States Military Academy.",
"title": ""
},
{
"docid": "243391e804c06f8a53af906b31d4b99a",
"text": "As key decisions are often made based on information contained in a database, it is important for the database to be as complete and correct as possible. For this reason, many data cleaning tools have been developed to automatically resolve inconsistencies in databases. However, data cleaning tools provide only best-effort results and usually cannot eradicate all errors that may exist in a database. Even more importantly, existing data cleaning tools do not typically address the problem of determining what information is missing from a database.\n To overcome the limitations of existing data cleaning techniques, we present QOCO, a novel query-oriented system for cleaning data with oracles. Under this framework, incorrect (resp. missing) tuples are removed from (added to) the result of a query through edits that are applied to the underlying database, where the edits are derived by interacting with domain experts which we model as oracle crowds. We show that the problem of determining minimal interactions with oracle crowds to derive database edits for removing (adding) incorrect (missing) tuples to the result of a query is NP-hard in general and present heuristic algorithms that interact with oracle crowds. Finally, we implement our algorithms in our prototype system QOCO and show that it is effective and efficient through a comprehensive suite of experiments.",
"title": ""
},
{
"docid": "9c8648843bfc33f6c66845cd63df94d0",
"text": "BACKGROUND\nThe safety and short-term benefits of laparoscopic colectomy for cancer remain debatable. The multicentre COLOR (COlon cancer Laparoscopic or Open Resection) trial was done to assess the safety and benefit of laparoscopic resection compared with open resection for curative treatment of patients with cancer of the right or left colon.\n\n\nMETHODS\n627 patients were randomly assigned to laparoscopic surgery and 621 patients to open surgery. The primary endpoint was cancer-free survival 3 years after surgery. Secondary outcomes were short-term morbidity and mortality, number of positive resection margins, local recurrence, port-site or wound-site recurrence, metastasis, overall survival, and blood loss during surgery. Analysis was by intention to treat. Here, clinical characteristics, operative findings, and postoperative outcome are reported.\n\n\nFINDINGS\nPatients assigned laparoscopic resection had less blood loss compared with those assigned open resection (median 100 mL [range 0-2700] vs 175 mL [0-2000], p<0.0001), although laparoscopic surgery lasted 30 min longer than did open surgery (p<0.0001). Conversion to open surgery was needed for 91 (17%) patients undergoing the laparoscopic procedure. Radicality of resection as assessed by number of removed lymph nodes and length of resected oral and aboral bowel did not differ between groups. Laparoscopic colectomy was associated with earlier recovery of bowel function (p<0.0001), need for fewer analgesics, and with a shorter hospital stay (p<0.0001) compared with open colectomy. Morbidity and mortality 28 days after colectomy did not differ between groups.\n\n\nINTERPRETATION\nLaparoscopic surgery can be used for safe and radical resection of cancer in the right, left, and sigmoid colon.",
"title": ""
}
] |
scidocsrr
|
31c0c7d30d38abd5a1719505df584dc3
|
SEC-TOE Framework: Exploring Security Determinants in Big Data Solutions Adoption
|
[
{
"docid": "03d5c8627ec09e4332edfa6842b6fe44",
"text": "In the same way businesses use big data to pursue profits, governments use it to promote the public good.",
"title": ""
}
] |
[
{
"docid": "022460b5f9cd5460f4213794455dedd0",
"text": "The meniscus was once considered a functionless remnant of muscle that should be removed in its entirety at any sign of abnormality. Its role in load distribution, knee stability, and arthritis prevention has since been well established. The medial and lateral menisci are now considered vital structures in the healthy knee. Advancements in surgical techniques and biologic augmentation methods have expanded the indications for meniscal repair, with documented healing in tears previously deemed unsalvageable. In this article, we review the anatomy and function of the meniscus, evaluate the implications of meniscectomy, and assess the techniques of, and outcomes following, meniscal repair.",
"title": ""
},
{
"docid": "37f157cdcd27c1647548356a5194f2bc",
"text": "Purpose – The aim of this paper is to propose a novel evaluation framework to explore the “root causes” that hinder the acceptance of using internal cloud services in a university. Design/methodology/approach – The proposed evaluation framework incorporates the duo-theme DEMATEL (decision making trial and evaluation laboratory) with TAM (technology acceptance model). The operational procedures were proposed and tested on a university during the post-implementation phase after introducing the internal cloud services. Findings – According to the results, clear understanding and operational ease under the theme perceived ease of use (PEOU) are more imperative; whereas improved usefulness and productivity under the theme perceived usefulness (PU) are more urgent to foster the usage of internal clouds in the case university. Research limitations/implications – Based on the findings, some intervention activities were suggested to enhance the level of users’ acceptance of internal cloud solutions in the case university. However, the results should not be generalized to apply to other educational establishments. Practical implications – To reduce the resistance from using internal clouds, some necessary intervention activities such as developing attractive training programs, creating interesting workshops, and rewriting user friendly manual or handbook are recommended. Originality/value – The novel two-theme DEMATEL has greatly contributed to the conventional one-theme DEMATEL theory. The proposed two-theme DEMATEL procedures were the first attempt to evaluate the acceptance of using internal clouds in university. The results have provided manifest root-causes under two distinct themes, which help derive effectual intervention activities to foster the acceptance of usage of internal clouds in a university.",
"title": ""
},
{
"docid": "1afc103a3878d859ec15929433f49077",
"text": "Large-scale deep neural networks (DNNs) are both compute and memory intensive. As the size of DNNs continues to grow, it is critical to improve the energy efficiency and performance while maintaining accuracy. For DNNs, the model size is an important factor affecting performance, scalability and energy efficiency. Weight pruning achieves good compression ratios but suffers from three drawbacks: 1) the irregular network structure after pruning, which affects performance and throughput; 2) the increased training complexity; and 3) the lack of rigirous guarantee of compression ratio and inference accuracy.\n To overcome these limitations, this paper proposes CirCNN, a principled approach to represent weights and process neural networks using block-circulant matrices. CirCNN utilizes the Fast Fourier Transform (FFT)-based fast multiplication, simultaneously reducing the computational complexity (both in inference and training) from O(n2) to O(n log n) and the storage complexity from O(n2) to O(n), with negligible accuracy loss. Compared to other approaches, CirCNN is distinct due to its mathematical rigor: the DNNs based on CirCNN can converge to the same \"effectiveness\" as DNNs without compression. We propose the CirCNN architecture, a universal DNN inference engine that can be implemented in various hardware/software platforms with configurable network architecture (e.g., layer type, size, scales, etc.). In CirCNN architecture: 1) Due to the recursive property, FFT can be used as the key computing kernel, which ensures universal and small-footprint implementations. 2) The compressed but regular network structure avoids the pitfalls of the network pruning and facilitates high performance and throughput with highly pipelined and parallel design. To demonstrate the performance and energy efficiency, we test CirCNN in FPGA, ASIC and embedded processors. Our results show that CirCNN architecture achieves very high energy efficiency and performance with a small hardware footprint. Based on the FPGA implementation and ASIC synthesis results, CirCNN achieves 6 - 102X energy efficiency improvements compared with the best state-of-the-art results.",
"title": ""
},
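The record above relies on the fact that a circulant matrix-vector product can be computed with FFTs in O(n log n) instead of O(n^2). The NumPy check below demonstrates that identity on a small example; it illustrates only the underlying math, not the CirCNN architecture or its FPGA/ASIC kernels.

```python
import numpy as np

def circulant_matvec_fft(c, x):
    """Compute y = C x, where C is the circulant matrix whose first column is c,
    using the fact that multiplication by C is a circular convolution."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
x = rng.standard_normal(n)

# Build the explicit circulant matrix only to verify the FFT shortcut.
C = np.empty((n, n))
for j in range(n):
    C[:, j] = np.roll(c, j)

assert np.allclose(C @ x, circulant_matvec_fft(c, x))
print(circulant_matvec_fft(c, x))
```

Because only the first column c needs to be stored, the same trick also explains the O(n) storage figure quoted in the abstract for each circulant block.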
{
"docid": "81aa85ced7f0d83e28b0a2616bce6aae",
"text": "Delaunay refinement is a technique for generating unstructured meshes of triangles for use in interpolation, the finite element method, and the finite volume method. In theory and practice, meshes produced by Delaunay refinement satisfy guaranteed bounds on angles, edge lengths, the number of triangles, and the grading of triangles from small to large sizes. This article presents an intuitive framework for analyzing Delaunay refinement algorithms that unifies the pioneering mesh generation algorithms of L. Paul Chew and Jim Ruppert, improves the algorithms in several minor ways, and most importantly, helps to solve the difficult problem of meshing nonmanifold domains with small angles. Although small angles inherent in the input geometry cannot be removed, one would like to triangulate a domain without creating any new small angles. Unfortunately, this problem is not always soluble. A compromise is necessary. A Delaunay refinement algorithm is presented that can create a mesh in which most angles are or greater and no angle is smaller than \"!# , where %$'& is the smallest angle separating two segments of the input domain. New angles smaller than appear only near input angles smaller than & ( . In practice, the algorithm’s performance is better than these bounds suggest. Another new result is that Ruppert’s analysis technique can be used to reanalyze one of Chew’s algorithms. Chew proved that his algorithm produces no angle smaller than ) (barring small input angles), but without any guarantees on grading or number of triangles. He conjectures that his algorithm offers such guarantees. His conjecture is conditionally confirmed here: if the angle bound is relaxed to less than &+*-, , Chew’s algorithm produces meshes (of domains without small input angles) that are nicely graded and size-optimal.",
"title": ""
},
{
"docid": "dd1fd4f509e385ea8086a45a4379a8b5",
"text": "As we move towards large-scale object detection, it is unrealistic to expect annotated training data for all object classes at sufficient scale, and so methods capable of unseen object detection are required. We propose a novel zero-shot method based on training an end-to-end model that fuses semantic attribute prediction with visual features to propose object bounding boxes for seen and unseen classes. While we utilize semantic features during training, our method is agnostic to semantic information for unseen classes at test-time. Our method retains the efficiency and effectiveness of YOLO [1] for objects seen during training, while improving its performance for novel and unseen objects. The ability of state-of-art detection methods to learn discriminative object features to reject background proposals also limits their performance for unseen objects. We posit that, to detect unseen objects, we must incorporate semantic information into the visual domain so that the learned visual features reflect this information and leads to improved recall rates for unseen objects. We test our method on PASCAL VOC and MS COCO dataset and observed significant improvements on the average precision of unseen classes.",
"title": ""
},
{
"docid": "4d69284c25e1a9a503dd1c12fde23faa",
"text": "Human pose estimation has been actively studied for decades. While traditional approaches rely on 2d data like images or videos, the development of Time-of-Flight cameras and other depth sensors created new opportunities to advance the field. We give an overview of recent approaches that perform human motion analysis which includes depthbased and skeleton-based activity recognition, head pose estimation, facial feature detection, facial performance capture, hand pose estimation and hand gesture recognition. While the focus is on approaches using depth data, we also discuss traditional image based methods to provide a broad overview of recent developments in these areas.",
"title": ""
},
{
"docid": "68f8d261308714abd7e2655edd66d18a",
"text": "In this paper, we present a solution to Moments in Time (MIT) [1] Challenge. Current methods for trimmed video recognition often utilize inflated 3D (I3D) [2] to capture spatial-temporal features. First, we explore off-the-shelf structures like non-local [3], I3D, TRN [4] and their variants. After a plenty of experiments, we find that for MIT, a strong 2D convolution backbone following temporal relation network performs better than I3D network. We then add attention module based on TRN to learn a weight for each relation so that the model can capture the important moment better. We also design uniform sampling over videos and relation restriction policy to further enhance testing performance.",
"title": ""
},
{
"docid": "8cd701723c72b16dfe7d321cb657ee31",
"text": "A coupled-inductor double-boost inverter (CIDBI) is proposed for microinverter photovoltaic (PV) module system, and the control strategy applied to it is analyzed. Also, the operation principle of the proposed inverter is discussed and the gain from dc to ac is deduced in detail. The main attribute of the CIDBI topology is the fact that it generates an ac output voltage larger than the dc input one, depending on the instantaneous duty cycle and turns ratio of the coupled inductor as well. This paper points out that the gain is proportional to the duty cycle approximately when the duty cycle is around 0.5 and the synchronized pulsewidth modulation can be applicable to this novel inverter. Finally, the proposed inverter servers as a grid inverter in the grid-connected PV system and the experimental results show that the CIDBI can implement the single-stage PV-grid-connected power generation competently and be of small volume and high efficiency by leaving out the transformer or the additional dc-dc converter.",
"title": ""
},
{
"docid": "cf0a52fb8b55cf253f560aa8db35717a",
"text": "Big Data though it is a hype up-springing many technical challenges that confront both academic research communities and commercial IT deployment, the root sources of Big Data are founded on data streams and the curse of dimensionality. It is generally known that data which are sourced from data streams accumulate continuously making traditional batch-based model induction algorithms infeasible for real-time data mining. Feature selection has been popularly used to lighten the processing load in inducing a data mining model. However, when it comes to mining over high dimensional data the search space from which an optimal feature subset is derived grows exponentially in size, leading to an intractable demand in computation. In order to tackle this problem which is mainly based on the high-dimensionality and streaming format of data feeds in Big Data, a novel lightweight feature selection is proposed. The feature selection is designed particularly for mining streaming data on the fly, by using accelerated particle swarm optimization (APSO) type of swarm search that achieves enhanced analytical accuracy within reasonable processing time. In this paper, a collection of Big Data with exceptionally large degree of dimensionality are put under test of our new feature selection algorithm for performance evaluation.",
"title": ""
},
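The record above searches the space of feature subsets with a swarm optimizer because exhaustive search is intractable in high dimensions. The toy below uses a plain binary PSO-style update rather than the paper's APSO variant, and a Gaussian naive Bayes cross-validation score as the particle fitness; the update rule, particle count, and synthetic data are all assumptions for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 20))
y = (X[:, 0] + X[:, 3] - X[:, 7] > 0).astype(int)   # only 3 features matter

def score(mask):
    """Fitness of a binary feature mask: quick cross-validated accuracy."""
    if not mask.any():
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask], y, cv=3).mean()

n_particles, n_iter = 10, 15
pos = rng.random((n_particles, 20)) < 0.5            # binary feature masks
best_pos = pos.copy()
best_scores = np.array([score(p) for p in pos])
g_best = best_pos[best_scores.argmax()].copy()

for _ in range(n_iter):
    for i in range(n_particles):
        # Move each bit toward the personal and global best with some randomness.
        prob = 0.1 + 0.45 * best_pos[i] + 0.45 * g_best
        pos[i] = rng.random(20) < prob
        s = score(pos[i])
        if s > best_scores[i]:
            best_scores[i], best_pos[i] = s, pos[i].copy()
    g_best = best_pos[best_scores.argmax()].copy()

print("selected features:", np.flatnonzero(g_best), "cv accuracy:", best_scores.max())
```

For streaming data, the same search would be rerun (or warm-started) on each incoming window, which is why the paper emphasizes keeping the per-evaluation cost light.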
{
"docid": "a42ca90e38f8fcdea60df967c7ca8ecd",
"text": "DDoS defense today relies on expensive and proprietary hardware appliances deployed at fixed locations. This introduces key limitations with respect to flexibility (e.g., complex routing to get traffic to these “chokepoints”) and elasticity in handling changing attack patterns. We observe an opportunity to address these limitations using new networking paradigms such as softwaredefined networking (SDN) and network functions virtualization (NFV). Based on this observation, we design and implement Bohatei, a flexible and elastic DDoS defense system. In designing Bohatei, we address key challenges with respect to scalability, responsiveness, and adversary-resilience. We have implemented defenses for several DDoS attacks using Bohatei. Our evaluations show that Bohatei is scalable (handling 500 Gbps attacks), responsive (mitigating attacks within one minute), and resilient to dynamic adversaries.",
"title": ""
},
{
"docid": "12267eb671d0b7b12f04e8b04637f0b6",
"text": "Monopulse antennas can be used for accurate and rapid angle estimation in radar systems [1]. This paper presents a new kind of monopulse antenna base on two-dimensional elliptical lens. As an example, a patch-fed elliptical lens antenna is designed at 35 GHz. Simulations show the designed lens antenna exhibits clean and symmetrical patterns on both sum and difference ports. A very deep null is achieved in the difference pattern because of the circuit symmetry.",
"title": ""
},
{
"docid": "5eb4ba54e8f1288c8fa9222d664704b1",
"text": "Common Information Model (CIM) is widely adopted by many utilities since it offers interoperability through standard information models. Storing, processing, retrieving, and providing concurrent access of the large power network models to the various power system applications in CIM framework are the current challenges faced by utility operators. As the power network models resemble largely connected-data sets, the design of CIM oriented database has to support high-speed data retrieval of the connected-data and efficient storage for processing. The graph database is gaining wide acceptance for storing and processing of largely connected-data for various applications. This paper presents a design of CIM oriented graph database (CIMGDB) for storing and processing the largely connected-data of power system applications. Three significant advantages of the CIMGDB are efficient data retrieval and storage, agility to adapt dynamic changes in CIM profile, and greater flexibility of modeling CIM unified modeling language (UML) in GDB. The CIMGDB does not need a predefined database schema. Therefore, the CIM semantics needs to be added to the artifacts of GDB for every instance of CIM objects storage. A CIM based object-graph mapping methodology is proposed to automate the process. An integration of CIMGDB and power system applications is discussed by an implementation architecture. The data-intensive network topology processing (NTP) is implemented, and demonstrated for six IEEE test networks and one practical 400 kV Maharashtra network. Results such as computation time of executing network topology processing evaluate the performance of the CIMGDB.",
"title": ""
},
{
"docid": "99cd180d0bb08e6360328b77219919c1",
"text": "In this paper, we describe our approach to RecSys 2015 challenge problem. Given a dataset of item click sessions, the problem is to predict whether a session results in a purchase and which items are purchased if the answer is yes.\n We define a simpler analogous problem where given an item and its session, we try to predict the probability of purchase for the given item. For each session, the predictions result in a set of purchased items or often an empty set.\n We apply monthly time windows over the dataset. For each item in a session, we engineer features regarding the session, the item properties, and the time window. Then, a balanced random forest classifier is trained to perform predictions on the test set.\n The dataset is particularly challenging due to privacy-preserving definition of a session, the class imbalance problem, and the volume of data. We report our findings with respect to feature engineering, the choice of sampling schemes, and classifier ensembles. Experimental results together with benefits and shortcomings of the proposed approach are discussed. The solution is efficient and practical in commodity computers.",
"title": ""
},
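The record above trains a balanced random forest on engineered per-item session features to cope with the rarity of purchases. The sketch below approximates that idea with scikit-learn's class_weight="balanced_subsample" option on synthetic session features; the features, purchase model, and class ratio are invented for illustration and are not the competition pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Toy per-item session features: clicks on the item, total session clicks, dwell time.
X = np.column_stack([
    rng.poisson(2, n),
    rng.poisson(6, n),
    rng.exponential(30.0, n),
])
# Rare positive class: purchase probability grows with item clicks and dwell time.
p_buy = 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.01 * X[:, 2] - 4.5)))
y = rng.random(n) < p_buy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200,
                             class_weight="balanced_subsample",
                             random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```

Re-weighting each bootstrap sample counteracts the buy/no-buy imbalance, which is the same motivation as the balanced sampling scheme described in the record.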
{
"docid": "a6acba54f34d1d101f4abb00f4fe4675",
"text": "We study the potential flow of information in interaction networks, that is, networks in which the interactions between the nodes are being recorded. The central notion in our study is that of an information channel. An information channel is a sequence of interactions between nodes forming a path in the network which respects the time order. As such, an information channel represents a potential way information could have flown in the interaction network. We propose algorithms to estimate information channels of limited time span from every node to other nodes in the network. We present one exact and one more efficient approximate algorithm. Both algorithms are onepass algorithms. The approximation algorithm is based on an adaptation of the HyperLogLog sketch, which allows easily combining the sketches of individual nodes in order to get estimates of how many unique nodes can be reached from groups of nodes as well. We show how the results of our algorithm can be used to build efficient influence oracles for solving the Influence maximization problem which deals with finding top k seed nodes such that the information spread from these nodes is maximized. Experiments show that the use of information channels is an interesting data-driven and model-independent way to find top k influential nodes in interaction networks.",
"title": ""
},
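The record above merges per-node reachability summaries in one pass so that the influence of a node (or a group of nodes) can be estimated quickly. The exact-set version below shows that merging step for time-respecting paths; the real system is assumed to replace the Python sets with HyperLogLog sketches (whose cheap unions are what make group estimates fast) and to bound the allowed time span, which this toy omits.

```python
from collections import defaultdict

# Interactions are (u, v, t): u contacted v at time t.
# Information can only flow forward in time along such contacts.
interactions = [
    ("a", "b", 1), ("b", "c", 2), ("c", "d", 3),
    ("d", "a", 4), ("b", "d", 5),
]

# Process interactions in reverse chronological order: when (u, v, t) is seen,
# everything reachable from v via later interactions is already in reach[v].
reach = defaultdict(set)
for u, v, t in sorted(interactions, key=lambda e: e[2], reverse=True):
    reach[u] |= reach[v] | {v}

for node in sorted({x for e in interactions for x in e[:2]}):
    print(node, "->", sorted(reach[node]))
```

Swapping each set for a mergeable cardinality sketch keeps the one-pass structure but reduces memory from the full node sets to a few kilobytes per node, at the cost of approximate counts.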
{
"docid": "74fb666c47afc81b8e080f730e0d1fe0",
"text": "In current commercial Web search engines, queries are processed in the conjunctive mode, which requires the search engine to compute the intersection of a number of posting lists to determine the documents matching all query terms. In practice, the intersection operation takes a significant fraction of the query processing time, for some queries dominating the total query latency. Hence, efficient posting list intersection is critical for achieving short query latencies. In this work, we focus on improving the performance of posting list intersection by leveraging the compute capabilities of recent multicore systems. To this end, we consider various coarse-grained and fine-grained parallelization models for list intersection. Specifically, we present an algorithm that partitions the work associated with a given query into a number of small and independent tasks that are subsequently processed in parallel. Through a detailed empirical analysis of these alternative models, we demonstrate that exploiting parallelism at the finest-level of granularity is critical to achieve the best performance on multicore systems. On an eight-core system, the fine-grained parallelization method is able to achieve more than five times reduction in average query processing time while still exploiting the parallelism for high query throughput.",
"title": ""
},
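The record above speeds up conjunctive query processing by splitting list intersection into many small independent tasks. The sketch below shows one plausible fine-grained decomposition (chunking the shorter posting list and probing the longer one with binary search); the chunking scheme is an assumption, and the tasks are run sequentially here where a real engine would schedule them across cores.

```python
from bisect import bisect_left

def intersect_chunk(chunk, other):
    """Intersect a slice of the shorter posting list against the longer sorted list."""
    out = []
    for doc in chunk:
        i = bisect_left(other, doc)
        if i < len(other) and other[i] == doc:
            out.append(doc)
    return out

def intersect(short, long, n_tasks=4):
    """Split the shorter list into n_tasks chunks; each chunk is an independent task."""
    size = max(1, (len(short) + n_tasks - 1) // n_tasks)
    chunks = [short[i:i + size] for i in range(0, len(short), size)]
    results = [intersect_chunk(c, long) for c in chunks]  # could run in parallel
    return [d for r in results for d in r]

a = list(range(0, 1000, 3))   # posting list for term A (sorted doc ids)
b = list(range(0, 1000, 7))   # posting list for term B
print(intersect(b, a)[:10])   # docs containing both terms: multiples of 21
```

Because each chunk touches disjoint parts of the shorter list and only reads the longer one, the tasks share no mutable state, which is what makes the fine-grained model easy to parallelize.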
{
"docid": "c16499b3945603d04cf88fec7a2c0a85",
"text": "Recovering structure and motion parameters given a image pair or a sequence of images is a well studied problem in computer vision. This is often achieved by employing Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM) algorithms based on the real-time requirements. Recently, with the advent of Convolutional Neural Networks (CNNs) researchers have explored the possibility of using machine learning techniques to reconstruct the 3D structure of a scene and jointly predict the camera pose. In this work, we present a framework that achieves state-of-the-art performance on single image depth prediction for both indoor and outdoor scenes. The depth prediction system is then extended to predict optical flow and ultimately the camera pose and trained end-to-end. Our framework outperforms previous deep-learning based motion prediction approaches, and we also demonstrate that the state-of-the-art metric depths can be further improved using the knowledge of pose.",
"title": ""
},
{
"docid": "b740fd9a56701ddd8c54d92f45895069",
"text": "In vivo imaging of apoptosis in a preclinical setting in anticancer drug development could provide remarkable advantages in terms of translational medicine. So far, several imaging technologies with different probes have been used to achieve this goal. Here we describe a bioluminescence imaging approach that uses a new formulation of Z-DEVD-aminoluciferin, a caspase 3/7 substrate, to monitor in vivo apoptosis in tumor cells engineered to express luciferase. Upon apoptosis induction, Z-DEVD-aminoluciferin is cleaved by caspase 3/7 releasing aminoluciferin that is now free to react with luciferase generating measurable light. Thus, the activation of caspase 3/7 can be measured by quantifying the bioluminescent signal. Using this approach, we have been able to monitor caspase-3 activation and subsequent apoptosis induction after camptothecin and temozolomide treatment on xenograft mouse models of colon cancer and glioblastoma, respectively. Treated mice showed more than 2-fold induction of Z-DEVD-aminoluciferin luminescent signal when compared to the untreated group. Combining D-luciferin that measures the total tumor burden, with Z-DEVD-aminoluciferin that assesses apoptosis induction via caspase activation, we confirmed that it is possible to follow non-invasively tumor growth inhibition and induction of apoptosis after treatment in the same animal over time. Moreover, here we have proved that following early apoptosis induction by caspase 3 activation is a good biomarker that accurately predicts tumor growth inhibition by anti-cancer drugs in engineered colon cancer and glioblastoma cell lines and in their respective mouse xenograft models.",
"title": ""
},
{
"docid": "748ae7abfd8b1dfb3e79c94c5adace9d",
"text": "Users routinely access cloud services through third-party apps on smartphones by giving apps login credentials (i.e., a username and password). Unfortunately, users have no assurance that their apps will properly handle this sensitive information. In this paper, we describe the design and implementation of ScreenPass, which significantly improves the security of passwords on touchscreen devices. ScreenPass secures passwords by ensuring that they are entered securely, and uses taint-tracking to monitor where apps send password data. The primary technical challenge addressed by ScreenPass is guaranteeing that trusted code is always aware of when a user is entering a password. ScreenPass provides this guarantee through two techniques. First, ScreenPass includes a trusted software keyboard that encourages users to specify their passwords' domains as they are entered (i.e., to tag their passwords). Second, ScreenPass performs optical character recognition (OCR) on a device's screenbuffer to ensure that passwords are entered only through the trusted software keyboard. We have evaluated ScreenPass through experiments with a prototype implementation, two in-situ user studies, and a small app study. Our prototype detected a wide range of dynamic and static keyboard-spoofing attacks and generated zero false positives. As long as a screen is off, not updated, or not tapped, our prototype consumes zero additional energy; in the worst case, when a highly interactive app rapidly updates the screen, our prototype under a typical configuration introduces only 12% energy overhead. Participants in our user studies tagged their passwords at a high rate and reported that tagging imposed no additional burden. Finally, a study of malicious and non-malicious apps running under ScreenPass revealed several cases of password mishandling.",
"title": ""
},
{
"docid": "467f7ac9d8f52b9b82257e736910fab6",
"text": "The manual assessment of activities of daily living (ADLs) is a fundamental problem in elderly care. The use of miniature sensors placed in the environment or worn by a person has great potential in effective and unobtrusive long term monitoring and recognition of ADLs. This paper presents an effective and unobtrusive activity recognition system based on the combination of the data from two different types of sensors: RFID tag readers and accelerometers. We evaluate our algorithms on non-scripted datasets of 10 housekeeping activities performed by 12 subjects. The experimental results show that recognition accuracy can be significantly improved by fusing the two different types of sensors. We analyze different acceleration features and algorithms, and based on tag detections we suggest the best tagspsila placements and the key objects to be tagged for each activity.",
"title": ""
},
{
"docid": "af5a2ad28ab61015c0344bf2e29fe6a7",
"text": "Recent years have shown that more than ever governments and intelligence agencies try to control and bypass the cryptographic means used for the protection of data. Backdooring encryption algorithms is considered as the best way to enforce cryptographic control. Until now, only implementation backdoors (at the protocol/implementation/management level) are generally considered. In this paper we propose to address the most critical issue of backdoors: mathematical backdoors or by-design backdoors, which are put directly at the mathematical design of the encryption algorithm. While the algorithm may be totally public, proving that there is a backdoor, identifying it and exploiting it, may be an intractable problem. We intend to explain that it is probably possible to design and put such backdoors. Considering a particular family (among all the possible ones), we present BEA-1, a block cipher algorithm which is similar to the AES and which contains a mathematical backdoor enabling an operational and effective cryptanalysis. The BEA-1 algorithm (80-bit block size, 120-bit key, 11 rounds) is designed to resist to linear and differential cryptanalyses. A challenge will be proposed to the cryptography community soon. Its aim is to assess whether our backdoor is easily detectable and exploitable or not.",
"title": ""
}
] |
scidocsrr
|
9632143dff7a9b0ff776d5ce7a1d8b4f
|
Acing the IOC Game: Toward Automatic Discovery and Analysis of Open-Source Cyber Threat Intelligence
|
[
{
"docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24",
"text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.",
"title": ""
}
] |
[
{
"docid": "deba3a2c56f32f15aa0b41e9ff16d2e3",
"text": "This article reviews what is currently known about how men and women respond to the presentation of visual sexual stimuli. While the assumption that men respond more to visual sexual stimuli is generally empirically supported, previous reports of sex differences are confounded by the variable content of the stimuli presented and measurement techniques. We propose that the cognitive processing stage of responding to sexual stimuli is the first stage in which sex differences occur. The divergence between men and women is proposed to occur at this time, reflected in differences in neural activation, and contribute to previously reported sex differences in downstream peripheral physiological responses and subjective reports of sexual arousal. Additionally, this review discusses factors that may contribute to the variability in sex differences observed in response to visual sexual stimuli. Factors include participant variables, such as hormonal state and socialized sexual attitudes, as well as variables specific to the content presented in the stimuli. Based on the literature reviewed, we conclude that content characteristics may differentially produce higher levels of sexual arousal in men and women. Specifically, men appear more influenced by the sex of the actors depicted in the stimuli while women's response may differ with the context presented. Sexual motivation, perceived gender role expectations, and sexual attitudes are possible influences. These differences are of practical importance to future research on sexual arousal that aims to use experimental stimuli comparably appealing to men and women and also for general understanding of cognitive sex differences.",
"title": ""
},
{
"docid": "06e3d228e9fac29dab7180e56f087b45",
"text": "Curiosity is thought to be an intrinsically motivated driving force for seeking information. Thus, the opportunity for an information gain (IG) should instil curiosity in humans and result in information gathering actions. To investigate if, and how, information acts as an intrinsic reward, a search task was set in a context of blurred background images which could be revealed by iterative clicking. The search task was designed such that it prevented efficient IG about the underlying images. Participants therefore had to trade between clicking regions with high search target probability or high expected image content information. Image content IG was established from “information-maps” based on participants exploration with the intention of understanding (1) the main theme of the image and (2) how interesting the image might appear to others. Note that IG is in this thesis not identical with the information theoretic concept of information gain, the quantities are however probably related. It was hypothesised that participants would be distracted by visually informative regions and that images independently rated as more interesting would yield higher image based IG. It was also hypothesised that image based IG would increase as a function of time. Results show that participants sometimes explored images driven by curiosity, and that there was considerable individual variation in which images participants were curious about. Independent interest ratings did not account for image based IG. The level of IG increased over trials, interestingly without affecting participants’ performance on the visual search task designed to prevent IG. Results support that IG is rewarding as participants learned to optimize IG over trials without compromising performance on the extrinsically motivated search; managing to both keep the cake and eat it.",
"title": ""
},
{
"docid": "853703c46af2dda7735e7783b56cba44",
"text": "PURPOSE\nWe compared the efficacy and safety of sodium hyaluronate (SH) and carboxymethylcellulose (CMC) in treating mild to moderate dry eye.\n\n\nMETHODS\nSixty-seven patients with mild to moderate dry eye were enrolled in this prospective, randomized, blinded study. They were treated 6 times a day with preservative-free unit dose formula eyedrops containing 0.1% SH or 0.5% CMC for 8 weeks. Corneal and conjunctival staining with fluorescein, tear film breakup time, subjective symptoms, and adverse reactions were assessed at baseline, 4 weeks, and 8 weeks after treatment initiation.\n\n\nRESULTS\nThirty-two patients were randomly assigned to the SH group and 33 were randomly assigned to the CMC group. Both the SH and CMC groups showed statistically significant improvements in corneal and conjunctival staining sum scores, tear film breakup time, and dry eye symptom score at 4 and 8 weeks after treatment initiation. However, there were no statistically significant differences in any of the indices between the 2 treatment groups. There were no significant adverse reactions observed during follow-up.\n\n\nCONCLUSIONS\nThe efficacies of SH and CMC were equivalent in treating mild to moderate dry eye. SH and CMC preservative-free artificial tear formulations appropriately manage dry eye sign and symptoms and show safety and efficacy when frequently administered in a unit dose formula.",
"title": ""
},
{
"docid": "8f70026ff59ed1ae54ab5b6dadd2a3da",
"text": "Exoskeleton suit is a kind of human-machine robot, which combines the humans intelligence with the powerful energy of mechanism. It can help people to carry heavy load, walking on kinds of terrains and have a broadly apply area. Though many exoskeleton suits has been developed, there need many complex sensors between the pilot and the exoskeleton system, which decrease the comfort of the pilot. Sensitivity amplification control (SAC) is a method applied in exoskeleton system without any sensors between the pilot and the exoskeleton. In this paper simulation research was made to verify the feasibility of SAC include a simple 1-dof model and a swing phase model of 3-dof. A PID controller was taken to describe the human-machine interface model. Simulation results show the human only need to exert a scale-down version torque compared with the actuator and decrease the power consumes of the pilot.",
"title": ""
},
{
"docid": "2b00c07248c468447e12aff67c52a192",
"text": "Video fluoroscopy is commonly used in the study of swallowing kinematics. However, various procedures used in linear measurements obtained from video fluoroscopy may contribute to increased variability or measurement error. This study evaluated the influence of calibration referent and image rotation on measurement variability for hyoid and laryngeal displacement during swallowing. Inter- and intrarater reliabilities were also estimated for hyoid and laryngeal displacement measurements across conditions. The use of different calibration referents did not contribute significantly to variability in measures of hyoid and laryngeal displacement but image rotation affected horizontal measures for both structures. Inter- and intrarater reliabilities were high. Using the 95% confidence interval as the error index, measurement error was estimated to range from 2.48 to 3.06 mm. These results address procedural decisions for measuring hyoid and laryngeal displacement in video fluoroscopic swallowing studies.",
"title": ""
},
{
"docid": "296120e8ac6a03c8079fe343058f26ff",
"text": "OBJECTIVE\nDegenerative ataxias in children present a rare condition where effective treatments are lacking. Intensive coordinative training based on physiotherapeutic exercises improves degenerative ataxia in adults, but such exercises have drawbacks for children, often including a lack of motivation for high-frequent physiotherapy. Recently developed whole-body controlled video game technology might present a novel treatment strategy for highly interactive and motivational coordinative training for children with degenerative ataxias.\n\n\nMETHODS\nWe examined the effectiveness of an 8-week coordinative training for 10 children with progressive spinocerebellar ataxia. Training was based on 3 Microsoft Xbox Kinect video games particularly suitable to exercise whole-body coordination and dynamic balance. Training was started with a laboratory-based 2-week training phase and followed by 6 weeks training in children's home environment. Rater-blinded assessments were performed 2 weeks before laboratory-based training, immediately prior to and after the laboratory-based training period, as well as after home training. These assessments allowed for an intraindividual control design, where performance changes with and without training were compared.\n\n\nRESULTS\nAtaxia symptoms were significantly reduced (decrease in Scale for the Assessment and Rating of Ataxia score, p = 0.0078) and balance capacities improved (dynamic gait index, p = 0.04) after intervention. Quantitative movement analysis revealed improvements in gait (lateral sway: p = 0.01; step length variability: p = 0.01) and in goal-directed leg placement (p = 0.03).\n\n\nCONCLUSIONS\nDespite progressive cerebellar degeneration, children are able to improve motor performance by intensive coordination training. Directed training of whole-body controlled video games might present a highly motivational, cost-efficient, and home-based rehabilitation strategy to train dynamic balance and interaction with dynamic environments in a large variety of young-onset neurologic conditions.\n\n\nCLASSIFICATION OF EVIDENCE\nThis study provides Class III evidence that directed training with Xbox Kinect video games can improve several signs of ataxia in adolescents with progressive ataxia as measured by SARA score, Dynamic Gait Index, and Activity-specific Balance Confidence Scale at 8 weeks of training.",
"title": ""
},
{
"docid": "b8de76afab03ad223fb4713b214e3fec",
"text": "Companies facing new requirements for governance are scrambling to buttress financial-reporting systems, overhaul board structures--whatever it takes to comply. But there are limits to how much good governance can be imposed from the outside. Boards know what they ought to be: seats of challenge and inquiry that add value without meddling and make CEOs more effective but not all-powerful. A board can reach that goal only if it functions as a high-performance team, one that is competent, coordinated, collegial, and focused on an unambiguous goal. Such entities don't just evolve; they must be constructed to an exacting blueprint--what the author calls board building. In this article, Nadler offers an agenda and a set of tools that boards can use to define and achieve their objectives. It's important for a board to conduct regular self-assessments and to pay attention to the results of those analyses. As a first step, the directors and the CEO should agree on which of the following common board models best fits the company: passive, certifying, engaged, intervening, or operating. The directors and the CEO should then analyze which business tasks are most important and allot sufficient time and resources to them. Next, the board should take inventory of each director's strengths to ensure that the group as a whole possesses the skills necessary to do its work. Directors must exert more influence over meeting agendas and make sure they have the right information at the right time and in the right format to perform their duties. Finally, the board needs to foster an engaged culture characterized by candor and a willingness to challenge. An ambitious board-building process, devised and endorsed both by directors and by management, can potentially turn a good board into a great one.",
"title": ""
},
{
"docid": "d6587e4d37742c25355296da3a718c41",
"text": "Vehicular Ad hoc Networks (VANETs) are classified as an application of Mobile Ad-hoc Networks (MANETs) that has the potential in improving road safety and providing Intelligent Transportation System (ITS). Vehicular communication system facilitates communication devices for exchange of information among vehicles and vehicles and Road Side Units (RSUs).The era of vehicular adhoc networks is now gaining attention and momentum. Researchers and developers have built VANET simulation tools to allow the study and evaluation of various routing protocols, various emergency warning protocols and others VANET applications. Simulation of VANET routing protocols and its applications is fundamentally different from MANETs simulation because in VANETs, vehicular environment impose new issues and requirements, such as multi-path fading, roadside obstacles, trip models, traffic flow models, traffic lights, traffic congestion, vehicular speed and mobility, drivers behaviour etc. This paper presents a comparative study of various publicly available VANET simulation tools. Currently, there are network simulators, VANET mobility generators and VANET simulators are publicly available. In particular, this paper contrast their software characteristics, graphical user interface, accuracy of simulation, ease of use, popularity, input requirements, output visualization capabilities etc. Keywords-Ad-hoc network, ITS (Intelligent Transportation System), MANET, Simulation, VANET.",
"title": ""
},
{
"docid": "0a4f5a46948310cfce44a8749cd479df",
"text": "This paper presents a tutorial introduction to contemporary cryptography. The basic information theoretic and computational properties of classical and modern cryptographic systems are presented, followed by cryptanalytic examination of several important systems and an examination of the application of cryptography to the security of timesharing systems and computer networks. The paper concludes with a guide to the cryptographic literature.",
"title": ""
},
{
"docid": "be017adea5e5c5f183fd35ac2ff6b614",
"text": "In nationally representative yearly surveys of United States 8th, 10th, and 12th graders 1991-2016 (N = 1.1 million), psychological well-being (measured by self-esteem, life satisfaction, and happiness) suddenly decreased after 2012. Adolescents who spent more time on electronic communication and screens (e.g., social media, the Internet, texting, gaming) and less time on nonscreen activities (e.g., in-person social interaction, sports/exercise, homework, attending religious services) had lower psychological well-being. Adolescents spending a small amount of time on electronic communication were the happiest. Psychological well-being was lower in years when adolescents spent more time on screens and higher in years when they spent more time on nonscreen activities, with changes in activities generally preceding declines in well-being. Cyclical economic indicators such as unemployment were not significantly correlated with well-being, suggesting that the Great Recession was not the cause of the decrease in psychological well-being, which may instead be at least partially due to the rapid adoption of smartphones and the subsequent shift in adolescents' time use. (PsycINFO Database Record",
"title": ""
},
{
"docid": "dffe5305558e10a0ceba499f3a01f4d8",
"text": "A simple framework Probabilistic Multi-view Graph Embedding (PMvGE) is proposed for multi-view feature learning with many-to-many associations so that it generalizes various existing multi-view methods. PMvGE is a probabilistic model for predicting new associations via graph embedding of the nodes of data vectors with links of their associations. Multi-view data vectors with many-to-many associations are transformed by neural networks to feature vectors in a shared space, and the probability of new association between two data vectors is modeled by the inner product of their feature vectors. While existing multi-view feature learning techniques can treat only either of many-to-many association or non-linear transformation, PMvGE can treat both simultaneously. By combining Mercer’s theorem and the universal approximation theorem, we prove that PMvGE learns a wide class of similarity measures across views. Our likelihoodbased estimator enables efficient computation of non-linear transformations of data vectors in largescale datasets by minibatch SGD, and numerical experiments illustrate that PMvGE outperforms existing multi-view methods.",
"title": ""
},
{
"docid": "1a9086eb63bffa5a36fde268fb74c7a6",
"text": "This brief presents a simple reference circuit with channel-length modulation compensation to generate a reference voltage of 221 mV using subthreshold of MOSFETs at supply voltage of 0.85 V with power consumption of 3.3 muW at room temperature using TSMC 0.18-mum technology. The proposed circuit occupied in less than 0.0238 mm 2 achieves the reference voltage variation of 2 mV/V for supply voltage from 0.9 to 2.5V and about 6 mV of temperature variation in the range from -20degC to 120 degC. The agreement of simulation and measurement data is demonstrated",
"title": ""
},
{
"docid": "226582e50ef3e91b8325b140efea6a8e",
"text": "This special issue focuses on the theme of sensory processing dysfunction in schizophrenia. For more than 50 years, from approximately the time of Bleuler until the early 1960s, sensory function was considered one of the few preserved functions in schizophrenia (Javitt1). Fortunately, the last several decades have brought a renewed and accelerating interest in this topic. The articles included in the issue range from those addressing fundamental bases of sensory dysfunction (Brenner, Yoon, and Turetsky) to those that examine how elementary deficits in sensory processing affect the sensory experience of individuals with schizophrenia (Butler, Kantrowitz, and Coleman) to the question of how sensory-based treatments may lead to improvement in remediation strategies (Adcock). Although addressing only a small portion of the current complex and burgeoning literature on sensory impairments across modalities, the present articles provide a cross-section of the issues currently under investigation. These studies also underscore the severe challenges that individuals with schizophrenia face when trying to decode the complex world around them.",
"title": ""
},
{
"docid": "76a7b28b225781bc15b887569cd3181b",
"text": "Mangroves are defined by the presence of trees that mainly occur in the intertidal zone, between land and sea, in the (sub) tropics. The intertidal zone is characterised by highly variable environmental factors, such as temperature, sedimentation and tidal currents. The aerial roots of mangroves partly stabilise this environment and provide a substratum on which many species of plants and animals live. Above the water, the mangrove trees and canopy provide important habitat for a wide range of species. These include birds, insects, mammals and reptiles. Below the water, the mangrove roots are overgrown by epibionts such as tunicates, sponges, algae, and bivalves. The soft substratum in the mangroves forms habitat for various infaunal and epifaunal species, while the space between roots provides shelter and food for motile fauna such as prawns, crabs and fishes. Mangrove litter is transformed into detritus, which partly supports the mangrove food web. Plankton, epiphytic algae and microphytobenthos also form an important basis for the mangrove food web. Due to the high abundance of food and shelter, and low predation pressure, mangroves form an ideal habitat for a variety of animal species, during part or all of their life cycles. As such, mangroves may function as nursery habitats for (commercially important) crab, prawn and fish species, and support offshore fish populations and fisheries. Evidence for linkages between mangroves and offshore habitats by animal migrations is still scarce, but highly needed for management and conservation purposes. Here, we firstly reviewed the habitat function of mangroves by common taxa of terrestrial and marine animals. Secondly, we reviewed the literature with regard to the degree of interlinkage between mangroves and adjacent habitats, a research area which has received increasing attention in the last decade. Finally, we reviewed current insights into the degree to which mangrove litter fuels the mangrove food web, since this has been the subject of longstanding debate. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c35608f769b7844adc482ff9f7a79278",
"text": "Video annotation is an effective way to facilitate content-based analysis for videos. Automatic machine learning methods are commonly used to accomplish this task. Among these, active learning is one of the most effective methods, especially when the training data cost a great deal to obtain. One of the most challenging problems in active learning is the sample selection. Various sampling strategies can be used, such as uncertainty, density, and diversity, but it is difficult to strike a balance among them. In this paper, we provide a visualization-based batch mode sampling method to handle such a problem. An iso-contour-based scatterplot is used to provide intuitive clues for the representativeness and informativeness of samples and assist users in sample selection. A semisupervised metric learning method is incorporated to help generate an effective scatterplot reflecting the high-level semantic similarity for visual sample selection. Moreover, both quantitative and qualitative evaluations are provided to show that the visualization-based method can effectively enhance sample selection in active learning.",
"title": ""
},
{
"docid": "5bdbf3fa515da2c49c99740f3f6b420e",
"text": "Bearing failure is one of the foremost causes of breakdowns in rotating machinery and such failure can be catastrophic, resulting in costly downtime. One of the key issues in bearing prognostics is to detect the defect at its incipient stage and alert the operator before it develops into a catastrophic failure. Signal de-noising and extraction of the weak signature are crucial to bearing prognostics since the inherent deficiency of the measuring mechanism often introduces a great amount of noise to the signal. In addition, the signature of a defective bearing is spread across a wide frequency band and hence can easily become masked by noise and low frequency effects. As a result, robust methods are needed to provide more evident information for bearing performance assessment and prognostics. This paper introduces enhanced and robust prognostic methods for rolling element bearing including a wavelet filter based method for weak signature enhancement for fault identification and Self Organizing Map (SOM) based method for performance degradation assessment. The experimental results demonstrate that the bearing defects can be detected at an early stage of development when both optimal wavelet filter and SOM method are used. q 2004 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "638373cda30d5f08976a5d796283ed3e",
"text": "A coax-feed wideband dual-polarized patch antenna with low cross polarization and high port isolation is presented in this letter. The proposed antenna contains two pairs of T-shaped slots on the two bowtie-shaped patches separately. This structure changes the path of the current and keeps the cross polarization under -40 dB. By introducing two short pins, the isolation between the two ports remains more than 38 dB in the whole bandwidth with the front-to-back ratio better than 19 dB. Moreover, the proposed antenna achieving a 10-dB return loss bandwidth of 1.70-2.73 GHz has a compact structure, thus making it easy to be extended to form an array, which can be used as a base station antenna for PCS, UMTS, and WLAN/WiMAX applications.",
"title": ""
},
{
"docid": "a2d38448513e69f514f88eb852e76292",
"text": "It is cost-efficient for a tenant with a limited budget to establish a virtual MapReduce cluster by renting multiple virtual private servers (VPSs) from a VPS provider. To provide an appropriate scheduling scheme for this type of computing environment, we propose in this paper a hybrid job-driven scheduling scheme (JoSS for short) from a tenant's perspective. JoSS provides not only job-level scheduling, but also map-task level scheduling and reduce-task level scheduling. JoSS classifies MapReduce jobs based on job scale and job type and designs an appropriate scheduling policy to schedule each class of jobs. The goal is to improve data locality for both map tasks and reduce tasks, avoid job starvation, and improve job execution performance. Two variations of JoSS are further introduced to separately achieve a better map-data locality and a faster task assignment. We conduct extensive experiments to evaluate and compare the two variations with current scheduling algorithms supported by Hadoop. The results show that the two variations outperform the other tested algorithms in terms of map-data locality, reduce-data locality, and network overhead without incurring significant overhead. In addition, the two variations are separately suitable for different MapReduce-workload scenarios and provide the best job performance among all tested algorithms.",
"title": ""
},
{
"docid": "f590eac54deff0c65732cf9922db3b93",
"text": "Lichen planus (LP) is a common chronic inflammatory condition that can affect skin and mucous membranes, including the oral mucosa. Because of the anatomic, physiologic and functional peculiarities of the oral cavity, the oral variant of LP (OLP) requires specific evaluations in terms of diagnosis and management. In this comprehensive review, we discuss the current developments in the understanding of the etiopathogenesis, clinical-pathologic presentation, and treatment of OLP, and provide follow-up recommendations informed by recent data on the malignant potential of the disease as well as health economics evaluations.",
"title": ""
},
{
"docid": "0c509f98c65a48c31d32c0c510b4c13f",
"text": "An EM based straight forward design and pattern synthesis technique for series fed microstrip patch array antennas is proposed. An optimization of each antenna element (λ/4-transmission line, λ/2-patch, λ/4-transmission line) of the array is performed separately. By introducing an equivalent circuit along with an EM parameter extraction method, each antenna element can be optimized for its resonance frequency and taper amplitude, so to shape the aperture distribution for the cascaded elements. It will be shown that the array design based on the multiplication of element factor and array factor fails in case of patch width tapering, due to the inconsistency of the element patterns. To overcome this problem a line width tapering is suggested which keeps the element patterns nearly constant while still providing a broad amplitude taper range. A symmetric 10 element antenna array with a Chebyshev tapering (-20dB side lobe level) operating at 5.8 GHz has been designed, compared for the two tapering methods and validated with measurement.",
"title": ""
}
] |
scidocsrr
|
e8bbe717500b0fb201be13a68456ecd4
|
Understanding the Digital Marketing Environment with KPIs and Web Analytics
|
[
{
"docid": "0994065c757a88373a4d97e5facfee85",
"text": "Scholarly literature suggests digital marketing skills gaps in industry, but these skills gaps are not clearly identified. The research aims to specify any digital marketing skills gaps encountered by professionals working in communication industries. In-depth interviews were undertaken with 20 communication industry professionals. A focus group followed, testing the rigour of the data. We find that a lack of specific technical skills; a need for best practice guidance on evaluation metrics, and a lack of intelligent futureproofing for dynamic technological change and development are skills gaps currently challenging the communication industry. However, the challenge of integrating digital marketing approaches with established marketing practice emerges as the key skills gap. Emerging from the key findings, a Digital Marketer Model was developed, highlighting the key competencies and skills needed by an excellent digital marketer. The research concludes that guidance on best practice, focusing upon evaluation metrics, futureproofing and strategic integration, needs to be developed for the communication industry. The Digital Marketing Model should be subject to further testing in industry and academia. Suggestions for further research are discussed.",
"title": ""
}
] |
[
{
"docid": "76efa42a492d8eb36b82397e09159c30",
"text": "attempt to foster AI and intelligent robotics research by providing a standard problem where a wide range of technologies can be integrated and examined. The first RoboCup competition will be held at the Fifteenth International Joint Conference on Artificial Intelligence in Nagoya, Japan. A robot team must actually perform a soccer game, incorporating various technologies, including design principles of autonomous agents, multiagent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor fusion. RoboCup is a task for a team of multiple fast-moving robots under a dynamic environment. Although RoboCup’s final target is a world cup with real robots, RoboCup offers a software platform for research on the software aspects of RoboCup. This article describes technical challenges involved in RoboCup, rules, and the simulation environment.",
"title": ""
},
{
"docid": "1d26fc3a5f07e7ea678753e7171846c4",
"text": "Data uncertainty is an inherent property in various applications due to reasons such as outdated sources or imprecise measurement. When data mining techniques are applied to these data, their uncertainty has to be considered to obtain high quality results. We present UK-means clustering, an algorithm that enhances the K-means algorithm to handle data uncertainty. We apply UKmeans to the particular pattern of moving-object uncertainty. Experimental results show that by considering uncertainty, a clustering algorithm can produce more accurate results.",
"title": ""
},
{
"docid": "711daac04e27d0a413c99dd20f6f82e1",
"text": "The gesture recognition using motion capture data and depth sensors has recently drawn more attention in vision recognition. Currently most systems only classify dataset with a couple of dozens different actions. Moreover, feature extraction from the data is often computational complex. In this paper, we propose a novel system to recognize the actions from skeleton data with simple, but effective, features using deep neural networks. Features are extracted for each frame based on the relative positions of joints (PO), temporal differences (TD), and normalized trajectories of motion (NT). Given these features a hybrid multi-layer perceptron is trained, which simultaneously classifies and reconstructs input data. We use deep autoencoder to visualize learnt features. The experiments show that deep neural networks can capture more discriminative information than, for instance, principal component analysis can. We test our system on a public database with 65 classes and more than 2,000 motion sequences. We obtain an accuracy above 95% which is, to our knowledge, the state of the art result for such a large dataset.",
"title": ""
},
{
"docid": "b93455e6b023910bf7711d56d16f62a2",
"text": "Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict what drugs are likely to target proteins involved with both diseases X and Y?—a query that requires reasoning about all possible proteins that might interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries—a flexible but tractable subset of first-order logic—on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.",
"title": ""
},
{
"docid": "6a8afd6713425e7dc047da08d7c4c773",
"text": "We present the first linear time (1 + /spl epsiv/)-approximation algorithm for the k-means problem for fixed k and /spl epsiv/. Our algorithm runs in O(nd) time, which is linear in the size of the input. Another feature of our algorithm is its simplicity - the only technique involved is random sampling.",
"title": ""
},
{
"docid": "93133be6094bba6e939cef14a72fa610",
"text": "We systematically searched available databases. We reviewed 6,143 studies published from 1833 to 2017. Reports in English, French, German, Italian, and Spanish were considered, as were publications in other languages if definitive treatment and recurrence at specific follow-up times were described in an English abstract. We assessed data in the manner of a meta-analysis of RCTs; further we assessed non-RCTs in the manner of a merged data analysis. In the RCT analysis including 11,730 patients, Limberg & Dufourmentel operations were associated with low recurrence of 0.6% (95%CI 0.3–0.9%) 12 months and 1.8% (95%CI 1.1–2.4%) respectively 24 months postoperatively. Analysing 89,583 patients from RCTs and non-RCTs, the Karydakis & Bascom approaches were associated with recurrence of only 0.2% (95%CI 0.1–0.3%) 12 months and 0.6% (95%CI 0.5–0.8%) 24 months postoperatively. Primary midline closure exhibited long-term recurrence up to 67.9% (95%CI 53.3–82.4%) 240 months post-surgery. For most procedures, only a few RCTs without long term follow up data exist, but substitute data from numerous non-RCTs are available. Recurrence in PSD is highly dependent on surgical procedure and by follow-up time; both must be considered when drawing conclusions regarding the efficacy of a procedure.",
"title": ""
},
{
"docid": "3688c987419daade77c44912fbc72ecf",
"text": "We propose a visual food recognition framework that integrates the inherent semantic relationships among fine-grained classes. Our method learns semantics-aware features by formulating a multi-task loss function on top of a convolutional neural network (CNN) architecture. It then refines the CNN predictions using a random walk based smoothing procedure, which further exploits the rich semantic information. We evaluate our algorithm on a large \"food-in-the-wild\" benchmark, as well as a challenging dataset of restaurant food dishes with very few training images. The proposed method achieves higher classification accuracy than a baseline which directly fine-tunes a deep learning network on the target dataset. Furthermore, we analyze the consistency of the learned model with the inherent semantic relationships among food categories. Results show that the proposed approach provides more semantically meaningful results than the baseline method, even in cases of mispredictions.",
"title": ""
},
{
"docid": "566a2b2ff835d10e0660fb89fd6ae618",
"text": "We argue that an understanding of the faculty of language requires substantial interdisciplinary cooperation. We suggest how current developments in linguistics can be profitably wedded to work in evolutionary biology, anthropology, psychology, and neuroscience. We submit that a distinction should be made between the faculty of language in the broad sense (FLB) and in the narrow sense (FLN). FLB includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements. We hypothesize that FLN only includes recursion and is the only uniquely human component of the faculty of language. We further argue that FLN may have evolved for reasons other than language, hence comparative studies might look for evidence of such computations outside of the domain of communication (for example, number, navigation, and social relations).",
"title": ""
},
{
"docid": "72345bf404d21d0f7aa1e54a5710674c",
"text": "Many real-world data sets exhibit skewed class distributions in which almost all cases are allotted to a class and far fewer cases to a smaller, usually more interesting class. A classifier induced from an imbalanced data set has, typically, a low error rate for the majority class and an unacceptable error rate for the minority class. This paper firstly provides a systematic study on the various methodologies that have tried to handle this problem. Finally, it presents an experimental study of these methodologies with a proposed mixture of expert agents and it concludes that such a framework can be a more effective solution to the problem. Our method seems to allow improved identification of difficult small classes in predictive analysis, while keeping the classification ability of the other classes in an acceptable level.",
"title": ""
},
{
"docid": "b23d73e29fc205df97f073eb571a2b47",
"text": "In this paper, we study two different trajectory planning problems for robotmanipulators. In the first case, the end-effector of the robot is constrained to move along a prescribed path in the workspace, whereas in the second case, the trajectory of the end-effector has to be determined in the presence of obstacles. Constraints of this type are called holonomic constraints. Both problems have been solved as optimal control problems. Given the dynamicmodel of the robotmanipulator, the initial state of the system, some specifications about the final state and a set of holonomic constraints, one has to find the trajectory and the actuator torques that minimize the energy consumption during the motion. The presence of holonomic constraints makes the optimal control problem particularly difficult to solve. Our method involves a numerical resolution of a reformulation of the constrained optimal control problem into an unconstrained calculus of variations problem in which the state space constraints and the dynamic equations, also regarded as constraints, are treated by means of special derivative multipliers. We solve the resulting calculus of variations problem using a numerical approach based on the Euler–Lagrange necessary condition in the integral form in which time is discretized and admissible variations for each variable are approximated using a linear combination of piecewise continuous basis functions of time. The use of the Euler–Lagrange necessary condition in integral form avoids the need for numerical corner conditions and thenecessity of patching together solutions between corners. In thisway, a generalmethod for the solution of constrained optimal control problems is obtained inwhich holonomic constraints can be easily treated. Numerical results of the application of thismethod to trajectory planning of planar horizontal robot manipulators with two revolute joints are reported. © 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5cd726f49dd0cb94fe7d2d724da9f215",
"text": "We implement pedestrian dead reckoning (PDR) for indoor localization. With a waist-mounted PDR based system on a smart-phone, we estimate the user's step length that utilizes the height change of the waist based on the Pythagorean Theorem. We propose a zero velocity update (ZUPT) method to address sensor drift error: Simple harmonic motion and a low-pass filtering mechanism combined with the analysis of gait characteristics. This method does not require training to develop the step length model. Exploiting the geometric similarity between the user trajectory and the floor map, our map matching algorithm includes three different filters to calibrate the direction errors from the gyro using building floor plans. A sliding-window-based algorithm detects corners. The system achieved 98% accuracy in estimating user walking distance with a waist-mounted phone and 97% accuracy when the phone is in the user's pocket. ZUPT improves sensor drift error (the accuracy drops from 98% to 84% without ZUPT) using 8 Hz as the cut-off frequency to filter out sensor noise. Corner length impacted the corner detection algorithm. In our experiments, the overall location error is about 0.48 meter.",
"title": ""
},
{
"docid": "dc18c0e5737b3d641418e5b33dd3f0e7",
"text": "Millimeter wave (mmWave) communications have recently attracted large research interest, since the huge available bandwidth can potentially lead to the rates of multiple gigabit per second per user. Though mmWave can be readily used in stationary scenarios, such as indoor hotspots or backhaul, it is challenging to use mmWave in mobile networks, where the transmitting/receiving nodes may be moving, channels may have a complicated structure, and the coordination among multiple nodes is difficult. To fully exploit the high potential rates of mmWave in mobile networks, lots of technical problems must be addressed. This paper presents a comprehensive survey of mmWave communications for future mobile networks (5G and beyond). We first summarize the recent channel measurement campaigns and modeling results. Then, we discuss in detail recent progresses in multiple input multiple output transceiver design for mmWave communications. After that, we provide an overview of the solution for multiple access and backhauling, followed by the analysis of coverage and connectivity. Finally, the progresses in the standardization and deployment of mmWave for mobile networks are discussed.",
"title": ""
},
{
"docid": "b5b8ae3b7b307810e1fe39630bc96937",
"text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called clusterspecific or time-varying covariates. The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.",
"title": ""
},
{
"docid": "3e7941e6d2e5c2991030950d2a13d48f",
"text": "Mobile edge cloud (MEC) is a model for enabling on-demand elastic access to, or an interaction with a shared pool of reconfigurable computing resources such as servers, storage, peer devices, applications, and services, at the edge of the wireless network in close proximity to mobile users. It overcomes some obstacles of traditional central clouds by offering wireless network information and local context awareness as well as low latency and bandwidth conservation. This paper presents a comprehensive survey of MEC systems, including the concept, architectures, and technical enablers. First, the MEC applications are explored and classified based on different criteria, the service models and deployment scenarios are reviewed and categorized, and the factors influencing the MEC system design are discussed. Then, the architectures and designs of MEC systems are surveyed, and the technical issues, existing solutions, and approaches are presented. The open challenges and future research directions of MEC are further discussed.",
"title": ""
},
{
"docid": "8c662416784ddaf8dae387926ba0b17c",
"text": "Autoimmune reactions to vaccinations may rarely be induced in predisposed individuals by molecular mimicry or bystander activation mechanisms. Autoimmune reactions reliably considered vaccine-associated, include Guillain-Barré syndrome after 1976 swine influenza vaccine, immune thrombocytopenic purpura after measles/mumps/rubella vaccine, and myopericarditis after smallpox vaccination, whereas the suspected association between hepatitis B vaccine and multiple sclerosis has not been further confirmed, even though it has been recently reconsidered, and the one between childhood immunization and type 1 diabetes seems by now to be definitively gone down. Larger epidemiological studies are needed to obtain more reliable data in most suggested associations.",
"title": ""
},
{
"docid": "9f40a57159a06ecd9d658b4d07a326b5",
"text": "_____________________________________________________________________________ The aim of the present study was to investigate a cytotoxic oxidative cell stress related and the antioxidant profile of kaempferol, quercetin, and isoquercitrin. The flavonol compounds were able to act as scavengers of superoxide anion (but not hydrogen peroxide), hypochlorous acid, chloramine and nitric oxide. Although flavonoids are widely described as antioxidants and this activity is generally related to beneficial effects on human health, here we show important cytotoxic actions of three well known flavonoids. They were able to promote hemolysis which one was exacerbated on the presence of hypochlorous acid but not by AAPH radical. Therefore, WWW.SCIELO.BR/EQ VOLUME 36, NÚMERO 2, 2011",
"title": ""
},
{
"docid": "4129d2906d3d3d96363ff0812c8be692",
"text": "In this paper, we propose a picture recommendation system built on Instagram, which facilitates users to query correlated pictures by keying in hashtags or clicking images. Users can access the value-added information (or pictures) on Instagram through the recommendation platform. In addition to collecting available hashtags using the Instagram API, the system also uses the Free Dictionary to build the relationships between all the hashtags in a knowledge base. Thus, two kinds of correlations can be provided for a query in the system; i.e., user-defined correlation and system-defined correlation. Finally, the experimental results show that users have good satisfaction degrees with both user-defined correlation and system-defined correlation methods.",
"title": ""
},
{
"docid": "8e28f1561b3a362b2892d7afa8f2164c",
"text": "Inference based techniques are one of the major approaches to analyze DNS data and detecting malicious domains. The key idea of inference techniques is to first define associations between domains based on features extracted from DNS data. Then, an inference algorithm is deployed to infer potential malicious domains based on their direct/indirect associations with known malicious ones. The way associations are defined is key to the effectiveness of an inference technique. It is desirable to be both accurate (i.e., avoid falsely associating domains with no meaningful connections) and with good coverage (i.e., identify all associations between domains with meaningful connections). Due to the limited scope of information provided by DNS data, it becomes a challenge to design an association scheme that achieves both high accuracy and good coverage. In this paper, we propose a new association scheme to identify domains controlled by the same entity. Our key idea is an indepth analysis of active DNS data to accurately separate public IPs from dedicated ones, which enables us to build high-quality associations between domains. Our scheme avoids the pitfall of naive approaches that rely on weak “co-IP” relationship of domains (i.e., two domains are resolved to the same IP) that results in low detection accuracy, and, meanwhile, identifies many meaningful connections between domains that are discarded by existing state-of-the-art approaches. Our experimental results show that the proposed association scheme not only significantly improves the domain coverage compared to existing approaches but also achieves better detection accuracy. Existing path-based inference algorithm is specifically designed for DNS data analysis. It is effective but computationally expensive. To further demonstrate the strength of our domain association scheme as well as improving inference efficiency, we investigate the effectiveness of combining our association scheme with the generic belief propagation algorithm. Through comprehensive experiments, we show that this approach offers significant efficiency and scalability improvement with only minor negative impact of detection accuracy, which suggests that such a combination could offer a good tradeoff for malicious domain detection in practice.",
"title": ""
},
{
"docid": "acfdfe2de61ec2697ef865b1e5a42721",
"text": "Artificial Immune System (AIS) algorithm is a novel and vibrant computational paradigm, enthused by the biological immune system. Over the last few years, the artificial immune system has been sprouting to solve numerous computational and combinatorial optimization problems. In this paper, we introduce the restricted MAX-kSAT as a constraint optimization problem that can be solved by a robust computational technique. Hence, we will implement the artificial immune system algorithm incorporated with the Hopfield neural network to solve the restricted MAX-kSAT problem. The proposed paradigm will be compared with the traditional method, Brute force search algorithm integrated with Hopfield neural network. The results demonstrate that the artificial immune system integrated with Hopfield network outperforms the conventional Hopfield network in solving restricted MAX-kSAT. All in all, the result has provided a concrete evidence of the effectiveness of our proposed paradigm to be applied in other constraint optimization problem. The work presented here has many profound implications for future studies to counter the variety of satisfiability problem.",
"title": ""
}
] |
scidocsrr
|
05f6e4fdb42dd18d88d41154e81b04c4
|
An overview of topic modeling and its current applications in bioinformatics
|
[
{
"docid": "24bd9a2f85b33b93609e03fc67e9e3a9",
"text": "With the rapid development of high-throughput technologies, researchers can sequence the whole metagenome of a microbial community sampled directly from the environment. The assignment of these metagenomic reads into different species or taxonomical classes is a vital step for metagenomic analysis, which is referred to as binning of metagenomic data. In this paper, we propose a new method TM-MCluster for binning metagenomic reads. First, we represent each metagenomic read as a set of \"k-mers\" with their frequencies occurring in the read. Then, we employ a probabilistic topic model -- the Latent Dirichlet Allocation (LDA) model to the reads, which generates a number of hidden \"topics\" such that each read can be represented by a distribution vector of the generated topics. Finally, as in the MCluster method, we apply SKWIC -- a variant of the classical K-means algorithm with automatic feature weighting mechanism to cluster these reads represented by topic distributions. Experiments show that the new method TM-MCluster outperforms major existing methods, including AbundanceBin, MetaCluster 3.0/5.0 and MCluster. This result indicates that the exploitation of topic modeling can effectively improve the binning performance of metagenomic reads.",
"title": ""
},
{
"docid": "46cd71806e85374c36bc77ea28293ecb",
"text": "In this paper we introduce a novel collapsed Gibbs sampling method for the widely used latent Dirichlet allocation (LDA) model. Our new method results in significant speedups on real world text corpora. Conventional Gibbs sampling schemes for LDA require O(K) operations per sample where K is the number of topics in the model. Our proposed method draws equivalent samples but requires on average significantly less then K operations per sample. On real-word corpora FastLDA can be as much as 8 times faster than the standard collapsed Gibbs sampler for LDA. No approximations are necessary, and we show that our fast sampling scheme produces exactly the same results as the standard (but slower) sampling scheme. Experiments on four real world data sets demonstrate speedups for a wide range of collection sizes. For the PubMed collection of over 8 million documents with a required computation time of 6 CPU months for LDA, our speedup of 5.7 can save 5 CPU months of computation.",
"title": ""
},
{
"docid": "99549d037b403f78f273b3c64181fd21",
"text": "From social media has emerged continuous needs for automatic travel recommendations. Collaborative filtering (CF) is the most well-known approach. However, existing approaches generally suffer from various weaknesses. For example , sparsity can significantly degrade the performance of traditional CF. If a user only visits very few locations, accurate similar user identification becomes very challenging due to lack of sufficient information for effective inference. Moreover, existing recommendation approaches often ignore rich user information like textual descriptions of photos which can reflect users' travel preferences. The topic model (TM) method is an effective way to solve the “sparsity problem,” but is still far from satisfactory. In this paper, an author topic model-based collaborative filtering (ATCF) method is proposed to facilitate comprehensive points of interest (POIs) recommendations for social users. In our approach, user preference topics, such as cultural, cityscape, or landmark, are extracted from the geo-tag constrained textual description of photos via the author topic model instead of only from the geo-tags (GPS locations). Advantages and superior performance of our approach are demonstrated by extensive experiments on a large collection of data.",
"title": ""
},
{
"docid": "209de57ac23ab35fa731b762a10f782a",
"text": "Although fully generative models have been successfully used to model the contents of text documents, they are often awkward to apply to combinations of text data and document metadata. In this paper we propose a Dirichlet-multinomial regression (DMR) topic model that includes a log-linear prior on document-topic distributions that is a function of observed features of the document, such as author, publication venue, references, and dates. We show that by selecting appropriate features, DMR topic models can meet or exceed the performance of several previously published topic models designed for specific data.",
"title": ""
}
] |
[
{
"docid": "b21731976ad0218896682fe236fa42c6",
"text": "In recent years there has been an exponential rise in the number of studies employing transcranial direct current stimulation (tDCS) as a means of gaining a systems-level understanding of the cortical substrates underlying behaviour. These advances have allowed inferences to be made regarding the neural operations that shape perception, cognition, and action. Here we summarise how tDCS works, and show how research using this technique is expanding our understanding of the neural basis of cognitive and motor training. We also explain how oscillatory tDCS can elucidate the role of fluctuations in neural activity, in both frequency and phase, in perception, learning, and memory. Finally, we highlight some key methodological issues for tDCS and suggest how these can be addressed.",
"title": ""
},
{
"docid": "f60186d137156ba97a6a04c1b960d1a0",
"text": "One of the core performing courses in institutions for pre-school teacher training is simultaneous singing and piano playing. To ensure sufficient training hours, it is important to improve teaching methods. As a way to improve the teaching of simultaneous signing and piano playing in a large class, we have incorporated blended learning, in which students are required (1) to submit videos of their performance, and (2) to view and study e-learning materials. We have analyzed how each of these requirements improved students !Gperformance skills in singing and piano playing, and found that they substantially reduce the time required for individual lessons.",
"title": ""
},
{
"docid": "5dc9e4d518ba502492f8af7f6a3506f4",
"text": "Extraction of map objects such roads, rivers and buildings from high resolution satellite imagery is an important task in many civilian and military applications. We present a semi-automatic approach for road detection that achieves high accuracy and efficiency. This method exploits the properties of road segments to develop customized operators to accurately derive the road segments. The customized operators include directional morphological enhancement, directional segmentation and thinning. We have systematically evaluated the algorithm on a variety of images from IKONOS, QuickBird, CARTOSAT-2A satellites and carefully compared it with the techniques presented in literature. The results demonstrate that the algorithm proposed is both accurate and efficient.",
"title": ""
},
{
"docid": "b6d8ba656a85955be9b4f34b07f54987",
"text": "In real-world data, e.g., from Web forums, text is often contaminated with redundant or irrelevant content, which leads to introducing noise in machine learning algorithms. In this paper, we apply Long Short-Term Memory networks with an attention mechanism, which can select important parts of text for the task of similar question retrieval from community Question Answering (cQA) forums. In particular, we use the attention weights for both selecting entire sentences and their subparts, i.e., word/chunk, from shallow syntactic trees. More interestingly, we apply tree kernels to the filtered text representations, thus exploiting the implicit features of the subtree space for learning question reranking. Our results show that the attention-based pruning allows for achieving the top position in the cQA challenge of SemEval 2016, with a relatively large gap from the other participants while greatly decreasing running time.",
"title": ""
},
{
"docid": "4b0eec16de82592d1f7c715ad25905a9",
"text": "We present a computational model for solving Raven’s Progressive Matrices. This model combines qualitative spatial representations with analogical comparison via structuremapping. All representations are automatically computed by the model. We show that it achieves a level of performance on the Standard Progressive Matrices that is above that of most adults, and that the problems it fails on are also the hardest for people.",
"title": ""
},
{
"docid": "67bbd10e1ed9201fb589e16c58ae76ce",
"text": "Author name disambiguation has been one of the hardest problems faced by digital libraries since their early days. Historically, supervised solutions have empirically outperformed those based on heuristics, but with the burden of having to rely on manually labeled training sets for the learning process. Moreover, most supervised solutions just apply some type of generic machine learning solution and do not exploit specific knowledge about the problem. In this article, we follow a similar reasoning, but in the opposite direction. Instead of extending an existing supervised solution, we propose a set of carefully designed heuristics and similarity functions, and apply supervision only to optimize such parameters for each particular dataset. As our experiments show, the result is a very effective, efficient and practical author name disambiguation method that can be used in many different scenarios. In fact, we show that our method can beat state-of-the-art supervised methods in terms of effectiveness in many situations while being orders of magnitude faster. It can also run without any training information, using only default parameters, and still be very competitive when compared to these supervised methods (beating several of them) and better than most existing unsupervised author name disambiguation solutions.",
"title": ""
},
{
"docid": "e85cf5b993cc4d82a1dea47f9ce5d18b",
"text": "We recently proposed an approach inspired by Sparse Component Analysis for real-time localisation of multiple sound sources using a circular microphone array. The method was based on identifying time-frequency zones where only one source is active, reducing the problem to single-source localisation in these zones. A histogram of estimated Directions of Arrival (DOAs) was formed and then processed to obtain improved DOA estimates, assuming that the number of sources was known. In this paper, we extend our previous work by proposing a new method for the final DOA estimations, that outperforms our previous method at lower SNRs and in the case of six simultaneous speakers. In keeping with the spirit of our previous work, the new method is very computationally efficient, facilitating its use in real-time systems.",
"title": ""
},
{
"docid": "c1538df6d2aa097d5c4a8c4fc7e42d01",
"text": "During the First International EEG Congress, London in 1947, it was recommended that Dr. Herbert H. Jasper study methods to standardize the placement of electrodes used in EEG (Jasper 1958). A report with recommendations was to be presented to the Second International Congress in Paris in 1949. The electrode placement systems in use at various centers were found to be similar, with only minor differences, although their designations, letters and numbers were entirely different. Dr. Jasper established some guidelines which would be established in recommending a speci®c system to the federation and these are listed below.",
"title": ""
},
{
"docid": "c676d1a252a26d7a803d5f81c5787f69",
"text": "Ravi.P Head, Department of computer science, Govt .Arts College for Women, Ramanathapuram E-Mail: [email protected] Tamilselvi.S Department of computer science, Govt .Arts College for Women, Ramanathapuram E-Mail: [email protected] -------------------------------------------------------------------ABSTRACT--------------------------------------------------------Images are an important form of data and are used in almost every application. Images occupy large amount of memory space. Image compression is most essential requirement for efficient utilization of storage space and transmission bandwidth. Image compression technique involves reducing the size of the image without degrading the quality of the image. A restriction on these methods is the high computational cost of image compression. Ant colony optimization is applied for image compression. An analogy with the real ants' behavior was presented as a new paradigm called Ant Colony Optimization (ACO). ACO is Probabilistic technique for Searching for optimal path in the graph based on behavior of ants seeking a path between their colony and source of food. The main features of ACO are the fast search of good solutions, parallel work and use of heuristic information, among others. Ant colony optimization (ACO) is a technique which can be used for various applications. This paper provides an insight optimization techniques used for image compression like Ant Colony Optimization (ACO) algorithm.",
"title": ""
},
{
"docid": "5b5d4c33a600d93b8b999a51318980da",
"text": "In this work, we focused on liveness detection for facial recognition system's spoofing via fake face movement. We have developed a pupil direction observing system for anti-spoofing in face recognition systems using a basic hardware equipment. Firstly, eye area is being extracted from real time camera by using Haar-Cascade Classifier with specially trained classifier for eye region detection. Feature points have extracted and traced for minimizing person's head movements and getting stable eye region by using Kanade-Lucas-Tomasi (KLT) algorithm. Eye area is being cropped from real time camera frame and rotated for a stable eye area. Pupils are extracted from eye area by using a new improved algorithm subsequently. After a few stable number of frames that has pupils, proposed spoofing algorithm selects a random direction and sends a signal to Arduino to activate that selected direction's LED on a square frame that has totally eight LEDs for each direction. After chosen LED has been activated, eye direction is observed whether pupil direction and LED's position matches. If the compliance requirement is satisfied, algorithm returns data that contains liveness information. Complete algorithm for liveness detection using pupil tracking is tested on volunteers and algorithm achieved high success ratio.",
"title": ""
},
{
"docid": "4017461db56ebe986c3cdf9eec11826a",
"text": "Software Defined Networking (SDN) is a promising paradigm to provide centralized traffic control. Multimedia traffic control based on SDN is crucial but challenging for Quality of Experience (QoE) optimization. It is very difficult to model and control multimedia traffic because solutions mainly depend on an understanding of the network environment, which is complicated and dynamic. Inspired by the recent advances in artificial intelligence (AI) technologies, we study the adaptive multimedia traffic control mechanism leveraging Deep Reinforcement Learning (DRL). This paradigm combines deep learning with reinforcement learning, which learns solely from rewards by trial-and-error. Results demonstrate that the proposed mechanism is able to control multimedia traffic directly from experience without referring to a mathematical model.",
"title": ""
},
{
"docid": "1ac1c6f30b0a306b7c9f643f83fb4731",
"text": "As a bridge to connect vision and language, visual relations between objects in the form of relation triplet $łangle subject,predicate,object\\rangle$, such as \"person-touch-dog'' and \"cat-above-sofa'', provide a more comprehensive visual content understanding beyond objects. In this paper, we propose a novel vision task named Video Visual Relation Detection (VidVRD) to perform visual relation detection in videos instead of still images (ImgVRD). As compared to still images, videos provide a more natural set of features for detecting visual relations, such as the dynamic relations like \"A-follow-B'' and \"A-towards-B'', and temporally changing relations like \"A-chase-B'' followed by \"A-hold-B''. However, VidVRD is technically more challenging than ImgVRD due to the difficulties in accurate object tracking and diverse relation appearances in video domain. To this end, we propose a VidVRD method, which consists of object tracklet proposal, short-term relation prediction and greedy relational association. Moreover, we contribute the first dataset for VidVRD evaluation, which contains 1,000 videos with manually labeled visual relations, to validate our proposed method. On this dataset, our method achieves the best performance in comparison with the state-of-the-art baselines.",
"title": ""
},
{
"docid": "e8638ac34f416ac74e8e77cdc206ef04",
"text": "The modular multilevel converter (M2C) has become an increasingly important topology in medium- and high-voltage applications. A limitation is that it relies on positive and negative half-cycles of the ac output voltage waveform to achieve charge balance on the submodule capacitors. To overcome this constraint a secondary power loop is introduced that exchanges power with the primary power loops at the input and output. Power is exchanged between the primary and secondary loops by using the principle of orthogonality of power flow at different frequencies. Two modular multilevel topologies are proposed to step up or step down dc in medium- and high-voltage dc applications: the tuned filter modular multilevel dc converter and the push-pull modular multilevel dc converter. An analytical simulation of the latter converter is presented to explain the operation.",
"title": ""
},
{
"docid": "1be78e7bc2998d7ff7103c36c9c9528f",
"text": "Gate assignment is an important decision making problem which involves multiple and conflict objectives in airport. In this paper, fuzzy model is proposed to handle two main objectives, minimizing the total walking distance for passengers and maximizing the robustness of assignment. The idle times of flight-to-gate are regarded as fuzzy variables, and whose membership degrees are used to express influence on robustness of assignment. Adjustment function on membership degree is introduced to transfer two objectives into one. Modified genetic algorithm is adopted to optimize the NP-hard problem. Finally, illustrative example is given to evaluate the performance of fuzzy model. Three distribution functions are tested?? and comparison with the method of fixed buffer time is given. Simulation results demonstrate the feasibility and effectiveness of proposed fuzzy method.",
"title": ""
},
{
"docid": "c8d5ca95f6cd66461729cfc03772f5d0",
"text": "Statistical relationalmodels combine aspects of first-order logic andprobabilistic graphical models, enabling them to model complex logical and probabilistic interactions between large numbers of objects. This level of expressivity comes at the cost of increased complexity of inference, motivating a new line of research in lifted probabilistic inference. By exploiting symmetries of the relational structure in themodel, and reasoning about groups of objects as awhole, lifted algorithms dramatically improve the run time of inference and learning. The thesis has five main contributions. First, we propose a new method for logical inference, called first-order knowledge compilation. We show that by compiling relational models into a new circuit language, hard inference problems become tractable to solve. Furthermore, we present an algorithm that compiles relational models into our circuit language. Second, we show how to use first-order knowledge compilation for statistical relational models, leading to a new state-of-the-art lifted probabilistic inference algorithm. Third, we develop a formal framework for exact lifted inference, including a definition in terms of its complexity w.r.t. the number of objects in the world. From this follows a first completeness result, showing that the two-variable class of statistical relational models always supports lifted inference. Fourth, we present an algorithm for",
"title": ""
},
{
"docid": "a3db8f51d9dfa6608677d63492d2fb6f",
"text": "In this article, we introduce nonlinear versions of the popular structure tensor, also known as second moment matrix. These nonlinear structure tensors replace the Gaussian smoothing of the classical structure tensor by discontinuity-preserving nonlinear diffusions. While nonlinear diffusion is a well-established tool for scalar and vector-valued data, it has not often been used for tensor images so far. Two types of nonlinear diffusion processes for tensor data are studied: an isotropic one with a scalar-valued diffusivity, and its anisotropic counterpart with a diffusion tensor. We prove that these schemes preserve the positive semidefiniteness of a matrix field and are, therefore, appropriate for smoothing structure tensor fields. The use of diffusivity functions of total variation (TV) type allows us to construct nonlinear structure tensors without specifying additional parameters compared to the conventional structure tensor. The performance of nonlinear structure tensors is demonstrated in three fields where the classic structure tensor is frequently used: orientation estimation, optic flow computation, and corner detection. In all these cases, the nonlinear structure tensors demonstrate their superiority over the classical linear one. Our experiments also show that for corner detection based on nonlinear structure tensors, anisotropic nonlinear tensors give the most precise localisation. q 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2c90d3f1c3ecd89b3e76d93afebd2371",
"text": "Crowdsourcing is an arising collaborative approach applicable among many other applications to the area of language and speech processing. In fact, the use of crowdsourcing was already applied in the field of speech processing with promising results. However, only few studies investigated the use of crowdsourcing in computational paralinguistics. In this contribution, we propose a novel evaluator for crowdsourced-based ratings termed Weighted Trustability Evaluator (WTE) which is computed from the rater-dependent consistency over the test questions. We further investigate the reliability of crowdsourced annotations as compared to the ones obtained with traditional labelling procedures, such as constrained listening experiments in laboratories or in controlled environments. This comparison includes an in-depth analysis of obtainable classification performances. The experiments were conducted on the Speaker Likability Database (SLD) already used in the INTERSPEECH Challenge 2012, and the results lend further weight to the assumption that crowdsourcing can be applied as a reliable annotation source for computational paralinguistics given a sufficient number of raters and suited measurements of their reliability.",
"title": ""
},
{
"docid": "32f2416b74baa4b35f853c21c75bbf90",
"text": "In recent years, deep neural networks have emerged as a dominant machine learning tool for a wide variety of application domains. However, training a deep neural network requires a large amount of labeled data, which is an expensive process in terms of time, labor and human expertise. Domain adaptation or transfer learning algorithms address this challenge by leveraging labeled data in a different, but related source domain, to develop a model for the target domain. Further, the explosive growth of digital data has posed a fundamental challenge concerning its storage and retrieval. Due to its storage and retrieval efficiency, recent years have witnessed a wide application of hashing in a variety of computer vision applications. In this paper, we first introduce a new dataset, Office-Home, to evaluate domain adaptation algorithms. The dataset contains images of a variety of everyday objects from multiple domains. We then propose a novel deep learning framework that can exploit labeled source data and unlabeled target data to learn informative hash codes, to accurately classify unseen target data. To the best of our knowledge, this is the first research effort to exploit the feature learning capabilities of deep neural networks to learn representative hash codes to address the domain adaptation problem. Our extensive empirical studies on multiple transfer tasks corroborate the usefulness of the framework in learning efficient hash codes which outperform existing competitive baselines for unsupervised domain adaptation.",
"title": ""
},
{
"docid": "ad4596e24f157653a36201767d4b4f3b",
"text": "We present a character-based model for joint segmentation and POS tagging for Chinese. The bidirectional RNN-CRF architecture for general sequence tagging is adapted and applied with novel vector representations of Chinese characters that capture rich contextual information and lower-than-character level features. The proposed model is extensively evaluated and compared with a state-of-the-art tagger respectively on CTB5, CTB9 and UD Chinese. The experimental results indicate that our model is accurate and robust across datasets in different sizes, genres and annotation schemes. We obtain stateof-the-art performance on CTB5, achieving 94.38 F1-score for joint segmentation and POS tagging.",
"title": ""
},
{
"docid": "03b4b786ba40b4c631fe679b591880aa",
"text": "The abundance of user-generated data in social media has incentivized the development of methods to infer the latent attributes of users, which are crucially useful for personalization, advertising and recommendation. However, the current user profiling approaches have limited success, due to the lack of a principled way to integrate different types of social relationships of a user, and the reliance on scarcely-available labeled data in building a prediction model. In this paper, we present a novel solution termed Collective Semi-Supervised Learning (CSL), which provides a principled means to integrate different types of social relationship and unlabeled data under a unified computational framework. The joint learning from multiple relationships and unlabeled data yields a computationally sound and accurate approach to model user attributes in social media. Extensive experiments using Twitter data have demonstrated the efficacy of our CSL approach in inferring user attributes such as account type and marital status. We also show how CSL can be used to determine important user features, and to make inference on a larger user population.",
"title": ""
}
] |
scidocsrr
|
e6daa51a4ccdd300fbcba652271e3acb
|
Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions
|
[
{
"docid": "d7793313ab21020e79e41817b8372ee8",
"text": "We present a new approach to referring expression generation, casting it as a density estimation problem where the goal is to learn distributions over logical expressions identifying sets of objects in the world. Despite an extremely large space of possible expressions, we demonstrate effective learning of a globally normalized log-linear distribution. This learning is enabled by a new, multi-stage approximate inference technique that uses a pruning model to construct only the most likely logical forms. We train and evaluate the approach on a new corpus of references to sets of visual objects. Experiments show the approach is able to learn accurate models, which generate over 87% of the expressions people used. Additionally, on the previously studied special case of single object reference, we show a 35% relative error reduction over previous state of the art.",
"title": ""
},
{
"docid": "6664ed79a911247b401a4bd0b2cc619c",
"text": "Extracting good representations from images is essential for many computer vision tasks. In this paper, we propose hierarchical matching pursuit (HMP), which builds a feature hierarchy layer-by-layer using an efficient matching pursuit encoder. It includes three modules: batch (tree) orthogonal matching pursuit, spatial pyramid max pooling, and contrast normalization. We investigate the architecture of HMP, and show that all three components are critical for good performance. To speed up the orthogonal matching pursuit, we propose a batch tree orthogonal matching pursuit that is particularly suitable to encode a large number of observations that share the same large dictionary. HMP is scalable and can efficiently handle full-size images. In addition, HMP enables linear support vector machines (SVM) to match the performance of nonlinear SVM while being scalable to large datasets. We compare HMP with many state-of-the-art algorithms including convolutional deep belief networks, SIFT based single layer sparse coding, and kernel based feature learning. HMP consistently yields superior accuracy on three types of image classification problems: object recognition (Caltech-101), scene recognition (MIT-Scene), and static event recognition (UIUC-Sports).",
"title": ""
}
] |
[
{
"docid": "1a4cb9038d3bd71ecd24187ed860e0f7",
"text": "One of the most important fields in discrete mathematics is graph theory. Graph theory is discrete structures, consisting of vertices and edges that connect these vertices. Problems in almost every conceivable discipline can be solved using graph models. The field graph theory started its journey from the problem of Konigsberg Bridges in 1735. This paper is a guide for the applied mathematician who would like to know more about network security, cryptography and cyber security based of graph theory. The paper gives a brief overview of the subject and the applications of graph theory in computer security, and provides pointers to key research and recent survey papers in the area.",
"title": ""
},
{
"docid": "00eb132ce5063dd983c0c36724f82cec",
"text": "This paper analyzes customer product-choice behavior based on the recency and frequency of each customer’s page views on e-commerce sites. Recently, we devised an optimization model for estimating product-choice probabilities that satisfy monotonicity, convexity, and concavity constraints with respect to recency and frequency. This shape-restricted model delivered high predictive performance even when there were few training samples. However, typical e-commerce sites deal in many different varieties of products, so the predictive performance of the model can be further improved by integration of such product heterogeneity. For this purpose, we develop a novel latent-class shape-restricted model for estimating product-choice probabilities for each latent class of products. We also give a tailored expectation-maximization algorithm for parameter estimation. Computational results demonstrate that higher predictive performance is achieved with our latent-class model than with the previous shape-restricted model and common latent-class logistic regression.",
"title": ""
},
{
"docid": "66e7979aff5860f713dffd10e98eed3d",
"text": "The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation.1",
"title": ""
},
{
"docid": "31da7b5b403ca92dde4d4c590a900aa1",
"text": "In this paper, a new approach for moving an inpipe robot inside underground urban gas pipelines is proposed. Since the urban gas supply system is composed of complicated configurations of pipelines, the inpipe inspection requires a robot with outstanding mobility and corresponding control algorithms to apply for. In advance, this paper introduces a new miniature miniature inpipe robot, called MRINSPECT (Multifunctional Robotic crawler for INpipe inSPECTion) IV, which has been developed for the inspection of urban gas pipelines with a nominal 4-inch inside diameter. Its mechanism for steering with differential–drive wheels arranged three-dimensionally makes itself easily adjust to most pipeline configurations and provides excellent mobility in navigation. Also, analysis for pipelines with fittings are given in detail and geometries of the fittings are mathematically described. It is prerequisite to estimate moving pattern of the robot while passing through the fittings and based on the analysis, a method modulating speed of each drive wheel is proposed. Though modulation of speed is very important during proceeding thought the fittings, it is not easy to control the speeds because each wheel of the robot has contact with the walls having different curvatures. A new and simple way of controlling the speed is developed based on the analysis of the geometrical features of the fittings. This algorithm has the advantage to be applicable without using complicated sensor information. To confirm the effectiveness of the proposed method experiments are performed and additional considerations for the design of an inpipe robot are discussed.",
"title": ""
},
{
"docid": "e525a752409edc5165cfafed08ec6e57",
"text": "In this paper, we propose a recurrent neural network architecture for early sequence classification, when the model is required to output a label as soon as possible with negligible decline in accuracy. Our model is capable of learning how many sequence tokens it needs to observe in order to make a prediction; moreover, the number of steps required differs for each sequence. Experiments on sequential MNIST show that the proposed architecture focuses on different sequence parts during inference, which correspond to contours of the handwritten digits. We also demonstrate the improvement in the prediction quality with a simultaneous reduction in the prefix size used, the extent of which depends on the distribution of distinct class features over time.",
"title": ""
},
{
"docid": "db95a67e1c532badd3ec97a31170bb0c",
"text": "The named entity recognition task aims at identifying and classifying named entities within an open-domain text. This task has been garnering significant attention recently as it has been shown to help improve the performance of many natural language processing applications. In this paper, we investigate the impact of using different sets of features in three discriminative machine learning frameworks, namely, support vector machines, maximum entropy and conditional random fields for the task of named entity recognition. Our language of interest is Arabic. We explore lexical, contextual and morphological features and nine data-sets of different genres and annotations. We measure the impact of the different features in isolation and incrementally combine them in order to evaluate the robustness to noise of each approach. We achieve the highest performance using a combination of 15 features in conditional random fields using broadcast news data (Fbeta = 1=83.34).",
"title": ""
},
{
"docid": "2f20f587bb46f7133900fd8c22cea3ab",
"text": "Recent years have witnessed the significant advance in fine-grained visual categorization, which targets to classify the objects belonging to the same species. To capture enough subtle visual differences and build discriminative visual description, most of the existing methods heavily rely on the artificial part annotations, which are expensive to collect in real applications. Motivated to conquer this issue, this paper proposes a multi-level coarse-to-fine object description. This novel description only requires the original image as input, but could automatically generate visual descriptions discriminative enough for fine-grained visual categorization. This description is extracted from five sources representing coarse-to-fine visual clues: 1) original image is used as the source of global visual clue; 2) object bounding boxes are generated using convolutional neural network (CNN); 3) with the generated bounding box, foreground is segmented using the proposed k nearest neighbour-based co-segmentation algorithm; and 4) two types of part segmentations are generated by dividing the foreground with an unsupervised part learning strategy. The final description is generated by feeding these sources into CNN models and concatenating their outputs. Experiments on two public benchmark data sets show the impressive performance of this coarse-to-fine description, i.e., classification accuracy achieves 82.5% on CUB-200-2011, and 86.9% on fine-grained visual categorization-Aircraft, respectively, which outperform many recent works.",
"title": ""
},
{
"docid": "14739a86487a26452bd73da11264b9e4",
"text": "This paper presents a systematic online prediction method (Social-Forecast) that is capable to accurately forecast the popularity of videos promoted by social media. Social-Forecast explicitly considers the dynamically changing and evolving propagation patterns of videos in social media when making popularity forecasts, thereby being situation and context aware. Social-Forecast aims to maximize the forecast reward, which is defined as a tradeoff between the popularity prediction accuracy and the timeliness with which a prediction is issued. The forecasting is performed online and requires no training phase or a priori knowledge. We analytically bound the prediction performance loss of Social-Forecast as compared to that obtained by an omniscient oracle and prove that the bound is sublinear in the number of video arrivals, thereby guaranteeing its short-term performance as well as its asymptotic convergence to the optimal performance. In addition, we conduct extensive experiments using real-world data traces collected from the videos shared in RenRen, one of the largest online social networks in China. These experiments show that our proposed method outperforms existing view-based approaches for popularity prediction (which are not context-aware) by more than 30% in terms of prediction rewards.",
"title": ""
},
{
"docid": "08b01274311a5c07d726171f52a8513e",
"text": "This paper presents a brief introduction to Vapnik-Chervonenkis (VC) dimension, a quantity which characterizes the difficulty of distribution-independent learning. The paper establishes various elementary results, and discusses how to estimate the VC dimension in several examples of interest in neural network theory.",
"title": ""
},
{
"docid": "7fe2fa777e4206d7a57e785369e98aba",
"text": "A new class of three-dimensional (3-D) bandpass frequency-selective structures (FSSs) with multiple transmission zeros is presented to realize wide out-of-band rejection. The proposed FSSs are based on a two-dimensional (2-D) array of shielded microstrip lines with shorting via to ground, where two different resonators in the substrate are constructed based on the excited substrate mode. Furthermore, metallic plates of rectangular shape and “T-type” are inserted in the air region of shielded microstrip lines, which can introduce additional resonators provided by the air mode. Using this arrangement, a passband with two transmission poles can be obtained. Moreover, multiple transmission zeros outside the passband are produced for improving the out-of-band rejection. The operating principles of these FSSs are explained with the aid of equivalent circuit models. Two examples are designed, fabricated, and measured to verify the proposed structures and circuit models. Measured results demonstrate that the FSSs exhibit high out-of-band rejection and stable filtering response under a large variation of the incidence angle.",
"title": ""
},
{
"docid": "471eca6664d0ae8f6cdfb848bc910592",
"text": "Taxonomic relation identification aims to recognize the ‘is-a’ relation between two terms. Previous works on identifying taxonomic relations are mostly based on statistical and linguistic approaches, but the accuracy of these approaches is far from satisfactory. In this paper, we propose a novel supervised learning approach for identifying taxonomic relations using term embeddings. For this purpose, we first design a dynamic weighting neural network to learn term embeddings based on not only the hypernym and hyponym terms, but also the contextual information between them. We then apply such embeddings as features to identify taxonomic relations using a supervised method. The experimental results show that our proposed approach significantly outperforms other state-of-the-art methods by 9% to 13% in terms of accuracy for both general and specific domain datasets.",
"title": ""
},
{
"docid": "031dbd65ecb8d897d828cd5d904059c1",
"text": "Especially in ill-defined problems like complex, real-world tasks more than one way leads to a solution. Until now, the evaluation of information visualizations was often restricted to measuring outcomes only (time and error) or insights into the data set. A more detailed look into the processes which lead to or hinder task completion is provided by analyzing users' problem solving strategies. A study illustrates how they can be assessed and how this knowledge can be used in participatory design to improve a visual analytics tool. In order to provide the users a tool which functions as a real scaffold, it should allow them to choose their own path to Rome. We discuss how evaluation of problem solving strategies can shed more light on the users' \"exploratory minds\".",
"title": ""
},
{
"docid": "aecaa8c028c4d1098d44d755344ad2fc",
"text": "It is known that training deep neural networks, in particular, deep convolutional networks, with aggressively reduced numerical precision is challenging. The stochastic gradient descent algorithm becomes unstable in the presence of noisy gradient updates resulting from arithmetic with limited numeric precision. One of the wellaccepted solutions facilitating the training of low precision fixed point networks is stochastic rounding. However, to the best of our knowledge, the source of the instability in training neural networks with noisy gradient updates has not been well investigated. This work is an attempt to draw a theoretical connection between low numerical precision and training algorithm stability. In doing so, we will also propose and verify through experiments methods that are able to improve the training performance of deep convolutional networks in fixed point.",
"title": ""
},
{
"docid": "375de005698ccaf54d7b82875f1f16c5",
"text": "This paper describes design, Simulation and manufacturing procedures of HIRAD - a teleoperated Tracked Surveillance UGV for military, Rescue and other civilian missions in various hazardous environments. A Double Stabilizer Flipper mechanism mounted on front pulleys enables the Robot to have good performance in travelling over uneven terrains and climbing stairs. Using this Stabilizer flipper mechanism reduces energy consumption while climbing the stairs or crossing over obstacles. The locomotion system mechanical design is also described in detail. The CAD geometry 3D-model has been produced by CATIA software. To analyze the system mobility, a virtual model was developed with ADAMS Software. This simulation included different mobility maneuvers such as stair climbing, gap crossing and travelling over steep slopes. The simulations enabled us to define motor torque requirements. We performed many experiments with manufactured prototype under various terrain conditions Such as stair climbing, gap crossing and slope elevation. In experiments, HIRAD shows good overcoming ability for the tested terrain conditions.",
"title": ""
},
{
"docid": "90a3dd2bc75817a49a408e7666660e29",
"text": "RATIONALE\nPulmonary arterial hypertension (PAH) is an orphan disease for which the trend is for management in designated centers with multidisciplinary teams working in a shared-care approach.\n\n\nOBJECTIVE\nTo describe clinical and hemodynamic parameters and to provide estimates for the prevalence of patients diagnosed for PAH according to a standardized definition.\n\n\nMETHODS\nThe registry was initiated in 17 university hospitals following at least five newly diagnosed patients per year. All consecutive adult (> or = 18 yr) patients seen between October 2002 and October 2003 were to be included.\n\n\nMAIN RESULTS\nA total of 674 patients (mean +/- SD age, 50 +/- 15 yr; range, 18-85 yr) were entered in the registry. Idiopathic, familial, anorexigen, connective tissue diseases, congenital heart diseases, portal hypertension, and HIV-associated PAH accounted for 39.2, 3.9, 9.5, 15.3, 11.3, 10.4, and 6.2% of the population, respectively. At diagnosis, 75% of patients were in New York Heart Association functional class III or IV. Six-minute walk test was 329 +/- 109 m. Mean pulmonary artery pressure, cardiac index, and pulmonary vascular resistance index were 55 +/- 15 mm Hg, 2.5 +/- 0.8 L/min/m(2), and 20.5 +/- 10.2 mm Hg/L/min/m(2), respectively. The low estimates of prevalence and incidence of PAH in France were 15.0 cases/million of adult inhabitants and 2.4 cases/million of adult inhabitants/yr. One-year survival was 88% in the incident cohort.\n\n\nCONCLUSIONS\nThis contemporary registry highlights current practice and shows that PAH is detected late in the course of the disease, with a majority of patients displaying severe functional and hemodynamic compromise.",
"title": ""
},
{
"docid": "0575f79872ffd036d48efa731bc451e1",
"text": "When learning a new concept, not all training examples may prove equally useful for training: some may have higher or lower training value than others. The goal of this paper is to bring to the attention of the vision community the following considerations: (1) some examples are better than others for training detectors or classifiers, and (2) in the presence of better examples, some examples may negatively impact performance and removing them may be beneficial. In this paper, we propose an approach for measuring the training value of an example, and use it for ranking and greedily sorting examples. We test our methods on different vision tasks, models, datasets and classifiers. Our experiments show that the performance of current state-of-the-art detectors and classifiers can be improved when training on a subset, rather than the whole training set.",
"title": ""
},
{
"docid": "da7d45d2cbac784d31e4d3957f4799e6",
"text": "Malicious Uniform Resource Locator (URL) detection is an important problem in web search and mining, which plays a critical role in internet security. In literature, many existing studies have attempted to formulate the problem as a regular supervised binary classification task, which typically aims to optimize the prediction accuracy. However, in a real-world malicious URL detection task, the ratio between the number of malicious URLs and legitimate URLs is highly imbalanced, making it very inappropriate for simply optimizing the prediction accuracy. Besides, another key limitation of the existing work is to assume a large amount of training data is available, which is impractical as the human labeling cost could be potentially quite expensive. To solve these issues, in this paper, we present a novel framework of Cost-Sensitive Online Active Learning (CSOAL), which only queries a small fraction of training data for labeling and directly optimizes two cost-sensitive measures to address the class-imbalance issue. In particular, we propose two CSOAL algorithms and analyze their theoretical performance in terms of cost-sensitive bounds. We conduct an extensive set of experiments to examine the empirical performance of the proposed algorithms for a large-scale challenging malicious URL detection task, in which the encouraging results showed that the proposed technique by querying an extremely small-sized labeled data (about 0.5% out of 1-million instances) can achieve better or highly comparable classification performance in comparison to the state-of-the-art cost-insensitive and cost-sensitive online classification algorithms using a huge amount of labeled data.",
"title": ""
},
{
"docid": "96fbd665c43461b7cd8bbbe1f0aa43e4",
"text": "Inductor current sensing is becoming widely used in current programmed controllers for microprocessor applications. This method exploits a low-pass filter in parallel with the inductor to provide lossless current sense. A major drawback of inductor current sensing is that accurate sense the DC and AC components of the current signal requires precise matching between the low-pass filter time constant and the inductor time constant (L/RL). However, matching accuracy depends on the tolerance of the components and on the operating conditions; therefore it can hardly be guaranteed. To overcome this problem, a novel digital auto-tuning system is proposed that automatically compensates any time constant mismatch. This auto-tuning system has been developed for VRM current programmed controllers. It makes it possible to meet the adaptive voltage positioning requirements using conventional and low cost components, and to solve problems such as aging effects, temperature variations and process tolerances as well. A prototype of the auto-tuning system based on an FPGA and a commercial DC/DC controller has been designed and tested. The experimental results fully confirmed the effectiveness of the proposed method, showing an improvement of the current sense precision from about 30% up to 4%. This innovative solution is suitable to fulfill the challenging accuracy specifications required by the future VRM applications",
"title": ""
},
{
"docid": "66255dc6c741737b3576e7ddefec96ce",
"text": "Neural Machine Translation (NMT) with source side attention have achieved remarkable performance. however, there has been little work exploring to attend to the target side which can potentially enhance the memory capbility of NMT. We reformulate a Decoding-History Enhanced Attention mechanism (DHEA) to render NMT model better at selecting both source side and target side information. DHEA enables a dynamic control on the ratios at which source and target contexts contribute to the generation of target words, offering a way to weakly induce structure relations among both source and target tokens. It also allows training errors to be directly back-propagated through short-cut connections and effectively alleviates the gradient vanishing problem. The empirical study on Chinese-English translation shows that our model with proper configuration can improve by 0.9 BLEU upon Transformer and achieve the best reported results in the same dataset. On WMT14 English-German task and a larger WMT14 English-French task, our model achieves comparable results with the state-of-the-art NMT systems.",
"title": ""
},
{
"docid": "f1dc40c02d162988ca118c6e4d15ad06",
"text": "Spheres are popular geometric primitives found in many manufactured objects. However, sphere fitting and extraction have not been investigated in depth. In this paper, a robust method is proposed to extract multiple spheres accurately and simultaneously from unorganized point clouds. Moreover, a novel validation step is presented to assess the quality of the detected spheres, which help remove the confusion between perfect spheres and sphere-like shapes such as ellipsoids and paraboloids. A novel sampling strategy is introduced to reduce computational burden for sphere extraction. Experiments on both synthetic and scanned point clouds with different levels of noise and outliers are conducted and the results compared to state-of-the-art methods. These experiments demonstrate the efficiency and robustness of the proposed sphere extraction method.",
"title": ""
}
] |
scidocsrr
|
a97813f7695b044e2538b92cbaa58f34
|
Cost-Effective Resource Provisioning for MapReduce in a Cloud
|
[
{
"docid": "e4007c7e6a80006238e1211a213e391b",
"text": "Various techniques for multiprogramming parallel multiprocessor systems have been proposed recently as a way to improve performance. A natural approach is to divide the set of processing elements into independent partitions, and simultaneously execute a diierent parallel program in each partition. Several issues arise, including the determination of the optimal number of programs allowed to execute simultaneously (i.e., the number of partitions) and the corresponding partition sizes. This can be done statically, dynamically, or adaptively, depending on the system and workload characteristics. In this paper several adaptive partitioning policies are evaluated. Their behavior, as well as the behavior of static policies, is investigated using real parallel programs. The policy applicability to actual systems is addressed, and implementation results of the proposed policies on an iPSC/2 hypercube system are reported. The concept of robustness (i.e., the ability to perform well on a wide range of workload types over a wide range of arrival rates) is presented and quantiied. Relative rankings of the policies are obtained, depending on the speciic work-load characteristics. A trade-oo is shown between potential performance and the amount of knowledge of the workload characteristics required to select the best policy. A policy that performs best when such knowledge of workload parallelism and/or arrival rate is not available is proposed as the most robust of those analyzed.",
"title": ""
}
] |
[
{
"docid": "c1e0b1c318f73187c75be26f66d95632",
"text": "Newly emerged gallium nitride (GaN) devices feature ultrafast switching speed and low on-state resistance that potentially provide significant improvements for power converters. This paper investigates the benefits of GaN devices in an LLC resonant converter and quantitatively evaluates GaN devices' capabilities to improve converter efficiency. First, the relationship of device and converter design parameters to the device loss is established based on an analytical model of LLC resonant converter operating at the resonance. Due to the low effective output capacitance of GaN devices, the GaN-based design demonstrates about 50% device loss reduction compared with the Si-based design. Second, a new perspective on the extra transformer winding loss due to the asymmetrical primary-side and secondary-side current is proposed. The device and design parameters are tied to the winding loss based on the winding loss model in the finite element analysis (FEA) simulation. Compared with the Si-based design, the winding loss is reduced by 18% in the GaN-based design. Finally, in order to verify the GaN device benefits experimentally, 400- to 12-V, 300-W, 1-MHz GaN-based and Si-based LLC resonant converter prototypes are built and tested. One percent efficiency improvement, which is 24.8% loss reduction, is achieved in the GaN-based converter.",
"title": ""
},
{
"docid": "ef409ee79d73f9294daa8ac981de7a6d",
"text": "In this paper, we propose the amphibious influence maximization (AIM) model that combines traditional marketing via content providers and viral marketing to consumers in social networks in a single framework. In AIM, a set of content providers and consumers form a bipartite network while consumers also form their social network, and influence propagates from the content providers to consumers and among consumers in the social network following the independent cascade model. An advertiser needs to select a subset of seed content providers and a subset of seed consumers, such that the influence from the seed providers passing through the seed consumers could reach a large number of consumers in the social network in expectation.\n We prove that the AIM problem is NP-hard to approximate to within any constant factor via a reduction from Feige's k-prover proof system for 3-SAT5. We also give evidence that even when the social network graph is trivial (i.e. has no edges), a polynomial time constant factor approximation for AIM is unlikely. However, when we assume that the weighted bi-adjacency matrix that describes the influence of content providers on consumers is of constant rank, a common assumption often used in recommender systems, we provide a polynomial-time algorithm that achieves approximation ratio of (1-1/e-ε)3 for any (polynomially small) ε > 0. Our algorithmic results still hold for a more general model where cascades in social network follow a general monotone and submodular function.",
"title": ""
},
{
"docid": "81e0cc5f85857542c039b0c5fe80e010",
"text": "This paper proposes a pitch estimation algorithm that is based on optimal harmonic model fitting. The algorithm operates directly on the time-domain signal and has a relatively simple mathematical background. To increase its efficiency and accuracy, the algorithm is applied in combination with an autocorrelation-based initialization phase. For testing purposes we compare its performance on pitch-annotated corpora with several conventional time-domain pitch estimation algorithms, and also with a recently proposed one. The results show that even the autocorrelation-based first phase significantly outperforms the traditional methods, and also slightly the recently proposed yin algorithm. After applying the second phase – the harmonic approximation step – the amount of errors can be further reduced by about 20% relative to the error obtained in the first phase.",
"title": ""
},
{
"docid": "dd0bbc039e1bbc9e36ffe087e105cf56",
"text": "Using a comparative analysis approach, this article examines the development, characteristics and issues concerning the discourse of modern Asian art in the twentieth century, with the aim of bringing into picture the place of Asia in the history of modernism. The wide recognition of the Western modernist canon as centre and universal displaces the contribution and significance of the non-Western world in the modern movement. From a cross-cultural perspective, this article demonstrates that modernism in the field of visual arts in Asia, while has had been complex and problematic, nevertheless emerged. Rather than treating Asian art as a generalized subject, this article argues that, with their subtly different notions of culture, identity and nationhood, the modernisms that emerged from various nations in this region are diverse and culturally specific. Through the comparison of various art-historical contexts in this region (namely China, India, Japan and Korea), this article attempts to map out some similarities as well as differences in their pursuit of an autonomous modernist representation.",
"title": ""
},
{
"docid": "11004995f1ca07cd9fc721593c1c79a3",
"text": "This paper presents an efficient farfield simulation, exploiting and linking the strength of three commercial simulation tools. For many practical array and multiport antenna designs it is essential to examine the farfield for a general port excitation or termination scenario. Some examples are phased array designs, problems related to mutual coupling and scan blindness, tuning of parasitic elements, MIMO antennas and correlation. The proposed method fully characterizes the nearfield and the farfield of the antenna, so to compute farfield patterns by means of superposition for any voltage/current state at the port terminals. A recently published low-cost patch antenna phased array with analog beam steering by another group was found very suitable to demonstrate the proposed method.",
"title": ""
},
{
"docid": "7f8ee14d2d185798c3864178bd450f3d",
"text": "In this paper, a new sensing device that can simultaneously monitor traffic congestion and urban flash floods is presented. This sensing device is based on the combination of passive infrared sensors (PIRs) and ultrasonic rangefinder, and is used for real-time vehicle detection, classification, and speed estimation in the context of wireless sensor networks. This framework relies on dynamic Bayesian Networks to fuse heterogeneous data both spatially and temporally for vehicle detection. To estimate the speed of the incoming vehicles, we first use cross correlation and wavelet transform-based methods to estimate the time delay between the signals of different sensors. We then propose a calibration and self-correction model based on Bayesian Networks to make a joint inference by all sensors about the speed and the length of the detected vehicle. Furthermore, we use the measurements of the ultrasonic and the PIR sensors to perform vehicle classification. Validation data (using an experimental dual infrared and ultrasonic traffic sensor) show a 99% accuracy in vehicle detection, a mean error of 5 kph in vehicle speed estimation, a mean error of 0.7m in vehicle length estimation, and a high accuracy in vehicle classification. Finally, we discuss the computational performance of the algorithm, and show that this framework can be implemented on low-power computational devices within a wireless sensor network setting. Such decentralized processing greatly improves the energy consumption of the system and minimizes bandwidth usage.",
"title": ""
},
{
"docid": "ad9c5cbb46a83e2b517fb548baf83ce0",
"text": "Single-carrier frequency division multiple access (SC-FDMA) has been selected as the uplink access scheme in the UTRA Long Term Evolution (LTE) due to its low peak-to-average power ratio properties compared to orthogonal frequency division multiple access. Nevertheless, in order to achieve such a benefit, it requires a localized allocation of the resource blocks, which naturally imposes a severe constraint on the scheduler design. In this paper, three new channel-aware scheduling algorithms for SC-FDMA are proposed and evaluated in both local and wide area scenarios. Whereas the first maximum expansion (FME) and the recursive maximum expansion (RME) are relative simple solutions to the above-mentioned problem, the minimum area-difference to the envelope (MADE) is a more computational expensive approach, which, on the other hand, performs closer to the optimal combinatorial solution. Simulation results show that adopting a proportional fair metric all the proposed algorithms quickly reach a high level of data-rate fairness. At the same time, they definitely outperform the round-robin scheduling in terms of cell spectral efficiency with gains up to 68.8% in wide area environments.",
"title": ""
},
{
"docid": "3e749b561a67f2cc608f40b15c71098d",
"text": "As it emerged from philosophical analyses and cognitive research, most concepts exhibit typicality effects, and resist to the efforts of defining them in terms of necessary and sufficient conditions. This holds also in the case of many medical concepts. This is a problem for the design of computer science ontologies, since knowledge representation formalisms commonly adopted in this field (such as, in the first place, the Web Ontology Language OWL) do not allow for the representation of concepts in terms of typical traits. The need of representing concepts in terms of typical traits concerns almost every domain of real world knowledge, including medical domains. In particular, in this article we take into account the domain of mental disorders, starting from the DSM-5 descriptions of some specific disorders. We favour a hybrid approach to concept representation, in which ontology oriented formalisms are combined to a geometric representation of knowledge based on conceptual space. As a preliminary step to apply our proposal to mental disorder concepts, we started to develop an OWL ontology of the schizophrenia spectrum, which is as close as possible to the DSM-5 descriptions.",
"title": ""
},
{
"docid": "60c887b5df030cc35ad805494d0d8c57",
"text": "Robots typically possess sensors of different modalities, such as colour cameras, inertial measurement units, and 3D laser scanners. Often, solving a particular problem becomes easier when more than one modality is used. However, while there are undeniable benefits to combine sensors of different modalities the process tends to be complicated. Segmenting scenes observed by the robot into a discrete set of classes is a central requirement for autonomy as understanding the scene is the first step to reason about future situations. Scene segmentation is commonly performed using either image data or 3D point cloud data. In computer vision many successful methods for scene segmentation are based on conditional random fields (CRF) where the maximum a posteriori (MAP) solution to the segmentation can be obtained by inference. In this paper we devise a new CRF inference method for scene segmentation that incorporates global constraints, enforcing the sets of nodes are assigned the same class label. To do this efficiently, the CRF is formulated as a relaxed quadratic program whose MAP solution is found using a gradient-based optimisation approach. The proposed method is evaluated on images and 3D point cloud data gathered in urban environments where image data provides the appearance features needed by the CRF, while the 3D point cloud data provides global spatial constraints over sets of nodes. Comparisons with belief propagation, conventional quadratic programming relaxation, and higher order potential CRF show the benefits of the proposed method.",
"title": ""
},
{
"docid": "9b70f2d928abefa3512cbcb97ab63abb",
"text": "Converging evidence suggests that each parahippocampal and hippocampal subregion contributes uniquely to the encoding, consolidation and retrieval of declarative memories, but their precise roles remain elusive. Current functional thinking does not fully incorporate the intricately connected networks that link these subregions, owing to their organizational complexity; however, such detailed anatomical knowledge is of pivotal importance for comprehending the unique functional contribution of each subregion. We have therefore developed an interactive diagram with the aim to display all of the currently known anatomical connections of the rat parahippocampal–hippocampal network. In this Review, we integrate the existing anatomical knowledge into a concise description of this network and discuss the functional implications of some relatively underexposed connections.",
"title": ""
},
{
"docid": "c10c8708b35aeac01d59ffe2c1d64f3e",
"text": "Social groups can be remarkably smart and knowledgeable when their averaged judgements are compared with the judgements of individuals. Already Galton [Galton F (1907) Nature 75:7] found evidence that the median estimate of a group can be more accurate than estimates of experts. This wisdom of crowd effect was recently supported by examples from stock markets, political elections, and quiz shows [Surowiecki J (2004) The Wisdom of Crowds]. In contrast, we demonstrate by experimental evidence (N = 144) that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks. In the experiment, subjects could reconsider their response to factual questions after having received average or full information of the responses of other subjects. We compare subjects' convergence of estimates and improvements in accuracy over five consecutive estimation periods with a control condition, in which no information about others' responses was provided. Although groups are initially \"wise,\" knowledge about estimates of others narrows the diversity of opinions to such an extent that it undermines the wisdom of crowd effect in three different ways. The \"social influence effect\" diminishes the diversity of the crowd without improvements of its collective error. The \"range reduction effect\" moves the position of the truth to peripheral regions of the range of estimates so that the crowd becomes less reliable in providing expertise for external observers. The \"confidence effect\" boosts individuals' confidence after convergence of their estimates despite lack of improved accuracy. Examples of the revealed mechanism range from misled elites to the recent global financial crisis.",
"title": ""
},
{
"docid": "c0767c58b4a5e81ddc35d045ccaa137f",
"text": "A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy. In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals. In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial to enable sample efficient learning. However, reinforcement learning agents have only recently been endowed with such capacity for hindsight. In this paper, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms. Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency.",
"title": ""
},
{
"docid": "017d1bb9180e5d1f8a01604630ebc40d",
"text": "This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for a low-frequency band and a high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its preexistences.",
"title": ""
},
{
"docid": "9cab2a46c4189ebd0b67edbe5558d305",
"text": "We provide approximation algorithms for several variants of the Firefighter problem on general graphs. The Firefighter problem models the case where an infection or another diffusive process (such as an idea, a computer virus, or a fire) is spreading through a network, and our goal is to stop this infection by using targeted vaccinations. Specifically, we are allowed to vaccinate at most B nodes per time-step (for some budget B), with the goal of minimizing the effect of the infection. The difficulty of this problem comes from its temporal component, since we must choose nodes to vaccinate at every time-step while the infection is spreading through the network, leading to notions of “cuts over time”. We consider two versions of the Firefighter problem: a “non-spreading” model, where vaccinating a node means only that this node cannot be infected; and a “spreading” model where the vaccination itself is an infectious process, such as in the case where the infection is a harmful idea, and the vaccine to it is another infectious idea. We give complexity and approximation results for problems on both models.",
"title": ""
},
{
"docid": "2d42dfd45c0759cd795896179eea113c",
"text": "We present a neural-network based approach to classifying online hate speech in general, as well as racist and sexist speech in particular. Using pre-trained word embeddings and max/mean pooling from simple, fullyconnected transformations of these embeddings, we are able to predict the occurrence of hate speech on three commonly used publicly available datasets. Our models match or outperform state of the art F1 performance on all three datasets using significantly fewer parameters and minimal feature preprocessing compared to previous methods.",
"title": ""
},
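The passage above reduces each text to pooled word-embedding features before classification. Below is a minimal illustrative sketch of that idea in Python; the token-to-vector dictionary, the 300-dimensional embeddings and the logistic-regression head are assumptions for illustration, not the authors' exact architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed_text(tokens, word_vectors, dim=300):
    """Map a token list to a fixed-size vector by max- and mean-pooling
    pre-trained word embeddings (out-of-vocabulary tokens are skipped)."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:
        return np.zeros(2 * dim)
    vecs = np.stack(vecs)
    return np.concatenate([vecs.max(axis=0), vecs.mean(axis=0)])

def train_hate_speech_classifier(tokenized_texts, labels, word_vectors):
    # tokenized_texts, labels and word_vectors are placeholders; any
    # pre-trained embedding table (e.g. GloVe vectors) would work here.
    X = np.stack([embed_text(t, word_vectors) for t in tokenized_texts])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```

A learned fully-connected transformation of the embeddings before pooling, as in the passage, would simply replace the raw vectors with a projected version of them.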
{
"docid": "9ae780074520445bfe0df79532ee1c0d",
"text": "We propose a technique for achieving scalable blockchain consensus by means of a “sample-and-fallback” game: split transactions up into collations affecting small portions of the blockchain state, and require that in order for a collation of transactions to be valid, it must be approved by a randomly selected fixed-size sample taken from a large validator pool. In the exceptional case that a bad collation does pass through employ a mechanism by which a node can “challenge” an invalid collation and escalate the decision to a much larger set of validators. Our scheme is designed as a generalized overlay that can be applied to any underlying blockchain consensus algorithm (e.g. proof of work, proof of stake, social-network consensus, M-of-N semi-trusted validators) and almost any state transition function, provided that state changes are sufficiently “localized”. Our basic designs allow for a network with nodes bounded by O(N) computational power to process a transaction load and state size of O(N2− ), though we also propose an experimental “stacking” strategy for achieving arbitrary scalability guarantees up to a maximum of O(exp(N/k)) transactional load.",
"title": ""
},
{
"docid": "8aaa4ab4879ad55f43114cf8a0bd3855",
"text": "Photo-based activity on social networking sites has recently been identified as contributing to body image concerns. The present study aimed to investigate experimentally the effect of number of likes accompanying Instagram images on women's own body dissatisfaction. Participants were 220 female undergraduate students who were randomly assigned to view a set of thin-ideal or average images paired with a low or high number of likes presented in an Instagram frame. Results showed that exposure to thin-ideal images led to greater body and facial dissatisfaction than average images. While the number of likes had no effect on body dissatisfaction or appearance comparison, it had a positive effect on facial dissatisfaction. These effects were not moderated by Instagram involvement, but greater investment in Instagram likes was associated with more appearance comparison and facial dissatisfaction. The results illustrate how the uniquely social interactional aspects of social media (e.g., likes) can affect body image.",
"title": ""
},
{
"docid": "23989e6276ad8e60b0a451e3e9d5fe50",
"text": "The significant benefits associated with microgrids have led to vast efforts to expand their penetration in electric power systems. Although their deployment is rapidly growing, there are still many challenges to efficiently design, control, and operate microgrids when connected to the grid, and also when in islanded mode, where extensive research activities are underway to tackle these issues. It is necessary to have an across-the-board view of the microgrid integration in power systems. This paper presents a review of issues concerning microgrids and provides an account of research in areas related to microgrids, including distributed generation, microgrid value propositions, applications of power electronics, economic issues, microgrid operation and control, microgrid clusters, and protection and communications issues.",
"title": ""
},
{
"docid": "b836df8acd489acae10dbd8d58f6a8b3",
"text": "This paper presents a benchmark dataset for the task of inter-sentence relation extraction. The paper explains the distant supervision method followed for creating the dataset for inter-sentence relation extraction, involving relations previously used for standard intrasentence relation extraction task. The study evaluates baseline models such as bag-of-words and sequence based recurrent neural network models on the developed dataset and shows that recurrent neural network models are more useful for the task of intra-sentence relation extraction. Comparing the results of the present work on iner-sentence relation extraction with previous work on intra-sentence relation extraction, the study suggests the need for more sophisticated models to handle long-range information between entities across sentences.",
"title": ""
},
{
"docid": "688848d25ef154a797f85e03987b795f",
"text": "In this paper, we propose an omnidirectional mobile mechanism with surface contact. This mechanism is expected to perform on rough terrain and weak ground at disaster sites. In the discussion on the drive mechanism, we explain how a two axes orthogonal drive transmission system is important and we propose a principle drive mechanism for omnidirectional motion. In addition, we demonstrated that the proposed drive mechanism has potential for omnidirectional movement on rough ground by conducting experiments with prototypes.",
"title": ""
}
] |
scidocsrr
|
3a3f699a6eddedfeda60e09c59854499
|
ECG Beats Classification Using Mixture of Features
|
[
{
"docid": "45be193fe04064886615367dd9225c92",
"text": "Automatic electrocardiogram (ECG) beat classification is essential to timely diagnosis of dangerous heart conditions. Specifically, accurate detection of premature ventricular contractions (PVCs) is imperative to prepare for the possible onset of life-threatening arrhythmias. Although many groups have developed highly accurate algorithms for detecting PVC beats, results have generally been limited to relatively small data sets. Additionally, many of the highest classification accuracies (>90%) have been achieved in experiments where training and testing sets overlapped significantly. Expanding the overall data set greatly reduces overall accuracy due to significant variation in ECG morphology among different patients. As a result, we believe that morphological information must be coupled with timing information, which is more constant among patients, in order to achieve high classification accuracy for larger data sets. With this approach, we combined wavelet-transformed ECG waves with timing information as our feature set for classification. We used select waveforms of 18 files of the MIT/BIH arrhythmia database, which provides an annotated collection of normal and arrhythmic beats, for training our neural-network classifier. We then tested the classifier on these 18 training files as well as 22 other files from the database. The accuracy was 95.16% over 93,281 beats from all 40 files, and 96.82% over the 22 files outside the training set in differentiating normal, PVC, and other beats",
"title": ""
}
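The passage above couples wavelet-transformed beat morphology with timing information before feeding a neural-network classifier. The sketch below illustrates one plausible feature pipeline; the wavelet family, window length, R-R interval features and MLP size are illustrative assumptions rather than the exact choices of the paper.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def beat_features(window, rr_prev, rr_next, wavelet="db2", level=4):
    """Concatenate wavelet coefficients of the beat window (morphology)
    with R-R interval features (timing)."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    morphology = np.concatenate(coeffs)
    timing = np.array([rr_prev, rr_next, rr_prev / max(rr_next, 1e-6)])
    return np.concatenate([morphology, timing])

def train_beat_classifier(windows, rr_pairs, labels):
    # windows: equal-length sample segments around each R-peak; rr_pairs:
    # (previous, next) R-R intervals in seconds; labels: normal / PVC /
    # other annotations, e.g. from the MIT-BIH database (loading omitted).
    X = np.stack([beat_features(w, rp, rn)
                  for w, (rp, rn) in zip(windows, rr_pairs)])
    return MLPClassifier(hidden_layer_sizes=(40,), max_iter=500).fit(X, labels)
```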
] |
[
{
"docid": "801a197f630189ab0a9b79d3cbfe904b",
"text": "Historically, Vivaldi arrays are known to suffer from high cross-polarization when scanning in the nonprincipal planes—a fault without a universal solution. In this paper, a solution to this issue is proposed in the form of a new Vivaldi-type array with low cross-polarization termed the Sliced Notch Antenna (SNA) array. For the first proof-of-concept demonstration, simulations and measurements are comparatively presented for two single-polarized <inline-formula> <tex-math notation=\"LaTeX\">$19 \\times 19$ </tex-math></inline-formula> arrays—the proposed SNA and its Vivaldi counterpart—each operating over a 1.2–12 GHz (10:1) band. Both arrays are built using typical vertically integrated printed-circuit board cards, and are designed to exhibit VSWR < 2.5 within a 60° scan cone over most of the 10:1 band as infinite arrays. Measurement results compare very favorably with full-wave finite array simulations that include array truncation effects. The SNA array element demonstrates well-behaved polarization performance versus frequency, with more than 20 dB of D-plane <inline-formula> <tex-math notation=\"LaTeX\">$\\theta \\!=\\!45 {^{\\circ }}$ </tex-math></inline-formula> polarization purity improvement at the high frequency. Moreover, the SNA element also: 1) offers better suppression of classical Vivaldi E-plane scan blindnesses; 2) requires fewer plated through vias for stripline-based designs; and 3) allows relaxed adjacent element electrical contact requirements for dual-polarized arrangements.",
"title": ""
},
{
"docid": "a9201c32c903eba5cc25a744134a1c3c",
"text": "This paper proposes a new approach to sparsity, called the horseshoe estimator, which arises from a prior based on multivariate-normal scale mixtures. We describe the estimator’s advantages over existing approaches, including its robustness, adaptivity to different sparsity patterns and analytical tractability. We prove two theorems: one that characterizes the horseshoe estimator’s tail robustness and the other that demonstrates a super-efficient rate of convergence to the correct estimate of the sampling density in sparse situations. Finally, using both real and simulated data, we show that the horseshoe estimator corresponds quite closely to the answers obtained by Bayesian model averaging under a point-mass mixture prior.",
"title": ""
},
{
"docid": "1ae3eb81ae75f6abfad4963ee0056be5",
"text": "Due to the shared responsibility model of clouds, tenants have to manage the security of their workloads and data. Developing security solutions using VMs or containers creates further problems as these resources also need to be secured. In this paper, we advocate for taking a serverless approach by proposing six serverless design patterns to build security services in the cloud. For each design pattern, we describe the key advantages and present applications and services utilizing the pattern. Using the proposed patterns as building blocks, we introduce a threat-intelligence platform that collects logs from various sources, alerts malicious activities, and takes actions against such behaviors. We also discuss the limitations of serverless design and how future implementations can overcome those limitations.",
"title": ""
},
{
"docid": "a9cfb59c0187466d64010a3f39ac0e30",
"text": "Model-free Reinforcement Learning (RL) offers an attractive approach to learn control policies for highdimensional systems, but its relatively poor sample complexity often necessitates training in simulated environments. Even in simulation, goal-directed tasks whose natural reward function is sparse remain intractable for state-of-the-art model-free algorithms for continuous control. The bottleneck in these tasks is the prohibitive amount of exploration required to obtain a learning signal from the initial state of the system. In this work, we leverage physical priors in the form of an approximate system dynamics model to design a curriculum for a model-free policy optimization algorithm. Our Backward Reachability Curriculum (BaRC) begins policy training from states that require a small number of actions to accomplish the task, and expands the initial state distribution backwards in a dynamically-consistent manner once the policy optimization algorithm demonstrates sufficient performance. BaRC is general, in that it can accelerate training of any model-free RL algorithm on a broad class of goal-directed continuous control MDPs. Its curriculum strategy is physically intuitive, easy-to-tune, and allows incorporating physical priors to accelerate training without hindering the performance, flexibility, and applicability of the model-free RL algorithm. We evaluate our approach on two representative dynamic robotic learning problems and find substantial performance improvement relative to previous curriculum generation techniques and naı̈ve exploration strategies.",
"title": ""
},
{
"docid": "e89891b0f04902d01468fa0e2e44f9ac",
"text": "It is a general assumption that pneumatic muscle-type actuators will play an important role in the development of an assistive rehabilitation robotics system. In the last decade, the development of a pneumatic muscle actuated lower-limb leg orthosis has been rather slow compared to other types of actuated leg orthoses that use AC motors, DC motors, pneumatic cylinders, linear actuators, series elastic actuators (SEA) and brushless servomotors. However, recent years have shown that the interest in this field has grown exponentially, mainly due to the demand for a more compliant and interactive human-robotics system. This paper presents a survey of existing lower-limb leg orthoses for rehabilitation, which implement pneumatic muscle-type actuators, such as McKibben artificial muscles, rubbertuators, air muscles, pneumatic artificial muscles (PAM) or pneumatic muscle actuators (PMA). It reviews all the currently existing lower-limb rehabilitation orthosis systems in terms of comparison and evaluation of the design, as well as the control scheme and strategy, with the aim of clarifying the current and on-going research in the lower-limb robotic rehabilitation field.",
"title": ""
},
{
"docid": "5ac2930a623b542cf8ebbea6314c5ef1",
"text": "BACKGROUND\nTelomerase continues to generate substantial attention both because of its pivotal roles in cellular proliferation and aging and because of its unusual structure and mechanism. By replenishing telomeric DNA lost during the cell cycle, telomerase overcomes one of the many hurdles facing cellular immortalization. Functionally, telomerase is a reverse transcriptase, and it shares structural and mechanistic features with this class of nucleotide polymerases. Telomerase is a very unusual reverse transcriptase because it remains stably associated with its template and because it reverse transcribes multiple copies of its template onto a single primer in one reaction cycle.\n\n\nSCOPE OF REVIEW\nHere, we review recent findings that illuminate our understanding of telomerase. Even though the specific emphasis is on structure and mechanism, we also highlight new insights into the roles of telomerase in human biology.\n\n\nGENERAL SIGNIFICANCE\nRecent advances in the structural biology of telomerase, including high resolution structures of the catalytic subunit of a beetle telomerase and two domains of a ciliate telomerase catalytic subunit, provide new perspectives into telomerase biochemistry and reveal new puzzles.",
"title": ""
},
{
"docid": "c4ee2810b5a799a16e2ea66073719050",
"text": "Recently, Neural Networks have been proven extremely effective in many natural language processing tasks such as sentiment analysis, question answering, or machine translation. Aiming to exploit such advantages in the Ontology Learning process, in this technical report we present a detailed description of a Recurrent Neural Network based system to be used to pursue such goal.",
"title": ""
},
{
"docid": "21c15eb5420a7345cc2900f076b15ca1",
"text": "Prokaryotic CRISPR-Cas genomic loci encode RNA-mediated adaptive immune systems that bear some functional similarities with eukaryotic RNA interference. Acquired and heritable immunity against bacteriophage and plasmids begins with integration of ∼30 base pair foreign DNA sequences into the host genome. CRISPR-derived transcripts assemble with CRISPR-associated (Cas) proteins to target complementary nucleic acids for degradation. Here we review recent advances in the structural biology of these targeting complexes, with a focus on structural studies of the multisubunit Type I CRISPR RNA-guided surveillance and the Cas9 DNA endonuclease found in Type II CRISPR-Cas systems. These complexes have distinct structures that are each capable of site-specific double-stranded DNA binding and local helix unwinding.",
"title": ""
},
{
"docid": "1d5336ce334476a45503e7b73ec025f2",
"text": "The science of complexity is based on a new way of thinking that stands in sharp contrast to the philosophy underlying Newtonian science, which is based on reductionism, determinism, and objective knowledge. This paper reviews the historical development of this new world view, focusing on its philosophical foundations. Determinism was challenged by quantum mechanics and chaos theory. Systems theory replaced reductionism by a scientifically based holism. Cybernetics and postmodern social science showed that knowledge is intrinsically subjective. These developments are being integrated under the header of “complexity science”. Its central paradigm is the multi-agent system. Agents are intrinsically subjective and uncertain about their environment and future, but out of their local interactions, a global organization emerges. Although different philosophers, and in particular the postmodernists, have voiced similar ideas, the paradigm of complexity still needs to be fully assimilated by philosophy. This will throw a new light on old philosophical issues such as relativism, ethics and the role of the subject.",
"title": ""
},
{
"docid": "cf0f9a3d57ace2a9dbd65ac09b08d3e5",
"text": "Prosodic modeling is a core problem in speech synthesis. The key challenge is producing desirable prosody from textual input containing only phonetic information. In this preliminary study, we introduce the concept of “style tokens” in Tacotron, a recently proposed end-to-end neural speech synthesis model. Using style tokens, we aim to extract independent prosodic styles from training data. We show that without annotation data or an explicit supervision signal, our approach can automatically learn a variety of prosodic variations in a purely data-driven way. Importantly, each style token corresponds to a fixed style factor regardless of the given text sequence. As a result, we can control the prosodic style of synthetic speech in a somewhat predictable and globally consistent way.",
"title": ""
},
{
"docid": "57dfc6f8b462512a3a2328f897ea44a6",
"text": "We introduce Casper, a proof of stake-based finality system which overlays an existing proof of work blockchain. Casper is a partial consensus mechanism combining proof of stake algorithm research and Byzantine fault tolerant consensus theory. We introduce our system, prove some desirable features, and show defenses against long range revisions and catastrophic crashes. The Casper overlay provides almost any proof of work chain with additional protections against block reversions.",
"title": ""
},
{
"docid": "b24babd50bd6c7592e272f387e89953a",
"text": "Distant-supervised relation extraction inevitably suffers from wrong labeling problems because it heuristically labels relational facts with knowledge bases. Previous sentence level denoise models don’t achieve satisfying performances because they use hard labels which are determined by distant supervision and immutable during training. To this end, we introduce an entity-pair level denoise method which exploits semantic information from correctly labeled entity pairs to correct wrong labels dynamically during training. We propose a joint score function which combines the relational scores based on the entity-pair representation and the confidence of the hard label to obtain a new label, namely a soft label, for certain entity pair. During training, soft labels instead of hard labels serve as gold labels. Experiments on the benchmark dataset show that our method dramatically reduces noisy instances and outperforms the state-of-the-art systems.",
"title": ""
},
{
"docid": "02cd879a83070af9842999c7215e7f92",
"text": "Automatic genre classification of music is an important topic in Music Information Retrieval with many interesting applications. A solution to genre classification would allow for machine tagging of songs, which could serve as metadata for building song recommenders. In this paper, we investigate the following question: Given a song, can we automatically detect its genre? We look at three characteristics of a song to determine its genre: timbre, chord transitions, and lyrics. For each method, we develop multiple data models and apply supervised machine learning algorithms including k-means, k-NN, multi-class SVM and Naive Bayes. We are able to accurately classify 65− 75% of the songs from each genre in a 5-genre classification problem between Rock, Jazz, Pop, Hip-Hop, and Metal music.",
"title": ""
},
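As a rough illustration of the timbre-based branch described in the passage above, the sketch below trains a multi-class SVM on MFCC statistics; the feature choice (MFCC mean and standard deviation), the RBF kernel and the file loading are assumptions, and the chord-transition and lyrics features would simply be concatenated as extra columns.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def timbre_features(path, n_mfcc=13):
    """Summarize a song's timbre by the mean and standard deviation
    of its MFCCs over time."""
    y, sr = librosa.load(path, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_genre_classifier(paths, genres):
    # paths: audio files; genres: labels from
    # {"rock", "jazz", "pop", "hiphop", "metal"} (placeholders).
    X = np.stack([timbre_features(p) for p in paths])
    return SVC(kernel="rbf", C=10.0).fit(X, genres)
```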
{
"docid": "defde14c64f5eecda83cf2a59c896bc0",
"text": "Time series shapelets are discriminative subsequences and their similarity to a time series can be used for time series classification. Since the discovery of time series shapelets is costly in terms of time, the applicability on long or multivariate time series is difficult. In this work we propose Ultra-Fast Shapelets that uses a number of random shapelets. It is shown that Ultra-Fast Shapelets yield the same prediction quality as current state-of-theart shapelet-based time series classifiers that carefully select the shapelets by being by up to three orders of magnitudes. Since this method allows a ultra-fast shapelet discovery, using shapelets for long multivariate time series classification becomes feasible. A method for using shapelets for multivariate time series is proposed and Ultra-Fast Shapelets is proven to be successful in comparison to state-of-the-art multivariate time series classifiers on 15 multivariate time series datasets from various domains. Finally, time series derivatives that have proven to be useful for other time series classifiers are investigated for the shapelet-based classifiers. It is shown that they have a positive impact and that they are easy to integrate with a simple preprocessing step, without the need of adapting the shapelet discovery algorithm.",
"title": ""
},
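The core idea in the passage above, replacing an expensive shapelet search with randomly drawn subsequences and using minimum distances as features, can be sketched as follows; the number of shapelets, the minimum length and the random-forest classifier on top are illustrative choices, not necessarily those of the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def sample_random_shapelets(series_list, n_shapelets=500, min_len=5):
    """Draw random subsequences (shapelets) from random training series."""
    shapelets = []
    for _ in range(n_shapelets):
        s = np.asarray(series_list[rng.integers(len(series_list))])
        length = rng.integers(min_len, len(s) + 1)
        start = rng.integers(0, len(s) - length + 1)
        shapelets.append(s[start:start + length])
    return shapelets

def min_distance(series, shapelet):
    """Distance of a shapelet to its best-matching position in a series."""
    L = len(shapelet)
    return min(np.linalg.norm(series[i:i + L] - shapelet)
               for i in range(len(series) - L + 1))

def shapelet_transform(series_list, shapelets):
    return np.array([[min_distance(np.asarray(s), sh) for sh in shapelets]
                     for s in series_list])

def fit(series_list, labels):
    shapelets = sample_random_shapelets(series_list)
    X = shapelet_transform(series_list, shapelets)
    return RandomForestClassifier(n_estimators=200).fit(X, labels), shapelets
```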
{
"docid": "8481bf05a0afc1de516d951474fb9d92",
"text": "We propose an approach to Multitask Learning (MTL) to make deep learning models faster and lighter for applications in which multiple tasks need to be solved simultaneously, which is particularly useful in embedded, real-time systems. We develop a multitask model for both Object Detection and Semantic Segmentation and analyze the challenges that appear during its training. Our multitask network is 1.6x faster, lighter and uses less memory than deploying the single-task models in parallel. We conclude that MTL has the potential to give superior performance in exchange of a more complex training process that introduces challenges not present in single-task models.",
"title": ""
},
{
"docid": "6c08b9488b5f5c7e4b91d2b8941a9ced",
"text": "Modern affiliate marketing networks provide an infrastructure for connecting merchants seeking customers with independent marketers (affiliates) seeking compensation. This approach depends on Web cookies to identify, at checkout time, which affiliate should receive a commission. Thus, scammers ``stuff'' their own cookies into a user's browser to divert this revenue. This paper provides a measurement-based characterization of cookie-stuffing fraud in online affiliate marketing. We use a custom-built Chrome extension, AffTracker, to identify affiliate cookies and use it to gather data from hundreds of thousands of crawled domains which we expect to be targeted by fraudulent affiliates. Overall, despite some notable historical precedents, we found cookie-stuffing fraud to be relatively scarce in our data set. Based on what fraud we detected, though, we identify which categories of merchants are most targeted and which third-party affiliate networks are most implicated in stuffing scams. We find that large affiliate networks are targeted significantly more than merchant-run affiliate programs. However, scammers use a wider range of evasive techniques to target merchant-run affiliate programs to mitigate the risk of detection suggesting that in-house affiliate programs enjoy stricter policing.",
"title": ""
},
{
"docid": "b5238bfae025d46647526229dd5e00dd",
"text": "Influences of discharge voltage on wheat seed vitality were investigated in a dielectric barrier discharge (DBD) plasma system at atmospheric pressure and temperature. Six different treatments were designed, and their discharge voltages were 0.0, 9.0, 11.0, 13.0, 15.0, and 17.0 kV, respectively. Fifty seeds were exposed to the DBD plasma atmosphere with an air flow rate of 1.5 L min-1 for 4 min in each treatment, and then the DBD plasma-treated seeds were prepared for germination in several Petri dishes. Each treatment was repeated three times. Germination indexes, growth indexes, surface topography, water uptake, permeability, and α-amylase activity were measured. DBD plasma treatment at appropriate energy levels had positive effects on wheat seed germination and seedling growth. The germination potential, germination index, and vigor index significantly increased by 31.4%, 13.9%, and 54.6% after DBD treatment at 11.0 kV, respectively, in comparison to the control. Shoot length, root length, dry weight, and fresh weight also significantly increased after the DBD plasma treatment. The seed coat was softened and cracks were observed, systematization of the protein was strengthened, and amount of free starch grain increased after the DBD plasma treatment. Water uptake, relative electroconductivity, soluble protein, and α-amylase activity of the wheat seed were also significantly improved after the DBD plasma treatment. Roles of active species and ultraviolet radiation generated in the DBD plasma process in wheat seed germination and seedling growth are proposed. Bioelectromagnetics. 39:120-131, 2018. © 2017 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "4ec7480aeb1b3193d760d554643a1660",
"text": "The ability to learn is arguably the most crucial aspect of human intelligence. In reinforcement learning, we attempt to formalize a certain type of learning that is based on rewards and penalties. These supervisory signals should guide an agent to learn optimal behavior. In particular, this research focuses on deep reinforcement learning, where the agent should learn to play video games solely from pixel input. This thesis contributes to deep reinforcement learning research by assessing several variations to an existing state-of-the-art algorithm. First, we provide an extensive analysis on how the design decisions of the agent’s deep neural network affect its performance. Second, we introduce a novel neural layer that allows for local specializations in the visual input of the agents, as opposed to the global weight sharing that occurs in convolutional layers. Third, we introduce a ‘what’ and ‘where’ neural network architecture, inspired by the information flow of the visual cortical areas in the human brain. Finally, we explore prototype based deep reinforcement learning by introducing a novel output layer that is largely inspired by learning vector quantization. In a subset of our experiments, we show substantial improvements compared to existing alternatives.",
"title": ""
},
{
"docid": "7490e0039b8060ec1a4c27405a20a513",
"text": "Trajectories obtained from GPS-enabled taxis grant us an opportunity to not only extract meaningful statistics, dynamics and behaviors about certain urban road users, but also to monitor adverse and/or malicious events. In this paper we focus on the problem of detecting anomalous routes by comparing against historically “normal” routes. We propose a real-time method, iBOAT, that is able to detect anomalous trajectories “on-the-fly”, as well as identify which parts of the trajectory are responsible for its anomalousness. We evaluate our method on a large dataset of taxi GPS logs and verify that it has excellent accuracy (AUC ≥ 0.99) and overcomes many of the shortcomings of other state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
c67c4c835030ccf135395648b6091073
|
An Empirical Comparison of Four Text Mining Methods
|
[
{
"docid": "d319a17ad2fa46e0278e0b0f51832f4b",
"text": "Automatic Essay Assessor (AEA) is a system that utilizes information retrieval techniques such as Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Latent Dirichlet Allocation (LDA) for automatic essay grading. The system uses learning materials and relatively few teacher-graded essays for calibrating the scoring mechanism before grading. We performed a series of experiments using LSA, PLSA and LDA for document comparisons in AEA. In addition to comparing the methods on a theoretical level, we compared the applicability of LSA, PLSA, and LDA to essay grading with empirical data. The results show that the use of learning materials as training data for the grading model outperforms the k-NN-based grading methods. In addition to this, we found that using LSA yielded slightly more accurate grading than PLSA and LDA. We also found that the division of the learning materials in the training data is crucial. It is better to divide learning materials into sentences than paragraphs.",
"title": ""
},
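A minimal sketch of an LSA-style grading step similar in spirit to the system in the passage above: fit a latent space on the learning materials, then score a new essay by its similarity to already-graded essays. The TF-IDF weighting, the number of latent dimensions and the nearest-grade rule are simplifying assumptions, not the calibration procedure of the actual AEA system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def grade_with_lsa(material_passages, graded_essays, grades, new_essay, k=100):
    """Project texts into an LSA space built from course material and
    assign the grade of the most similar pre-graded essay."""
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(material_passages + graded_essays)
    svd = TruncatedSVD(n_components=min(k, X.shape[1] - 1)).fit(X)
    essays_lsa = svd.transform(vectorizer.transform(graded_essays))
    new_lsa = svd.transform(vectorizer.transform([new_essay]))
    sims = cosine_similarity(new_lsa, essays_lsa)[0]
    return grades[int(sims.argmax())]
```

Splitting the learning materials into sentences rather than paragraphs, which the passage reports works better, only changes how material_passages is built.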
{
"docid": "0f58d491e74620f43df12ba0ec19cda8",
"text": "Latent Dirichlet allocation (LDA) (Blei, Ng, Jordan 2003) is a fully generative statistical language model on the content and topics of a corpus of documents. In this paper we apply a modification of LDA, the novel multi-corpus LDA technique for web spam classification. We create a bag-of-words document for every Web site and run LDA both on the corpus of sites labeled as spam and as non-spam. In this way collections of spam and non-spam topics are created in the training phase. In the test phase we take the union of these collections, and an unseen site is deemed spam if its total spam topic probability is above a threshold. As far as we know, this is the first web retrieval application of LDA. We test this method on the UK2007-WEBSPAM corpus, and reach a relative improvement of 11% in F-measure by a logistic regression based combination with strong link and content baseline classifiers.",
"title": ""
}
] |
[
{
"docid": "6d570aabfbf4f692fc36a0ef5151a469",
"text": "Background: Balance is a component of basic needs for daily activities and it plays an important role in static and dynamic activities. Core stabilization training is thought to improve balance, postural control, and reduce the risk of lower extremity injuries. The purpose of this study was to study the effect of core stabilizing program on balance in spastic diplegic cerebral palsy children. Subjects and Methods: Thirty diplegic cerebral palsy children from both sexes ranged in age from six to eight years participated in this study. They were assigned randomly into two groups of equal numbers, control group (A) children were received selective therapeutic exercises and study group (B) children were received selective therapeutic exercises plus core stabilizing program for eight weeks. Each patient of the two groups was evaluated before and after treatment by Biodex Balance System in laboratory of balance in faculty of physical therapy (antero posterior, medio lateral and overall stability). Patients in both groups received traditional physical therapy program for one hour per day and three sessions per week and group (B) were received core stabilizing program for eight weeks three times per week. Results: There was no significant difference between the two groups in all measured variables before wearing the orthosis (p>0.05), while there was significant difference when comparing pre and post mean values of all measured variables in each group (p<0.01). When comparing post mean values between both groups, the results revealed significant improvement in favor of group (B) (p<0.01). Conclusion: core stabilizing program is an effective therapeutic exercise to improve balance in diplegic cerebral palsy children.",
"title": ""
},
{
"docid": "ae8fde6c520fb4d1e18c4ff19d59a8d8",
"text": "Visual-to-auditory Sensory Substitution Devices (SSDs) are non-invasive sensory aids that provide visual information to the blind via their functioning senses, such as audition. For years SSDs have been confined to laboratory settings, but we believe the time has come to use them also for their original purpose of real-world practical visual rehabilitation. Here we demonstrate this potential by presenting for the first time new features of the EyeMusic SSD, which gives the user whole-scene shape, location & color information. These features include higher resolution and attempts to overcome previous stumbling blocks by being freely available to download and run from a smartphone platform. We demonstrate with use the EyeMusic the potential of SSDs in noisy real-world scenarios for tasks such as identifying and manipulating objects. We then discuss the neural basis of using SSDs, and conclude by discussing other steps-in-progress on the path to making their practical use more widespread.",
"title": ""
},
{
"docid": "30f4dfd49f1ba53f3a4786ae60da3186",
"text": "In order to improve the speed limitation of serial scrambler, we propose a new parallel scrambler architecture and circuit to overcome the limitation of serial scrambler. A very systematic parallel scrambler design methodology is first proposed. The critical path delay is only one D-register and one xor gate of two inputs. Thus, it is superior to other proposed circuits in high-speed applications. A new DET D-register with embedded xor operation is used as a basic circuit block of the parallel scrambler. Measurement results show the proposed parallel scrambler can operate in 40 Gbps with 16 outputs in TSMC 0.18-/spl mu/m CMOS process.",
"title": ""
},
{
"docid": "928ed1aed332846176ad52ce7cc0754c",
"text": "What is the price of anarchy when unsplittable demands are ro uted selfishly in general networks with load-dependent edge dela ys? Motivated by this question we generalize the model of [14] to the case of weighted congestion games. We show that varying demands of users crucially affect the n ature of these games, which are no longer isomorphic to exact potential gam es, even for very simple instances. Indeed we construct examples where even a single-commodity (weighted) network congestion game may have no pure Nash equ ilibrium. On the other hand, we study a special family of networks (whic h we call the l-layered networks ) and we prove that any weighted congestion game on such a network with resource delays equal to the congestions, pos sesses a pure Nash Equilibrium. We also show how to construct one in pseudo-pol yn mial time. Finally, we give a surprising answer to the question above for s uch games: The price of anarchy of any weighted l-layered network congestion game with m edges and edge delays equal to the loads, is Θ (",
"title": ""
},
{
"docid": "dde4e45fd477808d40b3b06599d361ff",
"text": "In this paper, we present the basic features of the flight control of the SkySails towing kite system. After introducing the coordinate definitions and the basic system dynamics, we introduce a novel model used for controller design and justify its main dynamics with results from system identification based on numerous sea trials. We then present the controller design, which we successfully use for operational flights for several years. Finally, we explain the generation of dynamical flight patterns.",
"title": ""
},
{
"docid": "7a005d66591330d6fdea5ffa8cb9020a",
"text": "First impressions influence the behavior of people towards a newly encountered person or a human-like agent. Apart from the physical characteristics of the encountered face, the emotional expressions displayed on it, as well as ambient information affect these impressions. In this work, we propose an approach to predict the first impressions people will have for a given video depicting a face within a context. We employ pre-trained Deep Convolutional Neural Networks to extract facial expressions, as well as ambient information. After video modeling, visual features that represent facial expression and scene are combined and fed to Kernel Extreme Learning Machine regressor. The proposed system is evaluated on the ChaLearn Challenge Dataset on First Impression Recognition, where the classification target is the ”Big Five” personality trait labels for each video. Our system achieved an accuracy of 90.94% on the sequestered test set, 0.36% points below the top system in the competition.",
"title": ""
},
{
"docid": "309e14c07a3a340f7da15abeb527231d",
"text": "The random forest algorithm, proposed by L. Breiman in 2001, has been extremely successful as a general-purpose classification and regression method. The approach, which combines several randomized decision trees and aggregates their predictions by averaging, has shown excellent performance in settings where the number of variables is much larger than the number of observations. Moreover, it is versatile enough to be applied to large-scale problems, is easily adapted to various ad-hoc learning tasks, and returns measures of variable importance. The present article reviews the most recent theoretical and methodological developments for random forests. Emphasis is placed on the mathematical forces driving the algorithm, with special attention given to the selection of parameters, the resampling mechanism, and variable importance measures. This review is intended to provide non-experts easy access to the main ideas.",
"title": ""
},
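As a concrete, hedged illustration of the setting described in the passage above (many more variables than informative ones, plus variable-importance measures), the following uses scikit-learn's random forest on synthetic data; the dataset and hyperparameters are arbitrary example choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data with many noise features, the regime discussed above.
X, y = make_classification(n_samples=500, n_features=200, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=300, max_features="sqrt",
                                random_state=0).fit(X_tr, y_tr)
print("test accuracy:", forest.score(X_te, y_te))
# Variable-importance measures, one of the features highlighted above.
print("top-5 importances:", sorted(forest.feature_importances_)[-5:])
```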
{
"docid": "37860036f1b9926a8d46d6542a6688f2",
"text": "A three-dimensional extended finite element method (X-FEM) coupled with a narrow band fast marching method (FMM) is developed and implemented in the Abaqus finite element package for curvilinear fatigue crack growth and life prediction analysis of metallic structures. Given the level set representation of arbitrary crack geometry, the narrow band FMM provides an efficient way to update the level set values of its evolving crack front. In order to capture the plasticity induced crack closure effect, an element partition and state recovery algorithm for dynamically allocated Gauss points is adopted for efficient integration of historical state variables in the near-tip plastic zone. An element-based penalty approach is also developed to model crack closure and friction. The proposed technique allows arbitrary insertion of initial cracks, independent of a base 3D model, and allows non-self-similar crack growth pattern without conforming to the existing mesh or local remeshing. Several validation examples are presented to demonstrate the extraction of accurate stress intensity factors for both static and growing cracks. Fatigue life prediction of a flawed helicopter lift frame under the ASTERIX spectrum load is presented to demonstrate the analysis procedure and capabilities of the method.",
"title": ""
},
{
"docid": "d040683d793e79732fb6c471f098a022",
"text": "In this work we address the issue of sustainable cities by focusing on one of their very central components: daily mobility. Indeed, if cities can be interpreted as spatial organizations allowing social interactions, the number of daily movements needed to reach this goal is continuously increasing. Therefore, improving urban accessibility merely results in increasing traffic and its negative externalities (congestion, accidents, pollution, noise, etc.), while eventually reducing the quality of life of people in the city. This is why several urban-transport policies are implemented in order to reduce individual mobility impacts while maintaining equitable access to the city. This challenge is however non-trivial and therefore we propose to investigate this issue from the complex systems point of view. The real spatial-temporal urban accessibility of citizens cannot be approximated just by focusing on space and implies taking into account the space-time activity patterns of individuals, in a more dynamic way. Thus, given the importance of local interactions in such a perspective, an agent based approach seems to be a relevant solution. This kind of individual based and “interactionist” approach allows us to explore the possible impact of individual behaviors on the overall dynamics of the city but also the possible impact of global measures on individual behaviors. In this paper, we give an overview of the Miro Project and then focus on the GaMiroD model design from real data analysis to model exploration tuned by transportation-oriented scenarios. Among them, we start with the the impact of a LEZ (Low Emission Zone) in the city center.",
"title": ""
},
{
"docid": "bd1a13c94d0e12b4ba9f14fef47d2564",
"text": "Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f = u+ η, and η is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle’s projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation. Source Code ANSI C source code to produce the same results as the demo is accessible at the IPOL web page of this article1.",
"title": ""
},
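For the degradation model and the Chambolle projection algorithm described in the passage above, scikit-image ships a ready-made implementation; the example below is an illustrative use of it (the noise level and regularization weight are arbitrary), not the paper's own ANSI C code.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_tv_chambolle

# Degradation model f = u + eta with i.i.d. zero-mean Gaussian noise.
u = img_as_float(data.camera())
rng = np.random.default_rng(0)
f = u + rng.normal(scale=0.1, size=u.shape)

# Chambolle-style TV minimization; `weight` controls the strength of the
# total-variation regularization term.
denoised = denoise_tv_chambolle(f, weight=0.1)

# For color images, recent scikit-image releases expose a channel_axis
# argument, which corresponds to the vectorial TV mentioned above.
```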
{
"docid": "2dd9bb2536fdc5e040544d09fe3dd4fa",
"text": "Low 1/f noise, low-dropout (LDO) regulators are becoming critical for the supply regulation of deep-submicron analog baseband and RF system-on-chip designs. A low-noise, high accuracy LDO regulator (LN-LDO) utilizing a chopper stabilized error amplifier is presented. In order to achieve fast response during load transients, a current-mode feedback amplifier (CFA) is designed as a second stage driving the regulation FET. In order to reduce clock feed-through and 1/f noise accumulation at the chopping frequency, a first-order digital SigmaDelta noise-shaper is used for chopping clock spectral spreading. With up to 1 MHz noise-shaped modulation clock, the LN-LDO achieves a noise spectral density of 32 nV/radic(Hz) and a PSR of 38 dB at 100 kHz. The proposed LDO is shown to reduce the phase noise of an integrated 32 MHz temperature compensated crystal oscillator (TCXO) at 10 kHz offset by 15 dB. Due to reduced 1/f noise requirements, the error amplifier silicon area is reduced by 75%, and the overall regulator area is reduced by 50% with respect to an equivalent noise static regulator. The current-mode feedback second stage buffer reduces regulator settling time by 60% in comparison to an equivalent power consumption voltage mode buffer, achieving 0.6 mus settling time for a 25-mA load step. The LN-LDO is designed and fabricated on a 0.25 mum CMOS process with five layers of metal, occupying 0.88 mm2.",
"title": ""
},
{
"docid": "8e23dc265f4d48caae7a333db72d887e",
"text": "We introduce a new mechanism for rooting trust in a cloud computing environment called the Trusted Virtual Environment Module (TVEM). The TVEM helps solve the core security challenge of cloud computing by enabling parties to establish trust relationships where an information owner creates and runs a virtual environment on a platform owned by a separate service provider. The TVEM is a software appliance that provides enhanced features for cloud virtual environments over existing Trusted Platform Module virtualization techniques, which includes an improved application program interface, cryptographic algorithm flexibility, and a configurable modular architecture. We define a unique Trusted Environment Key that combines trust from the information owner and the service provider to create a dual root of trust for the TVEM that is distinct for every virtual environment and separate from the platform’s trust. This paper presents the requirements, design, and architecture of our approach.",
"title": ""
},
{
"docid": "5e7b7df188ab7983a7e364c50926c58c",
"text": "Dopamine-β-hydroxylase (DBH, EC 1.14.17.1) is an enzyme with implications in various neuropsychiatric and cardiovascular diseases and is a known drug target. There is a dearth of cost effective and fast method for estimation of activity of this enzyme. A sensitive UHPLC based method for the estimation of DBH activity in human sera samples based on separation of substrate tyramine from the product octopamine in 3 min is described here. In this newly developed protocol, a Solid Phase Extraction (SPE) sample purification step prior to LC separation, selectively removes interferences from the reaction cocktail with almost no additional burden on analyte recovery. The response was found to be linear with an r2 = 0.999. The coefficient of variation for assay precision was < 10% and recovery > 90%. As a proof of concept, DBH activity in sera from healthy human volunteers (n = 60) and schizophrenia subjects (n = 60) were successfully determined using this method. There was a significant decrease in sera DBH activity in subjects affected by schizophrenia (p < 0.05) as compared to healthy volunteers. This novel assay employing SPE to separate octopamine and tyramine from the cocktail matrix may have implications for categorising subjects into various risk groups for Schizophrenia, Parkinson’s disease as well as in high throughput screening of inhibitors.",
"title": ""
},
{
"docid": "58d2f5d181095fc59eaf9c7aa58405b0",
"text": "Principle objective of Image enhancement is to process an image so that result is more suitable than original image for specific application. Digital image enhancement techniques provide a multitude of choices for improving the visual quality of images. A frequency domain smoothingsharpening technique is proposed and its impact is assessed to beneficially enhance mammogram images. This technique aims to gain the advantages of enhance and sharpening process that aims to highlight sudden changes in the image intensity, it is usually applied to remove random noise from digital images. The already developed technique also eliminates the drawbacks of each of the two sharpening and smoothing techniques resulting from their individual application in image processing field. The selection of parameters is almost invariant of the type of background tissues and severity of the abnormality, giving significantly improved results even for denser mammographic images. The proposed technique is tested breast X-ray mammograms. The simulated results show that the high potential to advantageously enhance the image contrast hence giving extra aid to radiologists to detect and classify mammograms of breast cancer. Keywords— Fourier transform, Gabor filter, Image, enhancement, Mammograms, Segmentation",
"title": ""
},
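One possible reading of the frequency-domain smoothing-sharpening idea in the passage above is to weight low and high frequencies differently in the Fourier domain, as sketched below; the Gaussian transfer function and the parameters sigma, alpha and beta are illustrative guesses, since the abstract does not specify the exact filter.

```python
import numpy as np

def smooth_sharpen(image, sigma=30.0, alpha=1.0, beta=1.5):
    """Keep the smoothed (low-pass) content while boosting high-frequency
    detail such as edges; beta > 1 sharpens, beta < 1 only smooths."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[-(rows // 2):rows - rows // 2,
                    -(cols // 2):cols - cols // 2]
    lowpass = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    weights = alpha * lowpass + beta * (1.0 - lowpass)
    return np.real(np.fft.ifft2(np.fft.ifftshift(weights * F)))
```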
{
"docid": "d7a5eedd87637a266293595a6f2b924f",
"text": "Regular Expression (RE) matching has important applications in the areas of XML content distribution and network security. In this paper, we present the end-to-end design of a high performance RE matching system. Our system combines the processing efficiency of Deterministic Finite Automata (DFA) with the space efficiency of Non-deterministic Finite Automata (NFA) to scale to hundreds of REs. In experiments with real-life RE data on data streams, we found that a bulk of the DFA transitions are concentrated around a few DFA states. We exploit this fact to cache only the frequent core of each DFA in memory as opposed to the entire DFA (which may be exponential in size). Further, we cluster REs such that REs whose interactions cause an exponential increase in the number of states are assigned to separate groups -- this helps to improve cache hits by controlling the overall DFA size.\n To the best of our knowledge, ours is the first end-to-end system capable of matching REs at high speeds and in their full generality. Through a clever combination of RE grouping, and static and dynamic caching, it is able to perform RE matching at high speeds, even in the presence of limited memory. Through experiments with real-life data sets, we show that our RE matching system convincingly outperforms a state-of-the-art Network Intrusion Detection tool with support for efficient RE matching.",
"title": ""
},
{
"docid": "6b78a4b493e67dc367710a0cbd9e313b",
"text": "The identification of glandular tissue in breast X-rays (mammograms) is important both in assessing asymmetry between left and right breasts, and in estimating the radiation risk associated with mammographic screening. The appearance of glandular tissue in mammograms is highly variable, ranging from sparse streaks to dense blobs. Fatty regions are generally smooth and dark. Texture analysis provides a flexible approach to discriminating between glandular and fatty regions. We have performed a series of experiments investigating the use of granulometry and texture energy to classify breast tissue. Results of automatic classifications have been compared with a consensus annotation provided by two expert breast radiologists. On a set of 40 mammograms, a correct classification rate of 80% has been achieved using texture energy analysis.",
"title": ""
},
{
"docid": "3131a4b458e88b64271b05f5a4be1654",
"text": "They help identify and predict individual, as well as aggregate, behavior, as illustrated by four application domains: direct mail, retail, automobile insurance, and health care.",
"title": ""
},
{
"docid": "d53b8e8ad3365498e0036044c0b9d51e",
"text": "With the rise in global energy demand and environmental concerns about the use of fossil fuels, the need for rapid development of alternative fuels from sustainable, non-food sources is now well acknowledged. The effective utilization of low-cost high-volume agricultural and forest biomass for the production of transportation fuels and bio-based materials will play a vital role in addressing this concern [1]. The processing of lignocellulosic biomass, especially from mixed agricultural and forest sources with varying composition, is currently significantly more challenging than the bioconversion of corn starch or cane sugar to ethanol [1,2]. This is due to the inherent recalcitrance of lignocellulosic biomass to enzymatic and microbial deconstruction, imparted by the partly crystalline nature of cellulose and its close association with hemicellulose and lignin in the plant cell wall [2,3]. Pretreatments that convert raw lignocellulosic biomass to a form amenable to enzymatic degradation are therefore an integral step in the production of bioethanol from this material [4]. Chemical or thermochemical pretreatments act to reduce biomass recalcitrance in various ways. These include hemicellulose removal or degradation, lignin modification and/or delignification, reduction in crystallinity and degree of polymerization of cellulose, and increasing pore volume. Biomass pretreatments are an active focus of industrial and academic research efforts, and various strategies have been developed. Among commonly studied pretreatments, organosolv pretreatment, in which an aqueous organic solvent mixture is used as the pretreatment medium, results in the fractionation of the major biomass components, cellulose, lignin, and hemicellulose into three process streams [5,6]. Cellulose and lignin are recovered as separate solid streams, while hemicelluloses and sugar degradation products such as furfural and hydroxymethylfurfural (HMF) are released as a water-soluble fraction. The combination of ethanol as the solvent and",
"title": ""
},
{
"docid": "28fe178710bfa6487a7919312a854f7e",
"text": "This paper investigates the maximal channel coding rate achievable at a given blocklength and error probability. For general classes of channels new achievability and converse bounds are given, which are tighter than existing bounds for wide ranges of parameters of interest, and lead to tight approximations of the maximal achievable rate for blocklengths n as short as 100. It is also shown analytically that the maximal rate achievable with error probability ¿ isclosely approximated by C - ¿(V/n) Q-1(¿) where C is the capacity, V is a characteristic of the channel referred to as channel dispersion , and Q is the complementary Gaussian cumulative distribution function.",
"title": ""
},
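The approximation quoted above can be evaluated numerically once the capacity C and dispersion V of a specific channel are known. The sketch below does this for a binary symmetric channel, using the standard textbook expressions for C and V; the crossover probability and error probability are arbitrary example values, and the small logarithmic correction term in n is omitted.

```python
import numpy as np
from scipy.stats import norm

def normal_approximation_rate(n, eps, p):
    """Approximate maximal rate C - sqrt(V/n) * Qinv(eps), in bits per
    channel use, for a binary symmetric channel with crossover p."""
    h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)       # binary entropy
    C = 1.0 - h                                          # capacity
    V = p * (1 - p) * np.log2((1 - p) / p) ** 2          # dispersion
    return C - np.sqrt(V / n) * norm.isf(eps)            # Qinv == isf

for n in (100, 1000, 10000):
    print(n, round(normal_approximation_rate(n, eps=1e-3, p=0.11), 4))
```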
{
"docid": "ffadf882ac55d9cb06b77b3ce9a6ad8c",
"text": "Three experimental techniques based on automatic swept-frequency network and impedance analysers were used to measure the dielectric properties of tissue in the frequency range 10 Hz to 20 GHz. The technique used in conjunction with the impedance analyser is described. Results are given for a number of human and animal tissues, at body temperature, across the frequency range, demonstrating that good agreement was achieved between measurements using the three pieces of equipment. Moreover, the measured values fall well within the body of corresponding literature data.",
"title": ""
}
] |
scidocsrr
|
492b93c814e35c4f7ac925ca8fdd6985
|
Consensus Protocols for Networks of Dynamic Agents
|
[
{
"docid": "4c290421dc42c3a5a56c7a4b373063e5",
"text": "In this paper, we provide a graph theoretical framework that allows us to formally define formations of multiple vehicles and the issues arising in uniqueness of graph realizations and its connection to stability of formations. The notion of graph rigidity is crucial in identifying the shape variables of a formation and an appropriate potential function associated with the formation. This allows formulation of meaningful optimization or nonlinear control problems for formation stabilization/tacking, in addition to formal representation of split, rejoin, and reconfiguration maneuvers for multi-vehicle formations. We introduce an algebra that consists of performing some basic operations on graphs which allow creation of larger rigidby-construction graphs by combining smaller rigid subgraphs. This is particularly useful in performing and representing rejoin/split maneuvers of multiple formations in a distributed fashion.",
"title": ""
}
] |
[
{
"docid": "057069a06621b879f88c6d09f8867f77",
"text": "Nowadays, the railway industry is in a position where it is able to exploit the opportunities created by the IIoT (Industrial Internet of Things) and enabling communication technologies under the paradigm of Internet of Trains. This review details the evolution of communication technologies since the deployment of GSM-R, describing the main alternatives and how railway requirements, specifications and recommendations have evolved over time. The advantages of the latest generation of broadband communication systems (e.g., LTE, 5G, IEEE 802.11ad) and the emergence of Wireless Sensor Networks (WSNs) for the railway environment are also explained together with the strategic roadmap to ensure a smooth migration from GSM-R. Furthermore, this survey focuses on providing a holistic approach, identifying scenarios and architectures where railways could leverage better commercial IIoT capabilities. After reviewing the main industrial developments, short and medium-term IIoT-enabled services for smart railways are evaluated. Then, it is analyzed the latest research on predictive maintenance, smart infrastructure, advanced monitoring of assets, video surveillance systems, railway operations, Passenger and Freight Information Systems (PIS/FIS), train control systems, safety assurance, signaling systems, cyber security and energy efficiency. Overall, it can be stated that the aim of this article is to provide a detailed examination of the state-of-the-art of different technologies and services that will revolutionize the railway industry and will allow for confronting today challenges.",
"title": ""
},
{
"docid": "37e82a54df827ddcfdb71fef7c12a47b",
"text": "We tackle a task where an agent learns to navigate in a 2D maze-like environment called XWORLD. In each session, the agent perceives a sequence of raw-pixel frames, a natural language command issued by a teacher, and a set of rewards. The agent learns the teacher’s language from scratch in a grounded and compositional manner, such that after training it is able to correctly execute zero-shot commands: 1) the combination of words in the command never appeared before, and/or 2) the command contains new object concepts that are learned from another task but never learned from navigation. Our deep framework for the agent is trained end to end: it learns simultaneously the visual representations of the environment, the syntax and semantics of the language, and the action module that outputs actions. The zero-shot learning capability of our framework results from its compositionality and modularity with parameter tying. We visualize the intermediate outputs of the framework, demonstrating that the agent truly understands how to solve the problem. We believe that our results provide some preliminary insights on how to train an agent with similar abilities in a 3D environment.",
"title": ""
},
{
"docid": "5666b1a6289f4eac05531b8ff78755cb",
"text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.",
"title": ""
},
{
"docid": "2e40cdb0416198c1ec986e0d3da47fd1",
"text": "The slotted-page structure is a database page format commonly used for managing variable-length records. In this work, we develop a novel \"failure-atomic slotted page structure\" for persistent memory that leverages byte addressability and durability of persistent memory to minimize redundant write operations used to maintain consistency in traditional database systems. Failure-atomic slotted paging consists of two key elements: (i) in-place commit per page using hardware transactional memory and (ii) slot header logging that logs the commit mark of each page. The proposed scheme is implemented in SQLite and compared against NVWAL, the current state-of-the-art scheme. Our performance study shows that our failure-atomic slotted paging shows optimal performance for database transactions that insert a single record. For transactions that touch more than one database page, our proposed slot-header logging scheme minimizes the logging overhead by avoiding duplicating pages and logging only the metadata of the dirty pages. Overall, we find that our failure-atomic slotted-page management scheme reduces database logging overhead to 1/6 and improves query response time by up to 33% compared to NVWAL.",
"title": ""
},
{
"docid": "6ce28e4fe8724f685453a019f253b252",
"text": "This paper is focused on receivables management and possibilities how to use available information technologies. The use of information technologies should make receivables management easier on one hand and on the other hand it makes the processes more efficient. Finally it decreases additional costs and losses connected with enforcing receivables when defaulting debts occur. The situation of use of information technologies is different if the subject is financial or nonfinancial institution. In the case of financial institution loans providing is core business and the processes and their technical support are more sophisticated than in the case of non-financial institutions whose loan providing as invoices is just a supplement to their core business activities. The paper shows use of information technologies in individual cases but it also emphasizes the use of general results for further decision making process. Results of receivables management are illustrated on the data of the Czech Republic.",
"title": ""
},
{
"docid": "6ac996c20f036308f36c7b667babe876",
"text": "Patents are a very useful source of technical information. The public availability of patents over the Internet, with for some databases (eg. Espacenet) the assurance of a constant format, allows the development of high value added products using this information source and provides an easy way to analyze patent information. This simple and powerful tool facilitates the use of patents in academic research, in SMEs and in developing countries providing a way to use patents as a ideas resource thus improving technological innovation.",
"title": ""
},
{
"docid": "f4aa06f7782a22eeb5f30d0ad27eaff9",
"text": "Friction effects are particularly critical for industrial robots, since they can induce large positioning errors, stick-slip motions, and limit cycles. This paper offers a reasoned overview of the main friction compensation techniques that have been developed in the last years, regrouping them according to the adopted kind of control strategy. Some experimental results are reported, to show how the control performances can be affected not only by the chosen method, but also by the characteristics of the available robotic architecture and of the executed task.",
"title": ""
},
{
"docid": "34382f9716058d727f467716350788a7",
"text": "The structure of the brain and the nature of evolution suggest that, despite its uniqueness, language likely depends on brain systems that also subserve other functions. The declarative/procedural (DP) model claims that the mental lexicon of memorized word-specific knowledge depends on the largely temporal-lobe substrates of declarative memory, which underlies the storage and use of knowledge of facts and events. The mental grammar, which subserves the rule-governed combination of lexical items into complex representations, depends on a distinct neural system. This system, which is composed of a network of specific frontal, basal-ganglia, parietal and cerebellar structures, underlies procedural memory, which supports the learning and execution of motor and cognitive skills, especially those involving sequences. The functions of the two brain systems, together with their anatomical, physiological and biochemical substrates, lead to specific claims and predictions regarding their roles in language. These predictions are compared with those of other neurocognitive models of language. Empirical evidence is presented from neuroimaging studies of normal language processing, and from developmental and adult-onset disorders. It is argued that this evidence supports the DP model. It is additionally proposed that \"language\" disorders, such as specific language impairment and non-fluent and fluent aphasia, may be profitably viewed as impairments primarily affecting one or the other brain system. Overall, the data suggest a new neurocognitive framework for the study of lexicon and grammar.",
"title": ""
},
{
"docid": "d83e03beb3ca6e9b02848fd8ad94591e",
"text": "Smartphones and tablets are becoming less expensive and many students already bring them to classes. The increased availability of smartphones and tablets with Internet connectivity and increasing power computing makes possible the use of augmented reality (AR) applications in these mobile devices. This makes it possible for a teacher to develop educational activities that can take advantage of the augmented reality technologies for improving learning activities. The use of information technology made many changes in the way of teaching and learning. We believe that the use of augmented reality will change significantly the teaching activities by enabling the addition of supplementary information that is seen on a mobile device. In this paper, we present several educational activities created using free augmented reality tools that do not require programming knowledge to be used by any teacher. We cover the marker and marker less based augmented reality technologies to show how we can create learning activities to visualize augmented information like animations and 3D objects that help students understand the educational content. There are currently many augmented reality applications. We looked to the most popular augmented-reality eco-systems. Our purpose was to find AR systems that can be used in daily learning activities. For this reason, they must be user friendly, since they are going to be used by teachers that in general do not have programming knowledge. Additionally, we were interested in using augmented reality applications that are open source or free.",
"title": ""
},
{
"docid": "101ecfb3d6a20393d147cd2061414369",
"text": "In this paper we propose a novel volumetric multi-resolution mapping system for RGB-D images that runs on a standard CPU in real-time. Our approach generates a textured triangle mesh from a signed distance function that it continuously updates as new RGB-D images arrive. We propose to use an octree as the primary data structure which allows us to represent the scene at multiple scales. Furthermore, it allows us to grow the reconstruction volume dynamically. As most space is either free or unknown, we allocate and update only those voxels that are located in a narrow band around the observed surface. In contrast to a regular grid, this approach saves enormous amounts of memory and computation time. The major challenge is to generate and maintain a consistent triangle mesh, as neighboring cells in the octree are more difficult to find and may have different resolutions. To remedy this, we present in this paper a novel algorithm that keeps track of these dependencies, and efficiently updates corresponding parts of the triangle mesh. In our experiments, we demonstrate the real-time capability on a large set of RGB-D sequences. As our approach does not require a GPU, it is well suited for applications on mobile or flying robots with limited computational resources.",
"title": ""
},
{
"docid": "624e78153b58a69917d313989b72e6bf",
"text": "In this article we describe a novel Particle Swarm Optimization (PSO) approach to multi-objective optimization (MOO), called Time Variant Multi-Objective Particle Swarm Optimization (TV-MOPSO). TV-MOPSO is made adaptive in nature by allowing its vital parameters (viz., inertia weight and acceleration coefficients) to change with iterations. This adaptiveness helps the algorithm to explore the search space more efficiently. A new diversity parameter has been used to ensure sufficient diversity amongst the solutions of the non-dominated fronts, while retaining at the same time the convergence to the Pareto-optimal front. TV-MOPSO has been compared with some recently developed multi-objective PSO techniques and evolutionary algorithms for 11 function optimization problems, using different performance measures. 2007 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "bba979cd5d69dac380ba1023441460d3",
"text": "This paper presents a model of a particular class of a convertible MAV with fixed wings. This vehicle can operate as a helicopter as well as a conventional airplane, i.e. the aircraft is able to switch their flight configuration from hover to level flight and vice versa by means of a transition maneuver. The paper focuses on finding a controller capable of performing such transition via the tilting of their four rotors. The altitude should remain on a predefined value throughout the transition stage. For this purpose a nonlinear control strategy based on saturations and Lyapunov design is given. The use of this control law enables to make the transition maneuver while maintaining the aircraft in flight. Numerical results are presented, showing the effectiveness of the proposed methodology to deal with the transition stage.",
"title": ""
},
{
"docid": "1b78fd9e2d90393ee877c49f582d23ee",
"text": "Many “big data” applications need to act on data arriving in real time. However, current programming models for distributed stream processing are relatively low-level, often leaving the user to worry about consistency of state across the system and fault recovery. Furthermore, the models that provide fault recovery do so in an expensive manner, requiring either hot replication or long recovery times. We propose a new programming model, discretized streams (D-Streams), that offers a high-level functional API, strong consistency, and efficient fault recovery. D-Streams support a new recovery mechanism that improves efficiency over the traditional replication and upstream backup schemes in streaming databases— parallel recovery of lost state—and unlike previous systems, also mitigate stragglers. We implement D-Streams as an extension to the Spark cluster computing engine that lets users seamlessly intermix streaming, batch and interactive queries. Our system can process over 60 million records/second at sub-second latency on 100 nodes.",
"title": ""
},
{
"docid": "118738ca4b870e164c7be53e882a9ab4",
"text": "IA. Cause and Effect . . . . . . . . . . . . . . 465 1.2. Prerequisites of Selforganization . . . . . . . 467 1.2.3. Evolut ion Must S ta r t f rom R andom Even ts 467 1.2.2. Ins t ruc t ion Requires In format ion . . . . 467 1.2.3. In format ion Originates or Gains Value by S e l e c t i o n . . . . . . . . . . . . . . . 469 1.2.4. Selection Occurs wi th Special Substances under Special Conditions . . . . . . . . 470",
"title": ""
},
{
"docid": "6c584b512e51b3dd4f16a9c753ac2fc5",
"text": "Cloud computing and virtualization technologies play important roles in modern service-oriented computing paradigm. More conventional services are being migrated to virtualized computing environments to achieve flexible deployment and high availability. We introduce a schedule algorithm based on fuzzy inference system (FIS), for global container resource allocation by evaluating nodes' statuses using FIS. We present the approaches to build containerized test environment and validates the effectiveness of the resource allocation policies by running sample use cases. Experiment results show that the presented infrastructure and schema derive optimal resource configurations and significantly improves the performance of the cluster.",
"title": ""
},
{
"docid": "d36021ff647a2f2c74dd35a847847a09",
"text": "An ontology is a crucial factor for the success of the Semantic Web and other knowledge-based systems in terms of share and reuse of domain knowledge. However, there are a few concrete ontologies within actual knowledge domains including learning domains. In this paper, we develop an ontology which is an explicit formal specification of concepts and semantic relations among them in philosophy. We call it a philosophy ontology. Our philosophy is a formal specification of philosophical knowledge including knowledge of contents of classical texts of philosophy. We propose a methodology, which consists of detailed guidelines and templates, for constructing text-based ontology. Our methodology consists of 3 major steps and 14 minor steps. To implement the philosophy ontology, we develop an ontology management system based on Topic Maps. Our system includes a semi-automatic translator for creating Topic Map documents from the output of conceptualization steps and other tools to construct, store, retrieve ontologies based on Topic Maps. Our methodology and tools can be applied to other learning domain ontologies, such as history, literature, arts, and music. © 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "29cceb730e663c08e20107b6d34ced8b",
"text": "Cumulative citation recommendation refers to the task of filtering a time-ordered corpus for documents that are highly relevant to a predefined set of entities. This task has been introduced at the TREC Knowledge Base Acceleration track in 2012, where two main families of approaches emerged: classification and ranking. In this paper we perform an experimental comparison of these two strategies using supervised learning with a rich feature set. Our main finding is that ranking outperforms classification on all evaluation settings and metrics. Our analysis also reveals that a ranking-based approach has more potential for future improvements.",
"title": ""
},
{
"docid": "a30d9dbac3f0d988fd15884cda3ecf93",
"text": "In this review article, the authors have summarized the published literature supporting the value of video game use on the following topics: improvement of cognitive functioning in older individuals, potential reasons for the positive effects of video game use in older age, and psychological factors related to using video games in older age. It is important for geriatric researchers and practitioners to identify approaches and interventions that minimize the negative effects of the various changes that occur within the aging body. Generally speaking, biological aging results in a decline of both physical and cognitive functioning.1–3 However, a growing body of literature indicates that taking part in physically and/or mentally stimulating activities may contribute to the maintenance of cognitive abilities and even lead to acquiring cognitive gains.4 It is important to identify ways to induce cognitive improvements in older age, especially considering that the population of the United States (U.S.) is aging rapidly, with the number of people age 65 and older expected to increase to almost 84 million by 2050.5 This suggests that there will likely be a rapid escalation in the number of older individuals living with age-related cognitive impairment. It is currently estimated that there are 5.5 million people in the U.S. who have been diagnosed with Alzheimer’s disease,6 which is one of the most common forms of dementia.7 Thus, research aimed at helping older adults maintain good cognitive functioning is highly needed. Due to space limitations, this article is not meant to include all of the available research in this area; it contains mainly supporting evidence on the effects of video game use among older adults. Some opposing evidence is briefly mentioned when covering whether the skills acquired during video game training transfer to non-practiced tasks (which is a particularly controversial topic with ample mixed evidence).",
"title": ""
},
{
"docid": "d49bdbd1d97d663ac1b9db9cb2c28fff",
"text": "BACKGROUND\nPlantar fasciitis (PF) is reported in different sports mainly in running and soccer athletes. Purpose of this study is to conduct a systematic review of published literature concerning the diagnosis and treatment of PF in both recreational and élite athletes. The review was conducted and reported in accordance with the PRISMA statement.\n\n\nMETHODS\nThe following electronic databases were searched: PubMed, Cochrane Library and Scopus. As far as PF diagnosis, we investigated the electronic databases from January 2006 to June 2016, whereas in considering treatments all data in literature were investigated.\n\n\nRESULTS\nFor both diagnosis and treatment, 17 studies matched inclusion criteria. The results have highlighted that the most frequently used diagnostic techniques were Ultrasonography and Magnetic Resonance Imaging. Conventional, complementary, and alternative treatment approaches were assessed.\n\n\nCONCLUSIONS\nIn reviewing literature, we were unable to find any specific diagnostic algorithm for PF in athletes, due to the fact that no different diagnostic strategies were used for athletes and non-athletes. As for treatment, a few literature data are available and it makes difficult to suggest practice guidelines. Specific studies are necessary to define the best treatment algorithm for both recreational and élite athletes.\n\n\nLEVEL OF EVIDENCE\nIb.",
"title": ""
},
{
"docid": "447bfee37117b77534abe2cf6cfd8a17",
"text": "Detailed characterization of the cell types in the human brain requires scalable experimental approaches to examine multiple aspects of the molecular state of individual cells, as well as computational integration of the data to produce unified cell-state annotations. Here we report improved high-throughput methods for single-nucleus droplet-based sequencing (snDrop-seq) and single-cell transposome hypersensitive site sequencing (scTHS-seq). We used each method to acquire nuclear transcriptomic and DNA accessibility maps for >60,000 single cells from human adult visual cortex, frontal cortex, and cerebellum. Integration of these data revealed regulatory elements and transcription factors that underlie cell-type distinctions, providing a basis for the study of complex processes in the brain, such as genetic programs that coordinate adult remyelination. We also mapped disease-associated risk variants to specific cellular populations, which provided insights into normal and pathogenic cellular processes in the human brain. This integrative multi-omics approach permits more detailed single-cell interrogation of complex organs and tissues.",
"title": ""
}
] |
scidocsrr
|
1988ed183f2ffb98927d4ad0aaff64a5
|
Paxos Quorum Leases: Fast Reads Without Sacrificing Writes
|
[
{
"docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0",
"text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.",
"title": ""
},
{
"docid": "f10660b168700e38e24110a575b5aafa",
"text": "While the use of MapReduce systems (such as Hadoop) for large scale data analysis has been widely recognized and studied, we have recently seen an explosion in the number of systems developed for cloud data serving. These newer systems address \"cloud OLTP\" applications, though they typically do not support ACID transactions. Examples of systems proposed for cloud serving use include BigTable, PNUTS, Cassandra, HBase, Azure, CouchDB, SimpleDB, Voldemort, and many others. Further, they are being applied to a diverse range of applications that differ considerably from traditional (e.g., TPC-C like) serving workloads. The number of emerging cloud serving systems and the wide range of proposed applications, coupled with a lack of apples-to-apples performance comparisons, makes it difficult to understand the tradeoffs between systems and the workloads for which they are suited. We present the \"Yahoo! Cloud Serving Benchmark\" (YCSB) framework, with the goal of facilitating performance comparisons of the new generation of cloud data serving systems. We define a core set of benchmarks and report results for four widely used systems: Cassandra, HBase, Yahoo!'s PNUTS, and a simple sharded MySQL implementation. We also hope to foster the development of additional cloud benchmark suites that represent other classes of applications by making our benchmark tool available via open source. In this regard, a key feature of the YCSB framework/tool is that it is extensible--it supports easy definition of new workloads, in addition to making it easy to benchmark new systems.",
"title": ""
}
] |
[
{
"docid": "b4554b814d889806df0a5ff50fb0e0f8",
"text": "Recent work on searching the Semantic Web has yielded a wide range of approaches with respect to the underlying search mechanisms, results management and presentation, and style of input. Each approach impacts upon the quality of the information retrieved and the user’s experience of the search process. However, despite the wealth of experience accumulated from evaluating Information Retrieval (IR) systems, the evaluation of Semantic Web search systems has largely been developed in isolation from mainstream IR evaluation with a far less unified approach to the design of evaluation activities. This has led to slow progress and low interest when compared to other established evaluation series, such as TREC for IR or OAEI for Ontology Matching. In this paper, we review existing approaches to IR evaluation and analyse evaluation activities for Semantic Web search systems. Through a discussion of these, we identify their weaknesses and highlight the future need for a more comprehensive evaluation framework that addresses current limitations.",
"title": ""
},
{
"docid": "3fe585dbb422a88f41f1100f9b2dd477",
"text": "Synchronous reluctance motor (SynRM) is a potential candidate for high starting torque requirements of traction drives. Any demagnetization risk is prevented since there is not any permanent magnet on the rotor or stator structure. On the other hand, the high rotor starting current problem, that is common in induction machines is ignored since there is not any winding on the rotor. Indeed, absence of permanent magnet in motor structure and its simplicity leads to lower finished cost in comparison with other competitors. Also high average torque and low ripple content is important in electrical drives employed in electric vehicle applications. High amount of torque ripple is one of the problems of SynRM, which is considered in many researches. In this paper, a new design of the SynRM is proposed in order to reduce the torque ripple while maintaining the average torque. For this purpose, auxiliary flux barriers in the rotor structure are employed that reduce the torque ripple significantly. Proposed design electromagnetic performance is simulated by finite element analysis. It is shown that the proposed design reduces torque ripple significantly without any reduction in average torque.",
"title": ""
},
{
"docid": "2ddc4919771402dabedd2020649d1938",
"text": "Increase in energy demand has made the renewable resources more attractive. Additionally, use of renewable energy sources reduces combustion of fossil fuels and the consequent CO2 emission which is the principal cause of global warming. The concept of photovoltaic-Wind hybrid system is well known and currently thousands of PV-Wind based power systems are being deployed worldwide, for providing power to small, remote, grid-independent applications. This paper shows the way to design the aspects of a hybrid power system that will target remote users. It emphasizes the renewable hybrid power system to obtain a reliable autonomous system with the optimization of the components size and the improvement of the cost. The system can provide electricity for a remote located village. The main power of the hybrid system comes from the photovoltaic panels and wind generators, while the batteries are used as backup units. The optimization software used for this paper is HOMER. HOMER is a design model that determines the optimal architecture and control strategy of the hybrid system. The simulation results indicate that the proposed hybrid system would be a feasible solution for distributed generation of electric power for stand-alone applications at remote locations",
"title": ""
},
{
"docid": "7548b99b332677e01ca6d74592f62ab1",
"text": "This paper presents the prototype of a new computer simulator for the humanoid robot iCub. The iCub is a new open-source humanoid robot developed as a result of the \"RobotCub\" project, a collaborative European project aiming at developing a new open-source cognitive robotics platform. The iCub simulator has been developed as part of a joint effort with the European project \"ITALK\" on the integration and transfer of action and language knowledge in cognitive robots. This is available open-source to all researchers interested in cognitive robotics experiments with the iCub humanoid platform.",
"title": ""
},
{
"docid": "2b38ac7d46a1b3555fef49a4e02cac39",
"text": "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.",
"title": ""
},
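The metapath2vec passage above describes meta-path-based random walks that are later fed to a heterogeneous skip-gram model. As a rough illustration only (not the authors' implementation; the toy graph, node-type labels and function name below are hypothetical), a single walk constrained to a symmetric meta-path such as A-P-A could be sketched in Python as follows.

```python
import random

def metapath_walk(graph, node_type, start, metapath, walk_length):
    """One meta-path-guided random walk (illustrative sketch).

    graph:     dict mapping each node to a list of neighbouring nodes
    node_type: dict mapping each node to a type label, e.g. 'A' (author), 'P' (paper)
    metapath:  symmetric list of type labels such as ['A', 'P', 'A']
    """
    walk = [start]
    for step in range(1, walk_length):
        wanted = metapath[step % (len(metapath) - 1)]           # node type required at this hop
        candidates = [v for v in graph[walk[-1]] if node_type[v] == wanted]
        if not candidates:                                      # dead end: stop the walk early
            break
        walk.append(random.choice(candidates))
    return walk

# Toy heterogeneous graph: two authors (a1, a2) and two papers (p1, p2).
graph = {'a1': ['p1'], 'a2': ['p1', 'p2'], 'p1': ['a1', 'a2'], 'p2': ['a2']}
node_type = {'a1': 'A', 'a2': 'A', 'p1': 'P', 'p2': 'P'}
print(metapath_walk(graph, node_type, 'a1', ['A', 'P', 'A'], walk_length=7))
```

In a full pipeline, many such walks per node would then serve as the "sentences" from which a skip-gram model learns the node embeddings.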
{
"docid": "16a1f15e8e414b59a230fb4a28c53cc7",
"text": "In this study we examined whether the effects of mental fatigue on behaviour are due to reduced action monitoring as indexed by the error related negativity (Ne/ERN), N2 and contingent negative variation (CNV) event-related potential (ERP) components. Therefore, we had subjects perform a task, which required a high degree of action monitoring, continuously for 2h. In addition we tried to relate the observed behavioural and electrophysiological changes to motivational processes and individual differences. Changes in task performance due to fatigue were accompanied by a decrease in Ne/ERN and N2 amplitude, reflecting impaired action monitoring, as well as a decrease in CNV amplitude which reflects reduced response preparation with increasing fatigue. Increasing the motivational level of our subjects resulted in changes in behaviour and brain activity that were different for individual subjects. Subjects that increased their performance accuracy displayed an increase in Ne/ERN amplitude, while subjects that increased their response speed displayed an increase in CNV amplitude. We will discuss the effects prolonged task performance on the behavioural and physiological indices of action monitoring, as well as the relationship between fatigue, motivation and individual differences.",
"title": ""
},
{
"docid": "5aa39257fd9914cd27abd04d8279d10e",
"text": "Many real-world planning problems require generating plans that maximize the parallelism inherent in a problem. There are a number of partial-order planners that generate such plans; however, in most of these planners it is unclear under what conditions the resulting plans will be correct and whether the plaltner can even find a plan if one exists. This paper identifies the underlying assumptions about when a partial plan can be executed in parallel, defines the classes of parallel plans that can be generated by different partialorder planners, and describes the changes required to turn ucPoP into a parallel execution planner. In \"addition, we describe how this planner can be applied to the problem of query access planning, where parallel execution produces ubstantial reductions in overall execution time.",
"title": ""
},
{
"docid": "47bf54c0d51596f39929e8f3e572a051",
"text": "Parameterizations of triangulated surfaces are used in an increasing number of mesh processing applications for various purposes. Although demands vary, they are often required to preserve the surface metric and thus minimize angle, area and length deformation. However, most of the existing techniques primarily target at angle preservation while disregarding global area deformation. In this paper an energy functional is proposed, that quantifies angle and global area deformations simultaneously, while the relative importance between angle and area preservation can be controlled by the user through a parameter. We show how this parameter can be chosen to obtain parameterizations, that are optimized for an uniform sampling of the surface of a model. Maps obtained by minimizing this energy are well suited for applications that desire an uniform surface sampling, like re-meshing or mapping regularly patterned textures. Besides being invariant under rotation and translation of the domain, the energy is designed to prevent face flips during minimization and does not require a fixed boundary in the parameter domain. Although the energy is nonlinear, we show how it can be minimized efficiently using non-linear conjugate gradient methods in a hierarchical optimization framework and prove the convergence of the algorithm. The ability to control the tradeoff between the degree of angle and global area preservation is demonstrated for several models of varying complexity.",
"title": ""
},
{
"docid": "6a4815ee043e83994e4345b6f4352198",
"text": "Object detection – the computer vision task dealing with detecting instances of objects of a certain class (e.g ., ’car’, ’plane’, etc.) in images – attracted a lot of attention from the community during the last 5 years. This strong interest can be explained not only by the importance this task has for many applications but also by the phenomenal advances in this area since the arrival of deep convolutional neural networks (DCNN). This article reviews the recent literature on object detection with deep CNN, in a comprehensive way, and provides an in-depth view of these recent advances. The survey covers not only the typical architectures (SSD, YOLO, Faster-RCNN) but also discusses the challenges currently met by the community and goes on to show how the problem of object detection can be extended. This survey also reviews the public datasets and associated state-of-the-art algorithms.",
"title": ""
},
{
"docid": "ccd663355ff6070b3668580150545cea",
"text": "In this paper, the user effects on mobile terminal antennas at 28 GHz are statistically investigated with the parameters of body loss, coverage efficiency, and power in the shadow. The data are obtained from the measurements of 12 users in data and talk modes, with the antenna placed on the top and bottom of the chassis. In the measurements, the users hold the phone naturally. The radiation patterns and shadowing regions are also studied. It is found that a significant amount of power can propagate into the shadow of the user by creeping waves and diffractions. A new metric is defined to characterize this phenomenon. A mean body loss of 3.2–4 dB is expected in talk mode, which is also similar to the data mode with the bottom antenna. A body loss of 1 dB is expected in data mode with the top antenna location. The variation of the body loss between the users at 28 GHz is less than 2 dB, which is much smaller than that of the conventional cellular bands below 3 GHz. The coverage efficiency is significantly reduced in talk mode, but only slightly affected in data mode.",
"title": ""
},
{
"docid": "4d9cf5a29ebb1249772ebb6a393c5a4e",
"text": "This paper presents a novel strategy for high-fidelity image restoration by characterizing both local smoothness and nonlocal self-similarity of natural images in a unified statistical manner. The main contributions are three-fold. First, from the perspective of image statistics, a joint statistical modeling (JSM) in an adaptive hybrid space-transform domain is established, which offers a powerful mechanism of combining local smoothness and nonlocal self-similarity simultaneously to ensure a more reliable and robust estimation. Second, a new form of minimization functional for solving the image inverse problem is formulated using JSM under a regularization-based framework. Finally, in order to make JSM tractable and robust, a new Split Bregman-based algorithm is developed to efficiently solve the above severely underdetermined inverse problem associated with theoretical proof of convergence. Extensive experiments on image inpainting, image deblurring, and mixed Gaussian plus salt-and-pepper noise removal applications verify the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "afbd52acb39600e8a0804f2140ebf4fc",
"text": "This paper presents the case study of a non-intrusive porting of a monolithic C++ library for real-time 3D hand tracking, to the domain of edge-based computation. Towards a proof of concept, the case study considers a pair of workstations, a computationally powerful and a computationallyweak one. Bywrapping the C++ library in Java container and by capitalizing on a Java-based offloading infrastructure that supports both CPU and GPGPU computations, we are able to establish automatically the required serverclient workflow that best addresses the resource allocation problem in the effort to execute from the weak workstation. As a result, the weak workstation can perform well at the task, despite lacking the sufficient hardware to do the required computations locally. This is achieved by offloading computations which rely on GPGPU, to the powerful workstation, across the network that connects them. We show the edge-based computation challenges associated with the information flow of the ported algorithm, demonstrate how we cope with them, and identify what needs to be improved for achieving even better performance.",
"title": ""
},
{
"docid": "a473465e2e567f260089bb39806f79a6",
"text": "The objective of the study presented was to determine the prevalence of oral problems--eg, dental erosion, rough surfaces, pain--among young competitive swimmers in India, because no such studies are reported. Its design was a cross-sectional study with a questionnaire and clinical examination protocols. It was conducted in a community setting on those who were involved in regular swimming in pools. Questionnaires were distributed to swimmers at the 25th State Level Swimming Competition, held at Thane Municipal Corporation's Swimming Pool, India. Those who returned completed questionnaires were also clinically examined. Questionnaires were analyzed and clinical examinations focused on either the presence or absence of dental erosions and rough surfaces. Reported results were on 100 swimmers who met the inclusion criteria. They included 75 males with a mean age of 18.6 ± 6.3 years and 25 females with a mean age of 15.3 ± 7.02 years. Among them, 90% showed dental erosion, 94% exhibited rough surfaces, and 88% were found to be having tooth pain of varying severity. Erosion and rough surfaces were found to be directly proportional to the duration of swimming. The authors concluded that the prevalence of dental erosion, rough surfaces, and pain is found to be very common among competitive swimmers. They recommend that swimmers practice good preventive measures and clinicians evaluate them for possible swimmer's erosion.",
"title": ""
},
{
"docid": "38024169edcf1272efc7013b68d1c5cb",
"text": "Fractal dimension measures the geometrical complexity of images. Lacunarity being a measure of spatial heterogeneity can be used to differentiate between images that have similar fractal dimensions but different appearances. This paper presents a method to combine fractal dimension (FD) and lacunarity for better texture recognition. For the estimation of the fractal dimension an improved algorithm is presented. This algorithm uses new box-counting measure based on the statistical distribution of the gray levels of the ‘‘boxes’’. Also for the lacunarity estimation, new and faster gliding-box method is proposed, which utilizes summed area tables and Levenberg–Marquardt method. Methods are tested using Brodatz texture database (complete set), a subset of the Oulu rotation invariant texture database (Brodatz subset), and UIUC texture database (partial). Results from the tests showed that combining fractal dimension and lacunarity can improve recognition of textures. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
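The fractal-dimension passage above relies on box counting plus a gliding-box lacunarity estimate. The sketch below shows only a simplified classical box-counting estimator on a binary image (the helper name and toy input are assumptions; it does not reproduce the paper's gray-level statistical measure, its summed-area-table lacunarity, or the Levenberg–Marquardt fitting).

```python
import numpy as np

def box_counting_dimension(img, box_sizes=(2, 4, 8, 16, 32)):
    """Classical box-counting estimate; img is a 2D boolean array (True = structure)."""
    counts = []
    for s in box_sizes:
        h, w = img.shape
        cropped = img[:h - h % s, :w - w % s]          # crop so s x s boxes tile exactly
        boxes = cropped.reshape(cropped.shape[0] // s, s, cropped.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())    # boxes containing any foreground pixel
    # The fractal dimension is the negative slope of log N(s) versus log s.
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

rng = np.random.default_rng(0)
toy = rng.random((256, 256)) > 0.5                     # placeholder input; dense noise gives FD close to 2
print(box_counting_dimension(toy))
```

A gray-level (differential) variant would replace the binary occupancy test per box with a count derived from the box's minimum and maximum intensities, which is the direction the passage's improved measure takes.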
{
"docid": "458633abcbb030b9e58e432d5b539950",
"text": "In many computer vision tasks, we expect a particular behavior of the output with respect to rotations of the input image. If this relationship is explicitly encoded, instead of treated as any other variation, the complexity of the problem is decreased, leading to a reduction in the size of the required model. In this paper, we propose the Rotation Equivariant Vector Field Networks (RotEqNet), a Convolutional Neural Network (CNN) architecture encoding rotation equivariance, invariance and covariance. Each convolutional filter is applied at multiple orientations and returns a vector field representing magnitude and angle of the highest scoring orientation at every spatial location. We develop a modified convolution operator relying on this representation to obtain deep architectures. We test RotEqNet on several problems requiring different responses with respect to the inputs’ rotation: image classification, biomedical image segmentation, orientation estimation and patch matching. In all cases, we show that RotEqNet offers extremely compact models in terms of number of parameters and provides results in line to those of networks orders of magnitude larger.",
"title": ""
},
{
"docid": "a488509590cd496669bdcc3ce8cc5fe5",
"text": "Ghrelin is an endogenous ligand for the growth hormone secretagogue receptor and a well-characterized food intake regulatory peptide. Hypothalamic ghrelin-, neuropeptide Y (NPY)-, and orexin-containing neurons form a feeding regulatory circuit. Orexins and NPY are also implicated in sleep-wake regulation. Sleep responses and motor activity after central administration of 0.2, 1, or 5 microg ghrelin in free-feeding rats as well as in feeding-restricted rats (1 microg dose) were determined. Food and water intake and behavioral responses after the light onset injection of saline or 1 microg ghrelin were also recorded. Light onset injection of ghrelin suppressed non-rapid-eye-movement sleep (NREMS) and rapid-eye-movement sleep (REMS) for 2 h. In the first hour, ghrelin induced increases in behavioral activity including feeding, exploring, and grooming and stimulated food and water intake. Ghrelin administration at dark onset also elicited NREMS and REMS suppression in hours 1 and 2, but the effect was not as marked as that, which occurred in the light period. In hours 3-12, a secondary NREMS increase was observed after some doses of ghrelin. In the feeding-restricted rats, ghrelin suppressed NREMS in hours 1 and 2 and REMS in hours 3-12. Data are consistent with the notion that ghrelin has a role in the integration of feeding, metabolism, and sleep regulation.",
"title": ""
},
{
"docid": "05049ac85552c32f2c98d7249a038522",
"text": "Remote sensing tools are increasingly being used to survey forest structure. Most current methods rely on GPS signals, which are available in above-canopy surveys or in below-canopy surveys of open forests, but may be absent in below-canopy environments of dense forests. We trialled a technology that facilitates mobile surveys in GPS-denied below-canopy forest environments. The platform consists of a battery-powered UAV mounted with a LiDAR. It lacks a GPS or any other localisation device. The vehicle is capable of an 8 min flight duration and autonomous operation but was remotely piloted in the present study. We flew the UAV around a 20 m × 20 m patch of roadside trees and developed postprocessing software to estimate the diameter-at-breast-height (DBH) of 12 trees that were detected by the LiDAR. The method detected 73% of trees greater than 200 mm DBH within 3 m of the flight path. Smaller and more distant trees could not be detected reliably. The UAV-based DBH estimates of detected trees were positively correlated with the humanbased estimates (R = 0.45, p = 0.017) with a median absolute error of 18.1%, a root-meansquare error of 25.1% and a bias of −1.2%. We summarise the main current limitations of this technology and outline potential solutions. The greatest gains in precision could be achieved through use of a localisation device. The long-term factor limiting the deployment of below-canopy UAV surveys is likely to be battery technology.",
"title": ""
},
{
"docid": "3baafb85e1b50d759f1a6033295dc9fd",
"text": "A 12-year-old girl with a history of alopecia areata and vitiligo presented with an asymptomatic brownish dirt-like lesion on the left postauricular skin of approximately 3 years of duration. The patient and her mother tried to clean the “dirt” with water and soap without success. There was no history of rapid weight gain. She had no history of an inflammatory dermatosis in the affected area. Physical examination revealed a dirt-like brownish plaque on the left postauricular skin (Figure 1). Rubbing of the lesion with a 70% isopropyl alcohol-soaked gauze pad and pressure resulted in complete disappearance of the lesion (Figure 2). A diagnosis of terra firma-forme dermatosis was, thus, confirmed. Terra firma-forme dermatosis is characterized by an asymptomatic brownish-black, dirt-like patch/plaque. Affected individuals often have normal hygiene habits. Characteristically, the lesion cannot be removed by conventional washing with soap and water but can be removed by wiping with isopropyl alcohol while applying some pressure. Terra firmaforme dermatosis is most frequently seen in prepubertal children and adolescents. It is believed that the condition results from delayed maturation of keratinocytes with incomplete development of keratin squames, and retention of keratinocytes and melanin within the epidermis. Sites of predilection include the neck, followed by the ankles and trunk. The main differential diagnoses are dermatosis neglecta and acanthosis nigricans. Dermatosis neglecta typically affects individuals of any age with neglected hygiene. The lesions can be removed with normal washing with soap and water as well as with alcohol swab or cotton ball. The lesion of acanthosis nigricans consists of dark, velvety thickening of the skin usually on the nape and sides of the neck. The condition is most commonly associated with obesity. The hyperpigmentation or “dirt” cannot be removed either by normal washing with soap and water or alcohol swab or cotton ball. ■",
"title": ""
},
{
"docid": "5764bcf220280c4c3be28375cdcbce26",
"text": "This paper introduces a data-driven process for designing and fabricating materials with desired deformation behavior. Our process starts with measuring deformation properties of base materials. For each base material we acquire a set of example deformations, and we represent the material as a non-linear stress-strain relationship in a finite-element model. We have validated our material measurement process by comparing simulations of arbitrary stacks of base materials with measured deformations of fabricated material stacks. After material measurement, our process continues with designing stacked layers of base materials. We introduce an optimization process that finds the best combination of stacked layers that meets a user's criteria specified by example deformations. Our algorithm employs a number of strategies to prune poor solutions from the combinatorial search space. We demonstrate the complete process by designing and fabricating objects with complex heterogeneous materials using modern multi-material 3D printers.",
"title": ""
},
{
"docid": "9bcc2b61333bd0490857edac99e797c7",
"text": "The performance of value and policy iteration can be dramatically improved by eliminating redundant or useless backups, and by backing up states in the right order. We study several methods designed to accelerate these iterative solvers, including prioritization, partitioning, and variable reordering. We generate a family of algorithms by combining several of the methods discussed, and present extensive empirical evidence demonstrating that performance can improve by several orders of magnitude for many problems, while preserving accuracy and convergence guarantees.",
"title": ""
}
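The passage above surveys prioritization, partitioning and reordering as ways to avoid redundant backups in value and policy iteration. A minimal prioritized-sweeping-style value iteration is sketched below for illustration (the toy MDP, priority rule and function names are assumptions, not the specific methods evaluated in the paper).

```python
import heapq

def bellman_backup(V, mdp, gamma, s):
    """Greedy one-step backup; mdp[s][a] is a list of (prob, next_state, reward)."""
    return max(sum(p * (r + gamma * V[sp]) for p, sp, r in outcomes)
               for outcomes in mdp[s].values())

def prioritized_value_iteration(mdp, predecessors, gamma=0.95, tol=1e-6):
    V = {s: 0.0 for s in mdp}
    # Seed the queue with every state's Bellman error (negated: heapq is a min-heap).
    heap = [(-abs(bellman_backup(V, mdp, gamma, s) - V[s]), s) for s in mdp]
    heapq.heapify(heap)
    while heap:
        _, s = heapq.heappop(heap)
        new_v = bellman_backup(V, mdp, gamma, s)
        if abs(new_v - V[s]) < tol:
            continue                                   # stale entry, nothing to update
        V[s] = new_v
        # Only predecessors of s can see their Bellman error change after this backup.
        for p in predecessors.get(s, ()):
            err = abs(bellman_backup(V, mdp, gamma, p) - V[p])
            if err > tol:
                heapq.heappush(heap, (-err, p))
    return V

# Toy two-state chain: s0 --go--> s1 with reward 1; s1 loops on itself with reward 0.
mdp = {
    's0': {'go': [(1.0, 's1', 1.0)]},
    's1': {'stay': [(1.0, 's1', 0.0)]},
}
predecessors = {'s0': set(), 's1': {'s0', 's1'}}
print(prioritized_value_iteration(mdp, predecessors))  # roughly {'s0': 1.0, 's1': 0.0}
```

Backing up the state with the largest Bellman error first, and only re-queuing its predecessors, is what lets this kind of scheme skip the redundant sweeps that plain synchronous value iteration performs.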
] |
scidocsrr
|
02ccf5cf5dc6a7976ba1f7284a38722a
|
Revealing Dimensions of Thinking in Open-Ended Self-Descriptions: An Automated Meaning Extraction Method for Natural Language.
|
[
{
"docid": "627b14801c8728adf02b75e8eb62896f",
"text": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies.",
"title": ""
}
] |
[
{
"docid": "0e6c562a1760344ef59e40d7774b56fe",
"text": "Sparsity is widely observed in convolutional neural networks by zeroing a large portion of both activations and weights without impairing the result. By keeping the data in a compressed-sparse format, the energy consumption could be considerably cut down due to less memory traffic. However, the wide SIMD-like MAC engine adopted in many CNN accelerators can not support the compressed input due to the data misalignment. In this work, a novel Dual Indexing Module (DIM) is proposed to efficiently handle the alignment issue where activations and weights are both kept in compressed-sparse format. The DIM is implemented in a representative SIMD-like CNN accelerator, and able to exploit both compressed-sparse activations and weights. The synthesis results with 40nm technology have shown that DIM can enhance up to 46% of energy consumption and 55.4% Energy-Delay-Product (EDP).",
"title": ""
},
{
"docid": "159c836d811aef6ede9a1c178095d947",
"text": "One of the more interesting developments recently gaining popularity in the server-side JavaScript space is Node.js. It's a framework for developing high-performance, concurrent programs that don't rely on the mainstream multithreading approach but use asynchronous I/O with an event-driven programming model.",
"title": ""
},
{
"docid": "c20549d78c2b5d393a59fa83718e1004",
"text": "This paper studies gradient-based schemes for image denoising and deblurring problems based on the discretized total variation (TV) minimization model with constraints. We derive a fast algorithm for the constrained TV-based image deburring problem. To achieve this task, we combine an acceleration of the well known dual approach to the denoising problem with a novel monotone version of a fast iterative shrinkage/thresholding algorithm (FISTA) we have recently introduced. The resulting gradient-based algorithm shares a remarkable simplicity together with a proven global rate of convergence which is significantly better than currently known gradient projections-based methods. Our results are applicable to both the anisotropic and isotropic discretized TV functionals. Initial numerical results demonstrate the viability and efficiency of the proposed algorithms on image deblurring problems with box constraints.",
"title": ""
},
{
"docid": "065ca3deb8cb266f741feb67e404acb5",
"text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet",
"title": ""
},
{
"docid": "3ab5a2a767e1d51996820a1fdda94fef",
"text": "Lattice based Cryptography is an important sector which is ensuring cloud data security in present world. It provides a stronger belief of security in a way that the average-case of certain problem is akin to the worst-case of those problems. There are strong indications that these problems will remain safe under the availability of quantum computers, unlike the widely used issues like integer-factorization and discrete logarithm upon which most of the typical cryptosystems relies. In this paper, we tend to discuss the security dimension of Lattice based cryptography whose power lies within the hardness of lattice problems. Goldreich-Goldwasser-Halevi (GGH) public-key cryptosystem is an exemplar of lattice-based cryptosystems. Its security depends on the hardness of lattice issues. GGH is easy to understand and is widely used due to its straightforward data encoding and decoding procedures. Phong Nguyen, in his paper showed that there's a significant flaw within the style of the GGH scheme as ciphertext leaks information on the plaintext. Due to this flaw the practical usage of GGH cryptosystem is limiting to some extent. So as to enhance the safety and usefulness of the GGH cryptosystem, in this paper we proposed an improvised GGH encryption and decryption functions which prevented information leakage. We have implemented a package in MATLAB for the improvement of GGH cryptosystem. In our work we proposed some methods to improve GGH algorithm and make it more secure and information leakage resistant.",
"title": ""
},
{
"docid": "4bc85c4035c8bd4d502b13613147272c",
"text": "We present the first real-time method for refinement of depth data using shape-from-shading in general uncontrolled scenes. Per frame, our real-time algorithm takes raw noisy depth data and an aligned RGB image as input, and approximates the time-varying incident lighting, which is then used for geometry refinement. This leads to dramatically enhanced depth maps at 30Hz. Our algorithm makes few scene assumptions, handling arbitrary scene objects even under motion. To enable this type of real-time depth map enhancement, we contribute a new highly parallel algorithm that reformulates the inverse rendering optimization problem in prior work, allowing us to estimate lighting and shape in a temporally coherent way at video frame-rates. Our optimization problem is minimized using a new regular grid Gauss-Newton solver implemented fully on the GPU. We demonstrate results showing enhanced depth maps, which are comparable to offline methods but are computed orders of magnitude faster, as well as baseline comparisons with online filtering-based methods. We conclude with applications of our higher quality depth maps for improved real-time surface reconstruction and performance capture.",
"title": ""
},
{
"docid": "24e2c8f8b3de74653532e297ce56cdf2",
"text": "We describe a method of incorporating taskspecific cost functions into standard conditional log-likelihood (CLL) training of linear structured prediction models. Recently introduced in the speech recognition community, we describe the method generally for structured models, highlight connections to CLL and max-margin learning for structured prediction (Taskar et al., 2003), and show that the method optimizes a bound on risk. The approach is simple, efficient, and easy to implement, requiring very little change to an existing CLL implementation. We present experimental results comparing with several commonly-used methods for training structured predictors for named-entity recognition.",
"title": ""
},
{
"docid": "aaf81989a3d1081baff7aea34b0b97f1",
"text": "Two-dimensional contingency or co-occurrence tables arise frequently in important applications such as text, web-log and market-basket data analysis. A basic problem in contingency table analysis is co-clustering: simultaneous clustering of the rows and columns. A novel theoretical formulation views the contingency table as an empirical joint probability distribution of two discrete random variables and poses the co-clustering problem as an optimization problem in information theory---the optimal co-clustering maximizes the mutual information between the clustered random variables subject to constraints on the number of row and column clusters. We present an innovative co-clustering algorithm that monotonically increases the preserved mutual information by intertwining both the row and column clusterings at all stages. Using the practical example of simultaneous word-document clustering, we demonstrate that our algorithm works well in practice, especially in the presence of sparsity and high-dimensionality.",
"title": ""
},
{
"docid": "5f63681c406856bc0664ee5a32d04b18",
"text": "In 2008, the emergence of the blockchain as the foundation of the first-ever decentralized cryptocurrency not only revolutionized the financial industry but proved a boon for peer-to-peer (P2P) information exchange in the most secure, efficient, and transparent manner. The blockchain is a public ledger that works like a log by keeping a record of all transactions in chronological order, secured by an appropriate consensus mechanism and providing an immutable record. Its exceptional characteristics include immutability, irreversibility, decentralization, persistence, and anonymity.",
"title": ""
},
{
"docid": "4bc59893068c7af78b3f7065b7b9d9bf",
"text": "Radiological images are increasingly being used in healthcare and medical research. There is, consequently, widespread interest in accurately relating information in the different images for diagnosis, treatment and basic science. This article reviews registration techniques used to solve this problem, and describes the wide variety of applications to which these techniques are applied. Applications of image registration include combining images of the same subject from different modalities, aligning temporal sequences of images to compensate for motion of the subject between scans, image guidance during interventions and aligning images from multiple subjects in cohort studies. Current registration algorithms can, in many cases, automatically register images that are related by a rigid body transformation (i.e. where tissue deformation can be ignored). There has also been substantial progress in non-rigid registration algorithms that can compensate for tissue deformation, or align images from different subjects. Nevertheless many registration problems remain unsolved, and this is likely to continue to be an active field of research in the future.",
"title": ""
},
{
"docid": "bf760ee2c4fe9c04f07638bd91d9675e",
"text": "Agile development methods are commonly used to iteratively develop the information systems and they can easily handle ever-changing business requirements. Scrum is one of the most popular agile software development frameworks. The popularity is caused by the simplified process framework and its focus on teamwork. The objective of Scrum is to deliver working software and demonstrate it to the customer faster and more frequent during the software development project. However the security requirements for the developing information systems have often a low priority. This requirements prioritization issue results in the situations where the solution meets all the business requirements but it is vulnerable to potential security threats. The major benefit of the Scrum framework is the iterative development approach and the opportunity to automate penetration tests. Therefore the security vulnerabilities can be discovered and solved more often which will positively contribute to the overall information system protection against potential hackers. In this research paper the authors propose how the agile software development framework Scrum can be enriched by considering the penetration tests and related security requirements during the software development lifecycle. Authors apply in this paper the knowledge and expertise from their previous work focused on development of the new information system penetration tests methodology PETA with focus on using COBIT 4.1 as the framework for management of these tests, and on previous work focused on tailoring the project management framework PRINCE2 with Scrum. The outcomes of this paper can be used primarily by the security managers, users, developers and auditors. The security managers may benefit from the iterative software development approach and penetration tests automation. The developers and users will better understand the importance of the penetration tests and they will learn how to effectively embed the tests into the agile development lifecycle. Last but not least the auditors may use the outcomes of this paper as recommendations for companies struggling with penetrations testing embedded in the agile software development process.",
"title": ""
},
{
"docid": "2c1604c1592b974c78568bbe2f71485c",
"text": "BACKGROUND\nA self-rated measure of health anxiety should be sensitive across the full range of intensity (from mild concern to frank hypochondriasis) and should differentiate people suffering from health anxiety from those who have actual physical illness but who are not excessively concerned about their health. It should also encompass the full range of clinical symptoms characteristic of clinical hypochondriasis. The development and validation of such a scale is described.\n\n\nMETHOD\nThree studies were conducted. First, the questionnaire was validated by comparing the responses of patients suffering from hypochondriasis with those suffering from hypochondriasis and panic disorder, panic disorder, social phobia and non-patient controls. Secondly, a state version of the questionnaire was administered to patients undergoing cognitive-behavioural treatment or wait-list in order to examine the measure's sensitivity to change. In the third study, a shortened version was developed and validated in similar types of sample, and in a range of samples of people seeking medical help for physical illness.\n\n\nRESULTS\nThe scale was found to be reliable and to have a high internal consistency. Hypochondriacal patients scored significantly higher than anxiety disorder patients, including both social phobic patients and panic disorder patients as well as normal controls. In the second study, a 'state' version of the scale was found to be sensitive to treatment effects, and to correlate very highly with a clinician rating based on an interview of present clinical state. A development and refinement of the scale (intended to reflect more fully the range of symptoms of and reactions to hypochondriasis) was found to be reliable and valid. A very short (14 item) version of the scale was found to have comparable properties to the full length scale.\n\n\nCONCLUSIONS\nThe HAI is a reliable and valid measure of health anxiety. It is likely to be useful as a brief screening instrument, as there is a short form which correlates highly with the longer version.",
"title": ""
},
{
"docid": "587b6685eaa7d2784b5adc656a25a34a",
"text": "We present a novel response generation system. The system assumes the hypothesis that participants in a conversation base their response not only on previous dialog utterances but also on their background knowledge. Our model is based on a Recurrent Neural Network (RNN) that is trained over concatenated sequences of comments, a Convolution Neural Network that is trained over Wikipedia sentences and a formulation that couples the two trained embeddings in a multimodal space. We create a dataset of aligned Wikipedia sentences and sequences of Reddit utterances, which we we use to train our model. Given a sequence of past utterances and a set of sentences that represent the background knowledge, our end-to-end learnable model is able to generate context-sensitive and knowledge-driven responses by leveraging the alignment of two different data sources. Our approach achieves up to 55% improvement in perplexity compared to purely sequential models based on RNNs that are trained only on sequences of utterances.",
"title": ""
},
{
"docid": "3d95e2db34f0b1f999833946a173de3d",
"text": "Due to the rapid development of mobile social networks, mobile big data play an important role in providing mobile social users with various mobile services. However, as mobile big data have inherent properties, current MSNs face a challenge to provide mobile social user with a satisfactory quality of experience. Therefore, in this article, we propose a novel framework to deliver mobile big data over content- centric mobile social networks. At first, the characteristics and challenges of mobile big data are studied. Then the content-centric network architecture to deliver mobile big data in MSNs is presented, where each datum consists of interest packets and data packets, respectively. Next, how to select the agent node to forward interest packets and the relay node to transmit data packets are given by defining priorities of interest packets and data packets. Finally, simulation results show the performance of our framework with varied parameters.",
"title": ""
},
{
"docid": "2960d6ab540cac17bb37fd4a4645afd0",
"text": "This paper proposes a new walking pattern generation method for humanoid robots. The proposed method consists of feedforward control and feedback control for walking pattern generation. The pole placement method as a feedback controller changes the poles of system in order to generate more stable and smoother walking pattern. The advanced pole-zero cancelation by series approximation(PZCSA) as a feedforward controller plays a role of reducing the inherent property of linear inverted pendulum model (LIPM), that is, non-minimum phase property due to an unstable zero of LIPM and tracking efficiently the desired zero moment point (ZMP). The efficiency of the proposed method is verified by three simulations such as arbitrary walking step length, arbitrary walking phase time and sudden change of walking path.",
"title": ""
},
{
"docid": "9d75520f138bcf7c529488f29d01efbb",
"text": "High utilization of cargo volume is an essential factor in the success of modern enterprises in the market. Although mathematical models have been presented for container loading problems in the literature, there is still a lack of studies that consider practical constraints. In this paper, a Mixed Integer Linear Programming is developed for the problem of packing a subset of rectangular boxes inside a container such that the total value of the packed boxes is maximized while some realistic constraints, such as vertical stability, are considered. The packing is orthogonal, and the boxes can be freely rotated into any of the six orientations. Moreover, a sequence triple-based solution methodology is proposed, simulated annealing is used as modeling technique, and the situation where some boxes are preplaced in the container is investigated. These preplaced boxes represent potential obstacles. Numerical experiments are conducted for containers with and without obstacles. The results show that the simulated annealing approach is successful and can handle large number of packing instances.",
"title": ""
},
{
"docid": "fd0318e6a6ea3dbf422235b7008c3006",
"text": "Multiple myeloma (MM), a cancer of terminally differentiated plasma cells, is the second most common hematological malignancy. The disease is characterized by the accumulation of abnormal plasma cells in the bone marrow that remains in close association with other cells in the marrow microenvironment. In addition to the genomic alterations that commonly occur in MM, the interaction with cells in the marrow microenvironment promotes signaling events within the myeloma cells that enhances survival of MM cells. The phosphoinositide 3-kinase (PI3K)/protein kinase B (AKT)/mammalian target of rapamycin (mTOR) is such a pathway that is aberrantly activated in a large proportion of MM patients through numerous mechanisms and can play a role in resistance to several existing therapies making this a central pathway in MM pathophysiology. Here, we review the pathway, its role in MM, promising preclinical results obtained thus far and the clinical promise that drugs targeting this pathway have in MM.",
"title": ""
},
{
"docid": "b9f86454a57c04ca5e3e9bdf95d9058c",
"text": "In view of significant increase in the research work on the brake disc in past few years, this article attempts to identify and highlight the various researches that are most relevant to analysis and optimization of brake disc. In the present article a keen review on the studies done on brake disc by previous researchers between (19982015) is presented. This literature review covers the important aspects of brake disc with the emphasis on material selection methods, thermal analysis, structural analysis, FEA and optimization of disc brake. This literature progressively discusses about the research methodology adopted and the outcome of the research work done by past researchers. This review is intended to give the readers a brief about the variety of the research work done on brake disc. Keywords--Brake disc, FEA, Optimization",
"title": ""
},
{
"docid": "865c0c0b4ab0e063e5caa3387c1a8741",
"text": "i",
"title": ""
},
{
"docid": "581ed4779ddde2d6f00da0975e71a73b",
"text": "Intention inference can be an essential step toward efficient humanrobot interaction. For this purpose, we propose the Intention-Driven Dynamics Model (IDDM) to probabilistically model the generative process of movements that are directed by the intention. The IDDM allows to infer the intention from observed movements using Bayes’ theorem. The IDDM simultaneously finds a latent state representation of noisy and highdimensional observations, and models the intention-driven dynamics in the latent states. As most robotics applications are subject to real-time constraints, we develop an efficient online algorithm that allows for real-time intention inference. Two human-robot interaction scenarios, i.e., target prediction for robot table tennis and action recognition for interactive humanoid robots, are used to evaluate the performance of our inference algorithm. In both intention inference tasks, the proposed algorithm achieves substantial improvements over support vector machines and Gaussian processes.",
"title": ""
}
] |
scidocsrr
|
4f0eaa4bd9611a83da598ee72817a19c
|
Face Expression Recognition and Analysis: The State of the Art
|
[
{
"docid": "0f3a795be7101977171a9232e4f98bf4",
"text": "Emotions are universally recognized from facial expressions--or so it has been claimed. To support that claim, research has been carried out in various modern cultures and in cultures relatively isolated from Western influence. A review of the methods used in that research raises questions of its ecological, convergent, and internal validity. Forced-choice response format, within-subject design, preselected photographs of posed facial expressions, and other features of method are each problematic. When they are altered, less supportive or nonsupportive results occur. When they are combined, these method factors may help to shape the results. Facial expressions and emotion labels are probably associated, but the association may vary with culture and is loose enough to be consistent with various alternative accounts, 8 of which are discussed.",
"title": ""
}
] |
[
{
"docid": "b0ea2ca170a8d0bcf4bd5dc8311c6201",
"text": "A cascade of sigma-delta modulator stages that employ a feedforward architecture to reduce the signal ranges required at the integrator inputs and outputs has been used to implement a broadband, high-resolution oversampling CMOS analog-to-digital converter capable of operating from low-supply voltages. An experimental prototype of the proposed architecture has been integrated in a 0.25-/spl mu/m CMOS technology and operates from an analog supply of only 1.2 V. At a sampling rate of 40 MSamples/sec, it achieves a dynamic range of 96 dB for a 1.25-MHz signal bandwidth. The analog power dissipation is 44 mW.",
"title": ""
},
{
"docid": "72600262f8c977bcde54332f23ba9d92",
"text": "Migraine is a common multifactorial episodic brain disorder with strong genetic basis. Monogenic subtypes include rare familial hemiplegic migraine, cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy, familial advanced sleep-phase syndrome (FASPS), and retinal vasculopathy with cerebral leukodystrophy. Functional studies of diseasecausing mutations in cellular and/or transgenic models revealed enhanced (glutamatergic) neurotransmission and abnormal vascular function as key migraine mechanisms. Common forms of migraine (both with and without an aura), instead, are thought to have a polygenic makeup. Genome-wide association studies have already identified over a dozen genes involved in neuronal and vascular mechanisms. Here, we review the current state of molecular genetic research in migraine, also with respect to functional and pathway analyses.Wewill also discuss how novel experimental approaches for the identification and functional characterization of migraine genes, such as next-generation sequencing, induced pluripotent stem cell, and optogenetic technologies will further our understanding of the molecular pathways involved in migraine pathogenesis.",
"title": ""
},
{
"docid": "2f50d412c0ee47d66718cb734bc25e1b",
"text": "Nowadays, a big part of people rely on available content in social media in their decisions (e.g., reviews and feedback on a topic or product). The possibility that anybody can leave a review provides a golden opportunity for spammers to write spam reviews about products and services for different interests. Identifying these spammers and the spam content is a hot topic of research, and although a considerable number of studies have been done recently toward this end, but so far the methodologies put forth still barely detect spam reviews, and none of them show the importance of each extracted feature type. In this paper, we propose a novel framework, named NetSpam, which utilizes spam features for modeling review data sets as heterogeneous information networks to map spam detection procedure into a classification problem in such networks. Using the importance of spam features helps us to obtain better results in terms of different metrics experimented on real-world review data sets from Yelp and Amazon Web sites. The results show that NetSpam outperforms the existing methods and among four categories of features, including review-behavioral, user-behavioral, review-linguistic, and user-linguistic, the first type of features performs better than the other categories.",
"title": ""
},
{
"docid": "142b1f178ade5b7ff554eae9cad27f69",
"text": "It is often desirable to be able to recognize when inputs to a recognition function learned in a supervised manner correspond to classes unseen at training time. With this ability, new class labels could be assigned to these inputs by a human operator, allowing them to be incorporated into the recognition function—ideally under an efficient incremental update mechanism. While good algorithms that assume inputs from a fixed set of classes exist, e.g. , artificial neural networks and kernel machines, it is not immediately obvious how to extend them to perform incremental learning in the presence of unknown query classes. Existing algorithms take little to no distributional information into account when learning recognition functions and lack a strong theoretical foundation. We address this gap by formulating a novel, theoretically sound classifier—the Extreme Value Machine (EVM). The EVM has a well-grounded interpretation derived from statistical Extreme Value Theory (EVT), and is the first classifier to be able to perform nonlinear kernel-free variable bandwidth incremental learning. Compared to other classifiers in the same deep network derived feature space, the EVM is accurate and efficient on an established benchmark partition of the ImageNet dataset.",
"title": ""
},
{
"docid": "59af1eb49108e672a35f7c242c5b4683",
"text": "“The value concept, more than any other, should occupy a central position . . . able to unify the apparently diverse interests of all the sciences concerned with human behavior.” These words, proclaiming the centrality of the value concept, were written by a psychologist (Rokeach, 1973, p. 3), but similar stands have been taken by sociologists (e.g., Williams, 1968) and anthropologists (e.g., Kluckhohn, 1951). These theorists view values as the criteria people use to select and justify actions and to evaluate people (including the self) and events. We, too, adopt this view of values as criteria rather than as qualities inherent in objects. This article discusses work that is part of a larger project intended to explore the importance of values in a wide variety of contexts. The project addresses three broad questions about values. First, how are the value priorities of individuals affected by their social experiences? That is, how do the common experiences people have, because of their shared locations in the social structure (their education, age, gender, occupation, etc.), influence their value priorities? And, how do individuals’ unique experiences (trauma, relations with parents, immigration, etc.) affect their value priorities? Second, how do the value priorities held by individuals affect their behavioral orientations and choices? That is, how do value priorities influence ideologies, attitudes, and actions in the political, religious, environmental, and other domains?",
"title": ""
},
{
"docid": "37d77131c6100aceb4a4d49a5416546f",
"text": "Automated medical image analysis has a significant value in diagnosis and treatment of lesions. Brain tumors segmentation has a special importance and difficulty due to the difference in appearances and shapes of the different tumor regions in magnetic resonance images. Additionally the data sets are heterogeneous and usually limited in size in comparison with the computer vision problems. The recently proposed adversarial training has shown promising results in generative image modeling. In this paper we propose a novel end-to-end trainable architecture for brain tumor semantic segmentation through conditional adversarial training. We exploit conditional Generative Adversarial Network (cGAN) and train a semantic segmentation Convolution Neural Network (CNN) along with an adversarial network that discriminates segmentation maps coming from the ground truth or from the segmentation network for BraTS 2017 segmentation task[15,4,2,3]. We also propose an end-to-end trainable CNN for survival day prediction based on deep learning techniques for BraTS 2017 prediction task [15,4,2,3]. The experimental results demonstrate the superior ability of the proposed approach for both tasks. The proposed model achieves on validation data a DICE score, Sensitivity and Specificity respectively 0.68, 0.99 and 0.98 for the whole tumor, regarding online judgment system.",
"title": ""
},
{
"docid": "529929af902100d25e08fe00d17e8c1a",
"text": "Engagement is the holy grail of learning whether it is in a classroom setting or an online learning platform. Studies have shown that engagement of the student while learning can benefit students as well as the teacher if the engagement level of the student is known. It is difficult to keep track of the engagement of each student in a face-to-face learning happening in a large classroom. It is even more difficult in an online learning platform where, the user is accessing the material at different instances. Automatic analysis of the engagement of students can help to better understand the state of the student in a classroom setting as well as online learning platforms and is more scalable. In this paper we propose a framework that uses Temporal Convolutional Network (TCN) to understand the intensity of engagement of students attending video material from Massive Open Online Courses (MOOCs). The input to the TCN network is the statistical features computed on 10 second segments of the video from the gaze, head pose and action unit intensities available in OpenFace library. The ability of the TCN architecture to capture long term dependencies gives it the ability to outperform other sequential models like LSTMs. On the given test set in the EmotiW 2018 sub challenge-\"Engagement in the Wild\", the proposed approach with Dilated-TCN achieved an average mean square error of 0.079.",
"title": ""
},
{
"docid": "63b210cc5e1214c51b642e9a4a2a1fb0",
"text": "This paper proposes a simplified method to compute the systolic and diastolic blood pressures from measured oscillometric blood-pressure waveforms. Therefore, the oscillometric waveform is analyzed in the frequency domain, which reveals that the measured blood-pressure signals are heavily disturbed by nonlinear contributions. The proposed approach will linearize the measured oscillometric waveform in order to obtain a more accurate and transparent estimation of the systolic and diastolic pressure based on a robust preprocessing technique. This new approach will be compared with the Korotkoff method and a commercially available noninvasive blood-pressure meter. This allows verification if the linearized approach contains as much information as the Korotkoff method in order to calculate a correct systolic and diastolic blood pressure.",
"title": ""
},
{
"docid": "67c444b9538ccfe7a2decdd11523dcd5",
"text": "Attention-based learning for fine-grained image recognition remains a challenging task, where most of the existing methods treat each object part in isolation, while neglecting the correlations among them. In addition, the multi-stage or multi-scale mechanisms involved make the existing methods less efficient and hard to be trained end-to-end. In this paper, we propose a novel attention-based convolutional neural network (CNN) which regulates multiple object parts among different input images. Our method first learns multiple attention region features of each input image through the one-squeeze multi-excitation (OSME) module, and then apply the multi-attention multi-class constraint (MAMC) in a metric learning framework. For each anchor feature, the MAMC functions by pulling same-attention same-class features closer, while pushing different-attention or different-class features away. Our method can be easily trained end-to-end, and is highly efficient which requires only one training stage. Moreover, we introduce Dogs-in-the-Wild, a comprehensive dog species dataset that surpasses similar existing datasets by category coverage, data volume and annotation quality. Extensive experiments are conducted to show the substantial improvements of our method on four benchmark datasets.",
"title": ""
},
{
"docid": "0884651e01add782a7d58b40f6ba078f",
"text": "Several statistics have been published dealing with failure causes of high voltage rotating machines i n general and power generators in particular [1 4]. Some of the se statistics only specify the part of the machine which failed without giving any deeper insight in the failure mechanism. Other publications distinguish between the damage which caused the machine to fail and the root cause which effect ed the damage. The survey of 1199 hydrogenerators c ar ied out by the CIGRE study committee SC11, EG11.02 provides an ex mple of such an investigation [5]. It gives det ail d results of 69 incidents. 56% of the failed machines showed an insulation damage, other major types being mecha ni al, thermal and bearing damages (Figure 1a). Root causes which led to these damages are subdivided into 7 differen t groups (Figure 1b).",
"title": ""
},
{
"docid": "0f645a88c44f2dd54689fd21d1444c01",
"text": "In this paper, Induction Motors (IM) are widely used in the industrial application due to a high power/weight ratio, high reliability and low cost. A Space Vector PWM (SVPWM) is utilized for PWM controlling scheme. The performance of both the speed and torque is promoted by a modified PI controller and V/F scalar control. A scalar control is a simple method and it's operated to control the magnitude of the control quantities in constant speed application. V/F scalar control has been implemented and compared with the PI controller. The simulation results showed that Indirect Field oriented control (IFOC) induction motor drive employ decoupling of the stator current components which produces torque and flux. The complete mathematical model of the system is described and simulated in MATLAB/SIMULINK. The simulation results provides a smooth speed response and high performance under various dynamic operations.",
"title": ""
},
{
"docid": "448040bcefe4a67a2a8c4b2cf75e7ebc",
"text": "Visual analytics has been widely studied in the past decade. One key to make visual analytics practical for both research and industrial applications is the appropriate definition and implementation of the visual analytics pipeline which provides effective abstractions for designing and implementing visual analytics systems. In this paper we review the previous work on visual analytics pipelines and individual modules from multiple perspectives: data, visualization, model and knowledge. In each module we discuss various representations and descriptions of pipelines inside the module, and compare the commonalities and the differences among them.",
"title": ""
},
{
"docid": "b3931762afefddc11d1111c681c8eed0",
"text": "We present a conceptually new and flexible method for multi-class open set classification. Unlike previous methods where unknown classes are inferred with respect to the feature or decision distance to the known classes, our approach is able to provide explicit modelling and decision score for unknown classes. The proposed method, called Generative OpenMax (G-OpenMax), extends OpenMax by employing generative adversarial networks (GANs) for novel category image synthesis. We validate the proposed method on two datasets of handwritten digits and characters, resulting in superior results over previous deep learning based method OpenMax Moreover, G-OpenMax provides a way to visualize samples representing the unknown classes from open space. Our simple and effective approach could serve as a new direction to tackle the challenging multi-class open set classification problem.",
"title": ""
},
{
"docid": "8418c151e724d5e23662a9d70c050df1",
"text": "The issuing of pseudonyms is an established approach for protecting the privacy of users while limiting access and preventing sybil attacks. To prevent pseudonym deanonymization through continuous observation and correlation, frequent and unlinkable pseudonym changes must be enabled. Existing approaches for realizing sybil-resistant pseudonymization and pseudonym change (PPC) are either inherently dependent on trusted third parties (TTPs) or involve significant computation overhead at end-user devices. In this paper, we investigate a novel, TTP-independent approach towards sybil-resistant PPC. Our proposal is based on the use of cryptocurrency block chains as general-purpose, append-only bulletin boards. We present a general approach as well as BitNym, a specific design based on the unmodified Bitcoin network. We discuss and propose TTP-independent mechanisms for realizing sybil-free initial access control, pseudonym validation and pseudonym mixing. Evaluation results demonstrate the practical feasibility of our approach and show that anonymity sets encompassing nearly the complete user population are easily achievable.",
"title": ""
},
{
"docid": "8cd77a6da9be2323ca9fc045079cbd50",
"text": "This paper provides an in-depth view of Terahertz Band (0.1–10 THz) communication, which is envisioned as a key technology to satisfy the increasing demand for higher speed wireless communication. THz Band communication will alleviate the spectrum scarcity and capacity limitations of current wireless systems, and enable new applications both in classical networking domains as well as in novel nanoscale communication paradigms. In this paper, the device design and development challenges for THz Band are surveyed first. The limitations and possible solutions for high-speed transceiver architectures are highlighted. The challenges for the development of new ultra-broadband antennas and very large antenna arrays are explained. When the devices are finally developed, then they need to communicate in the THz band. There exist many novel communication challenges such as propagation modeling, capacity analysis, modulation schemes, and other physical and link layer solutions, in the THz band which can be seen as a new frontier in the communication research. These challenges are treated in depth in this paper explaining the existing plethora of work and what still needs to be tackled. © 2014 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "dd2752d7a63418d1163b63f1d7578745",
"text": "Metabolic epilepsy is a metabolic abnormality which is associated with an increased risk of epilepsy development in affected individuals. Commonly used antiepileptic drugs are typically ineffective against metabolic epilepsy as they do not address its root cause. Presently, there is no review available which summarizes all the treatment options for metabolic epilepsy. Thus, we systematically reviewed literature which reported on the treatment, therapy and management of metabolic epilepsy from four databases, namely PubMed, Springer, Scopus and ScienceDirect. After applying our inclusion and exclusion criteria as per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we reviewed a total of 43 articles. Based on the reviewed articles, we summarized the methods used for the treatment, therapy and management of metabolic epilepsy. These methods were tailored to address the root causes of the metabolic disturbances rather than targeting the epilepsy phenotype alone. Diet modification and dietary supplementation, alone or in combination with antiepileptic drugs, are used in tackling the different types of metabolic epilepsy. Identification, treatment, therapy and management of the underlying metabolic derangements can improve behavior, cognitive function and reduce seizure frequency and/or severity in patients.",
"title": ""
},
{
"docid": "b83fc3d06ff877a7851549bcd23aaed2",
"text": "Finding what is and what is not a salient object can be helpful in developing better features and models in salient object detection (SOD). In this paper, we investigate the images that are selected and discarded in constructing a new SOD dataset and find that many similar candidates, complex shape and low objectness are three main attributes of many non-salient objects. Moreover, objects may have diversified attributes that make them salient. As a result, we propose a novel salient object detector by ensembling linear exemplar regressors. We first select reliable foreground and background seeds using the boundary prior and then adopt locally linear embedding (LLE) to conduct manifold-preserving foregroundness propagation. In this manner, a foregroundness map can be generated to roughly pop-out salient objects and suppress non-salient ones with many similar candidates. Moreover, we extract the shape, foregroundness and attention descriptors to characterize the extracted object proposals, and a linear exemplar regressor is trained to encode how to detect salient proposals in a specific image. Finally, various linear exemplar regressors are ensembled to form a single detector that adapts to various scenarios. Extensive experimental results on 5 dataset and the new SOD dataset show that our approach outperforms 9 state-of-art methods.",
"title": ""
},
{
"docid": "b7789464ca4cfd39672187935d95e2fa",
"text": "MATLAB Toolbox functions and communication tools are developed, interfaced, and tested for the motion control of KUKA KR6-R900-SIXX.. This KUKA manipulator has a new controller version that uses KUKA.RobotSensorInterface s KUKA.RobotSensorInterface package to connect the KUKA controller with a remote PC via UDP/IP Ethernet connection. This toolbox includes many functions for initialization, networking, forward kinematics, inverse kinematics and homogeneous transformation.",
"title": ""
},
{
"docid": "1406e39d95505da3d7ab2b5c74c2e068",
"text": "Context: During requirements engineering, prioritization is performed to grade or rank requirements in their order of importance and subsequent implementation releases. It is a major step taken in making crucial decisions so as to increase the economic value of a system. Objective: The purpose of this study is to identify and analyze existing prioritization techniques in the context of the formulated research questions. Method: Search terms with relevant keywords were used to identify primary studies that relate requirements prioritization classified under journal articles, conference papers, workshops, symposiums, book chapters and IEEE bulletins. Results: 73 Primary studies were selected from the search processes. Out of these studies; 13 were journal articles, 35 were conference papers and 8 were workshop papers. Furthermore, contributions from symposiums as well as IEEE bulletins were 2 each while the total number of book chapters amounted to 13. Conclusion: Prioritization has been significantly discussed in the requirements engineering domain. However , it was generally discovered that, existing prioritization techniques suffer from a number of limitations which includes: lack of scalability, methods of dealing with rank updates during requirements evolution, coordination among stakeholders and requirements dependency issues. Also, the applicability of existing techniques in complex and real setting has not been reported yet.",
"title": ""
},
{
"docid": "d8f58ed573a9a719fde7b1817236cdeb",
"text": "In a remarkably short timeframe, developing apps for smartphones has gone from an arcane curiosity to an essential skill set. Employers are scrambling to find developers capable of transforming their ideas into apps. Educators interested in filling that void are likewise trying to keep up, and face difficult decisions in designing a meaningful course. There are a plethora of development platforms, but two stand out because of their popularity and divergent approaches - Apple's iOS, and Google's Android. In this paper, we will compare the two, and address the question: which should faculty teach?",
"title": ""
}
] |
scidocsrr
|
2167b09c3fbbca5c1df775e70ba45077
|
Anomaly Detection and Root Cause Analysis for LTE Radio Base Stations
|
[
{
"docid": "ca0d5a3f9571f288d244aee0b2c2f801",
"text": "This paper proposes, focusing on random forests, the increa singly used statistical method for classification and regre ssion problems introduced by Leo Breiman in 2001, to investigate two classi cal issues of variable selection. The first one is to find impor tant variables for interpretation and the second one is more rest rictive and try to design a good prediction model. The main co tribution is twofold: to provide some insights about the behavior of th e variable importance index based on random forests and to pr opose a strategy involving a ranking of explanatory variables usi ng the random forests score of importance and a stepwise asce nding variable introduction strategy.",
"title": ""
}
] |
[
{
"docid": "54eaba8cca6637bed13cc162edca3c4b",
"text": "Automatic and accurate lung field segmentation is an essential step for developing an automated computer-aided diagnosis system for chest radiographs. Although active shape model (ASM) has been useful in many medical imaging applications, lung field segmentation remains a challenge due to the superimposed anatomical structures. We propose an automatic lung field segmentation technique to address the inadequacy of ASM in lung field extraction. Experimental results using both normal and abnormal chest radiographs show that the proposed technique provides better performance and can achieve 3-6% improvement on accuracy, sensitivity and specificity compared to traditional ASM techniques.",
"title": ""
},
{
"docid": "ead343ffee692a8645420c58016c129d",
"text": "One of the most important applications in multiview imaging (MVI) is the development of advanced immersive viewing or visualization systems using, for instance, 3DTV. With the introduction of multiview TVs, it is expected that a new age of 3DTV systems will arrive in the near future. Image-based rendering (IBR) refers to a collection of techniques and representations that allow 3-D scenes and objects to be visualized in a realistic way without full 3-D model reconstruction. IBR uses images as the primary substrate. The potential for photorealistic visualization has tremendous appeal, and it has been receiving increasing attention over the years. Applications such as video games, virtual travel, and E-commerce stand to benefit from this technology. This article serves as a tutorial introduction and brief review of this important technology. First the classification, principles, and key research issues of IBR are discussed. Then, an object-based IBR system to illustrate the techniques involved and its potential application in view synthesis and processing are explained. Stereo matching, which is an important technique for depth estimation and view synthesis, is briefly explained and some of the top-ranked methods are highlighted. Finally, the challenging problem of interactive IBR is explained. Possible solutions and some state-of-the-art systems are also reviewed.",
"title": ""
},
{
"docid": "65dfecb5e0f4f658a19cd87fb94ff0ae",
"text": "Although deep learning has produced dazzling successes for applications of image, speech, and video processing in the past few years, most trainings are with suboptimal hyper-parameters, requiring unnecessarily long training times. Setting the hyper-parameters remains a black art that requires years of experience to acquire. This report proposes several efficient ways to set the hyper-parameters that significantly reduce training time and improves performance. Specifically, this report shows how to examine the training validation/test loss function for subtle clues of underfitting and overfitting and suggests guidelines for moving toward the optimal balance point. Then it discusses how to increase/decrease the learning rate/momentum to speed up training. Our experiments show that it is crucial to balance every manner of regularization for each dataset and architecture. Weight decay is used as a sample regularizer to show how its optimal value is tightly coupled with the learning rates and momentums.",
"title": ""
},
{
"docid": "c47f0c67147705e91ccf24250c2ec2de",
"text": "Here, we have strategically synthesized stable gold (AuNPsTyr, AuNPsTrp) and silver (AgNPsTyr) nanoparticles which are surface functionalized with either tyrosine or tryptophan residues and have examined their potential to inhibit amyloid aggregation of insulin. Inhibition of both spontaneous and seed-induced aggregation of insulin was observed in the presence of AuNPsTyr, AgNPsTyr, and AuNPsTrp nanoparticles. These nanoparticles also triggered the disassembly of insulin amyloid fibrils. Surface functionalization of amino acids appears to be important for the inhibition effect since isolated tryptophan and tyrosine molecules did not prevent insulin aggregation. Bioinformatics analysis predicts involvement of tyrosine in H-bonding interactions mediated by its C=O, –NH2, and aromatic moiety. These results offer significant opportunities for developing nanoparticle-based therapeutics against diseases related to protein aggregation.",
"title": ""
},
{
"docid": "c052f693b65a0f3189fc1e9f4df11162",
"text": "In this paper we present ElastiFace, a simple and versatile method for establishing correspondence between textured face models, either for the construction of a blend-shape facial rig or for the exploration of new characters by morphing between a set of input models. While there exists a wide variety of approaches for inter-surface mapping and mesh morphing, most techniques are not suitable for our application: They either require the insertion of additional vertices, are limited to topological planes or spheres, are restricted to near-isometric input meshes, and/or are algorithmically and computationally involved. In contrast, our method extends linear non-rigid registration techniques to allow for strongly varying input geometries. It is geometrically intuitive, simple to implement, computationally efficient, and robustly handles highly non-isometric input models. In order to match the requirements of other applications, such as recent perception studies, we further extend our geometric matching to the matching of input textures and morphing of geometries and rendering styles.",
"title": ""
},
{
"docid": "5b463701f83f7e6651260c8f55738146",
"text": "Heart disease diagnosis is a complex task which requires much experience and knowledge. Traditional way of predicting Heart disease is doctor’s examination or number of medical tests such as ECG, Stress Test, and Heart MRI etc. Nowadays, Health care industry contains huge amount of heath care data, which contains hidden information. This hidden information is useful for making effective decisions. Computer based information along with advanced Data mining techniques are used for appropriate results. Neural network is widely used tool for predicting Heart disease diagnosis. In this research paper, a Heart Disease Prediction system (HDPS) is developed using Neural network. The HDPS system predicts the likelihood of patient getting a Heart disease. For prediction, the system uses sex, blood pressure, cholesterol like 13 medical parameters. Here two more parameters are added i.e. obesity and smoking for better accuracy. From the results, it has been seen that neural network predict heart disease with nearly 100% accuracy.",
"title": ""
},
{
"docid": "395f97b609acb40a8922eb4a6d398c0a",
"text": "Ambient obscurance (AO) produces perceptually important illumination effects such as darkened corners, cracks, and wrinkles; proximity darkening; and contact shadows. We present the AO algorithm from the Alchemy engine used at Vicarious Visions in commercial games. It is based on a new derivation of screen-space obscurance for robustness, and the insight that a falloff function can cancel terms in a visibility integral to favor efficient operations. Alchemy creates contact shadows that conform to surfaces, captures obscurance from geometry of varying scale, and provides four intuitive appearance parameters: world-space radius and bias, and aesthetic intensity and contrast.\n The algorithm estimates obscurance at a pixel from sample points read from depth and normal buffers. It processes dynamic scenes at HD 720p resolution in about 4.5 ms on Xbox 360 and 3 ms on NVIDIA GeForce580.",
"title": ""
},
{
"docid": "bbfe7693d45e3343b30fad7f6c9279d8",
"text": "Vernier permanent magnet (VPM) machines can be utilized for direct drive applications by virtue of their high torque density and high efficiency. The purpose of this paper is to develop a general design guideline for split-slot low-speed VPM machines, generalize the operation principle, and illustrate the relationship among the numbers of the stator slots, coil poles, permanent magnet (PM) pole pairs, thereby laying a solid foundation for the design of various kinds of VPM machines. Depending on the PM locations, three newly designed VPM machines are reported in this paper and they are referred to as 1) rotor-PM Vernier machine, 2) stator-tooth-PM Vernier machine, and 3) stator-yoke-PM Vernier machine. The back-electromotive force (back-EMF) waveforms, static torque, and air-gap field distribution are predicted using time-stepping finite element method (TS-FEM). The performances of the proposed VPM machines are compared and reported.",
"title": ""
},
{
"docid": "a7e6a2145b9ae7ca2801a3df01f42f5e",
"text": "The aim of this systematic review was to compare the clinical performance and failure modes of teeth restored with intra-radicular retainers. A search was performed on PubMed/Medline, Central and ClinicalTrials databases for randomized clinical trials comparing clinical behavior and failures of at least two types of retainers. From 341 detected papers, 16 were selected for full-text analysis, of which 9 met the eligibility criteria. A manual search added 2 more studies, totalizing 11 studies that were included in this review. Evaluated retainers were fiber (prefabricated and customized) and metal (prefabricated and cast) posts, and follow-up ranged from 6 months to 10 years. Most studies showed good clinical behavior for evaluated intra-radicular retainers. Reported survival rates varied from 71 to 100% for fiber posts and 50 to 97.1% for metal posts. Studies found no difference in the survival among different metal posts and most studies found no difference between fiber and metal posts. Two studies also showed that remaining dentine height, number of walls and ferrule increased the longevity of the restored teeth. Failures of fiber posts were mainly due to post loss of retention, while metal post failures were mostly related to root fracture, post fracture and crown and/or post loss of retention. In conclusion, metal and fiber posts present similar clinical behavior at short to medium term follow-up. Remaining dental structure and ferrule increase the survival of restored pulpless teeth. Studies with longer follow-up are needed.",
"title": ""
},
{
"docid": "89dea4ec4fd32a4a61be184d97ae5ba6",
"text": "In this paper, we propose Generative Adversarial Network (GAN) architectures that use Capsule Networks for image-synthesis. Based on the principal of positionalequivariance of features, Capsule Network’s ability to encode spatial relationships between the features of the image helps it become a more powerful critic in comparison to Convolutional Neural Networks (CNNs) used in current architectures for image synthesis. Our proposed GAN architectures learn the data manifold much faster and therefore, synthesize visually accurate images in significantly lesser number of training samples and training epochs in comparison to GANs and its variants that use CNNs. Apart from analyzing the quantitative results corresponding the images generated by different architectures, we also explore the reasons for the lower coverage and diversity explored by the GAN architectures that use CNN critics.",
"title": ""
},
{
"docid": "331c9dfa628f2bd045b6e0ad643a4d33",
"text": "What is most evident in the recent debate concerning new wetland regulations drafted by the U.S. Army Corps of Engineers is that small, isolated wetlands will likely continue to be lost. The critical biological question is whether small wetlands are expendable, and the fundamental issue is the lack of biologically relevant data on the value of wetlands, especially so-called “isolated” wetlands of small size. We used data from a geographic information system for natural-depression wetlands on the southeastern Atlantic coastal plain (U.S.A.) to examine the frequency distribution of wetland sizes and their nearest-wetland distances. Our results indicate that the majority of natural wetlands are small and that these small wetlands are rich in amphibian species and serve as an important source of juvenile recruits. Analyses simulating the loss of small wetlands indicate a large increase in the nearest-wetland distance that could impede “rescue” effects at the metapopulation level. We argue that small wetlands are extremely valuable for maintaining biodiversity, that the loss of small wetlands will cause a direct reduction in the connectance among remaining species populations, and that both existing and recently proposed legislation are inadequate for maintaining the biodiversity of wetland flora and fauna. Small wetlands are not expendable if our goal is to maintain present levels of species biodiversity. At the very least, based on these data, regulations should protect wetlands as small as 0.2 ha until additional data are available to compare diversity directly across a range of wetland sizes. Furthermore, we strongly advocate that wetland legislation focus not only on size but also on local and regional wetland distribution in order to protect ecological connectance and the source-sink dynamics of species populations. Son los Humedales Pequeños Prescindibles? Resumen: Algo muy evidente en el reciente debate sobre las nuevas regulaciones de humedales elaboradas por el cuerpo de ingenieros de la armada de los Estados Unidos es que los humedales aislados pequeños seguramente se continuarán perdiendo. La pregunta biológica crítica es si los humedales pequeños son prescindibles y e asunto fundamental es la falta de datos biológicos relevantes sobre el valor de los humedales, especialmente los llamados humedales “aislados” de tamaño pequeño. Utilizamos datos de GIS para humedales de depresiones naturales en la planicie del sureste de la costa Atlántica (U.S.A.) para examinar la distribución de frecuencias de los tamaños de humedales y las distancias a los humedales mas cercanos. Nuestros resultados indican que la mayoría de los humedales naturales son pequeños y que estos humedales pequeños son ricos en especies de anfibios y sirven como una fuente importante de reclutas juveniles. Análisis simulando la pérdida de humedales pequeños indican un gran incremento en la distancia al humedal mas cercano lo cual impediría efectos de “rescate” a nivel de metapoblación. Argumentamos que los humedales pequeños son extremadamente valiosos para el mantenimiento de la biodiversidad, que la pérdida de humedales pequeños causará una reducción directa en la conexión entre poblaciones de especies remanentes y que tanto la legislación propuesta como la existente son inadecuadas para mantener la biodiversidad de la flora y fauna de los humedales. Si nuestra meta es mantener los niveles actuales de biodiversidad de especies, los humedales pequeños no son prescindibles. 
En base en estos datos, las regulaciones deberían por lo menos proteger humedales tan pequeños como 0.2 ha hasta que se tengan a la mano datos adicionales para comparar directamente la diversidad a lo largo de un rango de humedales de diferentes tamaños. Mas aún, abogamos fuertemente por que la regulación de los pantanos se enfoque no solo en el tamaño, sino también en la distribución local y regional de los humedales para poder proteger la conexión ecológica y las dinámicas fuente y sumidero de poblaciones de especies.",
"title": ""
},
{
"docid": "8d94e0480a96e19a9597d821182bb713",
"text": "Components of wind turbines are subjected to asymmetric loads caused by variable wind conditions. Carbon brushes are critical components of the wind turbine generator. Adequately maintaining and detecting abnormalities in the carbon brushes early is essential for proper turbine performance. In this paper, data-mining algorithms are applied for early prediction of carbon brush faults. Predicting generator brush faults early enables timely maintenance or replacement of brushes. The results discussed in this paper are based on analyzing generator brush faults that occurred on 27 wind turbines. The datasets used to analyze faults were collected from the supervisory control and data acquisition (SCADA) systems installed at the wind turbines. Twenty-four data-mining models are constructed to predict faults up to 12 h before the actual fault occurs. To increase the prediction accuracy of the models discussed, a data balancing approach is used. Four data-mining algorithms were studied to evaluate the quality of the models for predicting generator brush faults. Among the selected data-mining algorithms, the boosting tree algorithm provided the best prediction results. Research limitations attributed to the available datasets are discussed. [DOI: 10.1115/1.4005624]",
"title": ""
},
{
"docid": "17e280502d20361d920fa0e00aa6f98a",
"text": "In recent years, having the advantages of being small, low in cost and high in efficiency, Half bridge (HB) LLC resonant converter for power density and high efficiency is increasingly required in the battery charge application. The HB LLC resonant converters have been used for reducing the current and voltage stress and switching losses of the components. However, it is not suited for wide range of the voltage and output voltage due to the uneven voltage and current component's stresses. The HB LLC resonant for battery charge of on board is presented in this paper. The theoretical results are verified through an experimental prototype for battery charger on board.",
"title": ""
},
{
"docid": "11644dafde30ee5608167c04cb1f511c",
"text": "Dynamic Adaptive Streaming over HTTP (DASH) enables the video player to adapt the bitrate of the video while streaming to ensure playback without interruptions even with varying throughput. A DASH server hosts multiple representations of the same video, each of which is broken down into small segments of fixed playback duration. The video bitrate adaptation is purely driven by the player at the endhost. Typically, the player employs an Adaptive Bitrate (ABR) algorithm, that determines the most appropriate representation for the next segment to be downloaded, based on the current network conditions and user preferences. The aim of an ABR algorithm is to dynamically manage the Quality of Experience (QoE) of the user during the playback. ABR algorithms manage the QoE by maximizing the bitrate while at the same time trying to minimize the other QoE metrics: playback start time, duration and number of buffering events, and the number of bitrate switching events. Typically, the ABR algorithms manage the QoE by using the measured network throughput and buffer occupancy to adapt the playback bitrate. However, due to the video encoding schemes employed, the sizes of the individual segments may vary significantly. For low bandwidth networks, fluctuation in the segment sizes results in inaccurate estimation the expected segment fetch times, thereby resulting in inaccurate estimation of the optimum bitrate. In this paper we demonstrate how the Segment-Aware Rate Adaptation (SARA) algorithm, that considers the measured throughput, buffer occupancy, and the variation in segment sizes helps in better management of the users' QoE in a DASH system. By comparing with a typical throughput-based and buffer-based adaptation algorithm under varying network conditions, we demonstrate that SARA manages the QoE better, especially in a low bandwidth network. We also developed AStream, an open-source Python-based emulated DASH-video player that was used to evaluate three different ABR algorithms and measure the QoE metrics with each of them.",
"title": ""
},
{
"docid": "6f166a5ba1916c5836deb379481889cd",
"text": "Microbial activities drive the global nitrogen cycle, and in the past few years, our understanding of nitrogen cycling processes and the micro-organisms that mediate them has changed dramatically. During this time, the processes of anaerobic ammonium oxidation (anammox), and ammonia oxidation within the domain Archaea, have been recognized as two new links in the global nitrogen cycle. All available evidence indicates that these processes and organisms are critically important in the environment, and particularly in the ocean. Here we review what is currently known about the microbial ecology of anaerobic and archaeal ammonia oxidation, highlight relevant unknowns and discuss the implications of these discoveries for the global nitrogen and carbon cycles.",
"title": ""
},
{
"docid": "890da17049756c2da578d31fd3f06f90",
"text": "A novel and compact planar multiband multiple-input-multiple-output (MIMO) antenna is presented. The proposed antenna is composed of two symmetrical radiating elements connected by neutralizing line to cancel the reactive coupling. The radiating element is designed for different frequencies operating in GSM 900 MHz, DCS 1800 MHz, LTE-E 2300 MHz, and LTE-D 2600 MHz, which consists of a folded monopole and a beveled rectangular metal patch. The presented antenna is fed by using 50-Ω coplanar waveguide (CPW) transmission lines. Four slits are etched into the ground plane for reducing the mutual coupling. The measured results show that the proposed antenna has good impedance matching, isolation, peak gain, and radiation patterns. The radiation efficiency and diversity gain (DG) in the servicing frequencies are pretty well. In the Ericsson indoor experiment, three kinds of antenna feed systems are discussed. The proposed antenna shows good performance in Long Term Evolution (LTE) reference signal receiving power (RSRP), download speed, and upload speed.",
"title": ""
},
{
"docid": "010fd9fcd9afb973a1930fbb861654c9",
"text": "We show that the Winternitz one-time signature scheme is existentially unforgeable under adaptive chosen message attacks when instantiated with a family of pseudorandom functions. Our result halves the signature size at the same security level, compared to previous results, which require a collision resistant hash function. We also consider security in the strong sense and show that the Winternitz one-time signature scheme is strongly unforgeable assuming additional properties of the pseudorandom function family. In this context we formally define several key-based security notions for function families and investigate their relation to pseudorandomness. All our reductions are exact and in the standard model and can directly be used to estimate the output length of the hash function required to meet a certain security level.",
"title": ""
},
{
"docid": "f5e8bb1c87513262f008c9c441fd44c6",
"text": "Recent work shows that offloading a mobile application from mobile devices to cloud servers can significantly reduce the energy consumption of mobile devices, thus extending the lifetime of mobile devices. However, previous work only considers the energy saving of mobile devices while ignoring the execution delay of mobile applications. To reduce the energy consumption of mobile devices, one may offload as many mobile applications as possible. However, offloading to cloud servers may incur a large execution delay because of the waiting time at the servers or the communication delay from the mobile devices to the servers. Thus, to balance the tradeoff between energy consumption and execution delay of mobile applications, it is necessary to determine whether the mobile application should be offloaded to the cloud server or run locally at the mobile devices. In this paper, we first formulate a joint optimization problem, which minimizes both the energy consumption at the mobile devices and the execution delay of mobile applications. We prove that the proposed problem is NP-hard. For a special case with unlimited residual energy at the mobile device and the same amount of resources required by each mobile application, we present a polynomial-time optimal solution. We also propose an efficient heuristic algorithm to solve the general case of the problem. Finally, simulation results demonstrate the effectiveness of the proposed scheme.",
"title": ""
},
{
"docid": "41ebdf724580830ce2c106ec0415912f",
"text": "Standard Multi-Armed Bandit (MAB) problems assume that the arms are independent. However, in many application scenarios, the information obtained by playing an arm provides information about the remainder of the arms. Hence, in such applications, this informativeness can and should be exploited to enable faster convergence to the optimal solution. In this paper, formalize a new class of multi-armed bandit methods, Global Multi-armed Bandit (GMAB), in which arms are globally informative through a global parameter, i.e., choosing an arm reveals information about all the arms. We propose a greedy policy for the GMAB which always selects the arm with the highest estimated expected reward, and prove that it achieves bounded parameter-dependent regret. Hence, this policy selects suboptimal arms only finitely many times, and after a finite number of initial time steps, the optimal arm is selected in all of the remaining time steps with probability one. In addition, we also study how the informativeness of the arms about each other’s rewards affects the speed of learning. Specifically, we prove that the parameter-free (worst-case) regret is sublinear in time, and decreases with the informativeness of the arms. We also prove a sublinear in time Bayesian risk bound for the GMAB which reduces to the well-known Bayesian risk bound for linearly parameterized bandits when the arms are fully informative. GMABs have applications ranging from drug dosage control to dynamic pricing. Appearing in Proceedings of the 18 International Conference on Artificial Intelligence and Statistics (AISTATS) 2015, San Diego, CA, USA. JMLR: W&CP volume 38. Copyright 2015 by the authors.",
"title": ""
},
{
"docid": "8f4c629147db41356763de733aea618b",
"text": "The application of simulation software in the planning process is state-of-the-art at many railway infrastructure managers. On the one hand software tools are used to point out the demand for new infrastructure and on the other hand they are used to optimize traffic flow in railway networks by support of the time table related processes. This paper deals with the first application of the software tool called OPENTRACK for simulation of railway operation on an existing line in Croatia from Zagreb to Karlovac. Aim of the work was to find out if the actual version of OPENTRACK able to consider the Croatian signalling system. Therefore the capability arises to use it also for other investigations in railway operation.",
"title": ""
}
] |
scidocsrr
|
05553aea2fa2764e3185f4646bd87d13
|
The crying shame of robot nannies : an ethical appraisal
|
[
{
"docid": "1e5073e73c371f1682d95bb3eedaf7f4",
"text": "Investigation into robot-assisted intervention for children with autism spectrum disorder (ASD) has gained momentum in recent years. Therapists involved in interventions must overcome the communication impairments generally exhibited by children with ASD by adeptly inferring the affective cues of the children to adjust the intervention accordingly. Similarly, a robot must also be able to understand the affective needs of these children-an ability that the current robot-assisted ASD intervention systems lack-to achieve effective interaction that addresses the role of affective states in human-robot interaction and intervention practice. In this paper, we present a physiology-based affect-inference mechanism for robot-assisted intervention where the robot can detect the affective states of a child with ASD as discerned by a therapist and adapt its behaviors accordingly. This paper is the first step toward developing ldquounderstandingrdquo robots for use in future ASD intervention. Experimental results with six children with ASD from a proof-of-concept experiment (i.e., a robot-based basketball game) are presented. The robot learned the individual liking level of each child with regard to the game configuration and selected appropriate behaviors to present the task at his/her preferred liking level. Results show that the robot automatically predicted individual liking level in real time with 81.1% accuracy. This is the first time, to our knowledge, that the affective states of children with ASD have been detected via a physiology-based affect recognition technique in real time. This is also the first time that the impact of affect-sensitive closed-loop interaction between a robot and a child with ASD has been demonstrated experimentally.",
"title": ""
}
] |
[
{
"docid": "ede0e47ee50f11096ce457adea6b4600",
"text": "Recent advances in hardware, software, and communication technologies are enabling the design and implementation of a whole range of different types of networks that are being deployed in various environments. One such network that has received a lot of interest in the last couple of S. Zeadally ( ) Network Systems Laboratory, Department of Computer Science and Information Technology, University of the District of Columbia, 4200, Connecticut Avenue, N.W., Washington, DC 20008, USA e-mail: [email protected] R. Hunt Department of Computer Science and Software Engineering, College of Engineering, University of Canterbury, Private Bag 4800, Christchurch, New Zealand e-mail: [email protected] Y.-S. Chen Department of Computer Science and Information Engineering, National Taipei University, 151, University Rd., San Shia, Taipei County, Taiwan e-mail: [email protected] Y.-S. Chen e-mail: [email protected] Y.-S. Chen e-mail: [email protected] A. Irwin School of Computer and Information Science, University of South Australia, Room F2-22a, Mawson Lakes, South Australia 5095, Australia e-mail: [email protected] A. Hassan School of Information Science, Computer and Electrical Engineering, Halmstad University, Kristian IV:s väg 3, 301 18 Halmstad, Sweden e-mail: [email protected] years is the Vehicular Ad-Hoc Network (VANET). VANET has become an active area of research, standardization, and development because it has tremendous potential to improve vehicle and road safety, traffic efficiency, and convenience as well as comfort to both drivers and passengers. Recent research efforts have placed a strong emphasis on novel VANET design architectures and implementations. A lot of VANET research work have focused on specific areas including routing, broadcasting, Quality of Service (QoS), and security. We survey some of the recent research results in these areas. We present a review of wireless access standards for VANETs, and describe some of the recent VANET trials and deployments in the US, Japan, and the European Union. In addition, we also briefly present some of the simulators currently available to VANET researchers for VANET simulations and we assess their benefits and limitations. Finally, we outline some of the VANET research challenges that still need to be addressed to enable the ubiquitous deployment and widespead adoption of scalable, reliable, robust, and secure VANET architectures, protocols, technologies, and services.",
"title": ""
},
{
"docid": "f78534a09317be5097963d068c6af2cd",
"text": "Example-based single image super-resolution (SISR) methods use external training datasets and have recently attracted a lot of interest. Self-example based SISR methods exploit redundant non-local self-similar patterns in natural images and because of that are more able to adapt to the image at hand to generate high quality super-resolved images. In this paper, we propose to combine the advantages of example-based SISR and self-example based SISR. A novel hierarchical random forests based super-resolution (SRHRF) method is proposed to learn statistical priors from external training images. Each layer of random forests reduce the estimation error due to variance by aggregating prediction models from multiple decision trees. The hierarchical structure further boosts the performance by pushing the estimation error due to bias towards zero. In order to further adaptively improve the super-resolved image, a self-example random forests (SERF) is learned from an image pyramid pair constructed from the down-sampled SRHRF generated result. Extensive numerical results show that the SRHRF method enhanced using SERF (SRHRF+) achieves the state-of-the-art performance on natural images and yields substantially superior performance for image with rich self-similar patterns.",
"title": ""
},
{
"docid": "71e275e9bb796bda3279820bfdd1dafb",
"text": "Alex M. Brooks Doctor of Philosophy The University of Sydney January 2007 Parametric POMDPs for Planning in Continuous State Spaces This thesis is concerned with planning and acting under uncertainty in partially-observable continuous domains. In particular, it focusses on the problem of mobile robot navigation given a known map. The dominant paradigm for robot localisation is to use Bayesian estimation to maintain a probability distribution over possible robot poses. In contrast, control algorithms often base their decisions on the assumption that a single state, such as the mode of this distribution, is correct. In scenarios involving significant uncertainty, this can lead to serious control errors. It is generally agreed that the reliability of navigation in uncertain environments would be greatly improved by the ability to consider the entire distribution when acting, rather than the single most likely state. The framework adopted in this thesis for modelling navigation problems mathematically is the Partially Observable Markov Decision Process (POMDP). An exact solution to a POMDP problem provides the optimal balance between reward-seeking behaviour and information-seeking behaviour, in the presence of sensor and actuation noise. Unfortunately, previous exact and approximate solution methods have had difficulty scaling to real applications. The contribution of this thesis is the formulation of an approach to planning in the space of continuous parameterised approximations to probability distributions. Theoretical and practical results are presented which show that, when compared with similar methods from the literature, this approach is capable of scaling to larger and more realistic problems. In order to apply the solution algorithm to real-world problems, a number of novel improvements are proposed. Specifically, Monte Carlo methods are employed to estimate distributions over future parameterised beliefs, improving planning accuracy without a loss of efficiency. Conditional independence assumptions are exploited to simplify the problem, reducing computational requirements. Scalability is further increased by focussing computation on likely beliefs, using metric indexing structures for efficient function approximation. Local online planning is incorporated to assist global offline planning, allowing the precision of the latter to be decreased without adversely affecting solution quality. Finally, the algorithm is implemented and demonstrated during real-time control of a mobile robot in a challenging navigation task. We argue that this task is substantially more challenging and realistic than previous problems to which POMDP solution methods have been applied. Results show that POMDP planning, which considers the evolution of the entire probability distribution over robot poses, produces significantly more robust behaviour when compared with a heuristic planner which considers only the most likely states and outcomes.",
"title": ""
},
{
"docid": "919ee3a62e28c1915d0be556a2723688",
"text": "Bayesian data analysis includes but is not limited to Bayesian inference (Gelman et al., 2003; Kerman, 2006a). Here, we take Bayesian inference to refer to posterior inference (typically, the simulation of random draws from the posterior distribution) given a fixed model and data. Bayesian data analysis takes Bayesian inference as a starting point but also includes fitting a model to different datasets, altering a model, performing inferential and predictive summaries (including prior or posterior predictive checks), and validation of the software used to fit the model. The most general programs currently available for Bayesian inference are WinBUGS (BUGS Project, 2004) and OpenBugs, which can be accessed from R using the packages R2WinBUGS (Sturtz et al., 2005) and BRugs. In addition, various R packages exist that directly fit particular Bayesian models (e.g. MCMCPack, Martin and Quinn (2005)). In this note, we describe our own entry in the “inference engine” sweepstakes but, perhaps more importantly, describe the ongoing development of some R packages that perform other aspects of Bayesian data analysis.",
"title": ""
},
{
"docid": "40ec8caea52ba75a6ad1e100fb08e89a",
"text": "Disambiguating concepts and entities in a context sensitive way is a fundamental problem in natural language processing. The comprehensiveness of Wikipedia has made the online encyclopedia an increasingly popular target for disambiguation. Disambiguation to Wikipedia is similar to a traditional Word Sense Disambiguation task, but distinct in that the Wikipedia link structure provides additional information about which disambiguations are compatible. In this work we analyze approaches that utilize this information to arrive at coherent sets of disambiguations for a given document (which we call “global” approaches), and compare them to more traditional (local) approaches. We show that previous approaches for global disambiguation can be improved, but even then the local disambiguation provides a baseline which is very hard to beat.",
"title": ""
},
{
"docid": "56f7f00d4711289dfc86785f5251c0d1",
"text": "LSM-tree has been widely used in data management production systems for write-intensive workloads. However, as read and write workloads co-exist under LSM-tree, data accesses can experience long latency and low throughput due to the interferences to buffer caching from the compaction, a major and frequent operation in LSM-tree. After a compaction, the existing data blocks are reorganized and written to other locations on disks. As a result, the related data blocks that have been loaded in the buffer cache are invalidated since their referencing addresses are changed, causing serious performance degradations. In order to re-enable high-speed buffer caching during intensive writes, we propose Log-Structured buffered-Merge tree (simplified as LSbM-tree) by adding a compaction buffer on disks, to minimize the cache invalidations on buffer cache caused by compactions. The compaction buffer efficiently and adaptively maintains the frequently visited data sets. In LSbM, strong locality objects can be effectively kept in the buffer cache with minimum or without harmful invalidations. With the help of a small on-disk compaction buffer, LSbM achieves a high query performance by enabling effective buffer caching, while retaining all the merits of LSM-tree for write-intensive data processing, and providing high bandwidth of disks for range queries. We have implemented LSbM based on LevelDB. We show that with a standard buffer cache and a hard disk, LSbM can achieve 2x performance improvement over LevelDB. We have also compared LSbM with other existing solutions to show its strong effectiveness.",
"title": ""
},
{
"docid": "e6d4d23df1e6d21bd988ca462526fe15",
"text": "Reinforcement learning, driven by reward, addresses tasks by optimizing policies for expected return. Need the supervision be so narrow? Reward is delayed and sparse for many tasks, so we argue that reward alone is a difficult and impoverished signal for end-to-end optimization. To augment reward, we consider a range of self-supervised tasks that incorporate states, actions, and successors to provide auxiliary losses. These losses offer ubiquitous and instantaneous supervision for representation learning even in the absence of reward. While current results show that learning from reward alone is feasible, pure reinforcement learning methods are constrained by computational and data efficiency issues that can be remedied by auxiliary losses. Self-supervised pre-training improves the data efficiency and policy returns of end-to-end reinforcement learning.",
"title": ""
},
{
"docid": "db87b17e0fd3310fd462c725a5462e6a",
"text": "We present Selections, a new cryptographic voting protocol that is end-to-end verifiable and suitable for Internet voting. After a one-time in-person registration, voters can cast ballots in an arbitrary number of elections. We say a system provides over-the-shoulder coercionresistance if a voter can undetectably avoid complying with an adversary that is present during the vote casting process. Our system is the first in the literature to offer this property without the voter having to anticipate coercion and precompute values. Instead, a voter can employ a panic password. We prove that Selections is coercion-resistant against a non-adaptive adversary. 1 Introductory Remarks From a security perspective, the use of electronic voting machines in elections around the world continues to be concerning. In principle, many security issues can be allayed with cryptography. While cryptographic voting has not seen wide deployment, refined systems like Prêt à Voter [11,28] and Scantegrity II [9] are representative of what is theoretically possible, and have even seen some use in governmental elections [7]. Today, a share of the skepticism over electronic elections is being apportioned to Internet voting.1 Many nation-states are considering, piloting or using Internet voting in elections. In addition to the challenges of verifiability and ballot secrecy present in any voting system, Internet voting adds two additional constraints: • Untrusted platforms: voters should be able to reliably cast secret ballots, even when their devices may leak information or do not function correctly. • Unsupervised voting: coercers or vote buyers should not be able to exert undue influence over voters despite the open environment of Internet voting. As with electronic voting, cryptography can assist in addressing these issues. The study of cryptographic Internet voting is not as mature. Most of the literature concentrates on only one of the two problems (see related work in Section 1.2). In this paper, we are concerned with the unsupervised voting problem. Informally, a system that solves it is said to be coercion-resistant. Full version available: http://eprint.iacr.org/2011/166 1 One noted cryptographer, Ronald Rivest, infamously opined that “best practices for Internet voting are like best practices for drunk driving” [25]. G. Danezis (Ed.): FC 2011, LNCS 7035, pp. 47–61, 2012. c © Springer-Verlag Berlin Heidelberg 2012 48 J. Clark and U. Hengartner",
"title": ""
},
{
"docid": "873bb52a5fe57335c30a0052b5bde4af",
"text": "Firth and Wagner (1997) questioned the dichotomies nonnative versus native speaker, learner versus user , and interlanguage versus target language , which reflect a bias toward innateness, cognition, and form in language acquisition. Research on lingua franca English (LFE) not only affirms this questioning, but reveals what multilingual communities have known all along: Language learning and use succeed through performance strategies, situational resources, and social negotiations in fluid communicative contexts. Proficiency is therefore practicebased, adaptive, and emergent. These findings compel us to theorize language acquisition as multimodal, multisensory, multilateral, and, therefore, multidimensional. The previously dominant constructs such as form, cognition, and the individual are not ignored; they get redefined as hybrid, fluid, and situated in a more socially embedded, ecologically sensitive, and interactionally open model.",
"title": ""
},
{
"docid": "0ee97a3afcc2471a05924a1171ac82cf",
"text": "A number of researchers around the world have built machines that recognize, express, model, communicate, and respond to emotional information, instances of ‘‘affective computing.’’ This article raises and responds to several criticisms of affective computing, articulating state-of-the art research challenges, especially with respect to affect in humancomputer interaction. r 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7cc3d7722f978545a6735ae4982ffc62",
"text": "A multiband printed monopole slot antenna promising for operating as an internal antenna in the thin-profile laptop computer for wireless wide area network (WWAN) operation is presented. The proposed antenna is formed by three monopole slots operated at their quarter-wavelength modes and arranged in a compact planar configuration. A step-shaped microstrip feedline is applied to excite the three monopole slots at their respective optimal feeding position, and two wide operating bands at about 900 and 1900 MHz are obtained for the antenna to cover all the five operating bands of GSM850/900/1800/1900/UMTS for WWAN operation. The antenna is easily printed on a small-size FR4 substrate and shows a length of 60 mm only and a height of 12 mm when mounted at the top edge of the system ground plane or supporting metal frame of the laptop display. Details of the proposed antenna are presented and studied.",
"title": ""
},
{
"docid": "a74b091706f4aeb384d2bf3d477da67d",
"text": "Amazon's Echo and its conversational agent Alexa open exciting opportunities for understanding how people perceive and interact with virtual agents. Drawing from user reviews of the Echo posted to Amazon.com, this case study explores the degree to which user reviews indicate personification of the device, sociability level of interactions, factors linked with personification, and influences on user satisfaction. Results indicate marked variance in how people refer to the device, with over half using the personified name Alexa but most referencing the device with object pronouns. Degree of device personification is linked with sociability of interactions: greater personification co-occurs with more social interactions with the Echo. Reviewers mentioning multiple member households are more likely to personify the device than reviewers mentioning living alone. Even after controlling for technical issues, personification predicts user satisfaction with the Echo.",
"title": ""
},
{
"docid": "a4da82c9c98203810cdfcf5c1a2c7f0a",
"text": "Software producing organizations are frequently judged by others for being ‘open’ or ‘closed’, where a more ‘closed’ organization is seen as being detrimental to its software ecosystem. These qualifications can harm the reputation of these companies, for they are deemed to promote vendor lock-in, use closed data formats, and are seen as using intellectual property laws to harm others. These judgements, however, are frequently based on speculation and the need arises for a method to establish openness of an organization, such that decisions are no longer based on prejudices, but on an objective assessment of the practices of a software producing organization. In this article the open software enterprise model is presented that roduct software vendors",
"title": ""
},
{
"docid": "f94ba438b2c5079069c25602c57ef705",
"text": "Search with local intent is becoming increasingly useful due to the popularity of the mobile device. The creation and maintenance of accurate listings of local businesses world wide is time consuming and expensive. In this paper, we propose an approach to automatically discover businesses that are visible on street level imagery. Precise business store-front detection enables accurate geo-location of bu sinesses, and further provides input for business categoriza tion, listing generation,etc. The large variety of business categories in different countries makes this a very challen ging problem. Moreover, manual annotation is prohibitive due to the scale of this problem. We propose the use of a MultiBox [4] based approach that takes input image pixels and directly outputs store front bounding boxes. This end-to-end learning approach instead preempts the need for hand modelling either the proposal generation phase or the post-processing phase, leveraging large labelled trai ning datasets. We demonstrate our approach outperforms the state of the art detection techniques with a large margin in terms of performance and run-time efficiency. In the evaluation, we show this approach achieves human accuracy in the low-recall settings. We also provide an end-to-end eval uation of business discovery in the real world.",
"title": ""
},
{
"docid": "c61e5bae4dbccf0381269980a22f726a",
"text": "—Web mining is the application of the data mining which is useful to extract the knowledge. Web mining has been explored to different techniques have been proposed for the variety of the application. Most research on Web mining has been from a 'data-centric' or information based point of view. Web usage mining, Web structure mining and Web content mining are the types of Web mining. Web usage mining is used to mining the data from the web server log files. Web Personalization is one of the areas of the Web usage mining that can be defined as delivery of content tailored to a particular user or as personalization requires implicitly or explicitly collecting visitor information and leveraging that knowledge in your content delivery framework to manipulate what information you present to your users and how you present it. In this paper, we have focused on various Web personalization categories and their research issues.",
"title": ""
},
{
"docid": "57d40d18977bc332ba16fce1c3cf5a66",
"text": "Deep neural networks are now rivaling human accuracy in several pattern recognition problems. Compared to traditional classifiers, where features are handcrafted, neural networks learn increasingly complex features directly from the data. Instead of handcrafting the features, it is now the network architecture that is manually engineered. The network architecture parameters such as the number of layers or the number of filters per layer and their interconnections are essential for good performance. Even though basic design guidelines exist, designing a neural network is an iterative trial-and-error process that takes days or even weeks to perform due to the large datasets used for training. In this paper, we present DeepEyes, a Progressive Visual Analytics system that supports the design of neural networks during training. We present novel visualizations, supporting the identification of layers that learned a stable set of patterns and, therefore, are of interest for a detailed analysis. The system facilitates the identification of problems, such as superfluous filters or layers, and information that is not being captured by the network. We demonstrate the effectiveness of our system through multiple use cases, showing how a trained network can be compressed, reshaped and adapted to different problems.",
"title": ""
},
{
"docid": "cb3f1598c2769b373a20b4dddd8b35ea",
"text": "An image hash should be (1) robust to allowable operations and (2) sensitive to illegal manipulations and distinct queries. Some applications also require the hash to be able to localize image tampering. This requires the hash to contain both robust content and alignment information to meet the above criterion. Fulfilling this is difficult because of two contradictory requirements. First, the hash should be small and second, to verify authenticity and then localize tampering, the amount of information in the hash about the original required would be large. Hence a tradeoff between these requirements needs to be found. This paper presents an image hashing method that addresses this concern, to not only detect but also localize tampering using a small signature (< 1kB). Illustrative experiments bring out the efficacy of the proposed method compared to existing methods.",
"title": ""
},
{
"docid": "4a7a5f8ceb87e3a551e2ea561af9a757",
"text": "A special type of representation for knots and for local knot manipulations is described and used in a software tool called TOK to implement a number of algorithms on knots. Two algorithms for knot simplification are described: simulated annealing applied to the knot representation, and a “divide-simplify-join” algorithm. Both of these algorithms make use of the compact knot representation and of the basic mechanism TOK provides for carrying out a predefined knot manipulation on the knot representation. The simplification algorithms implemented with the TOK system exploit local knot manipulations and have proven themselves effective for simplifying even very complicated knots in reasonable time. Introduction What is Knot Theory? Knots are very complicated mathematical objects that have intuitive, real-world counterparts. This makes them very interesting to study. A tangle in a (frictionless ) rope is a knot if when the ends of the rope are pulled in opposite directions, the tangle is not unraveled. Given a pile of rope with two ends sticking out, it is difficult, or even impossible to say by inspection whether or not the rope is truly knotted. An even more difficult problem is to decide if two piles of tangled rope are equivalent; meaning that one pile may be stretched and deformed to look like the other pile without tearing the rope. Figure 1 illustrates that equivalence is sometimes not obvious even for simple knots. Figure 1. (a) Two trefoil knots (b) Two trivial knots Knot theory studies an abstraction of the intuitive “knot on a rope” notion. The theory deals with questions such as proving knottedness, and classifying types of knottedness. In a more abstract sense we may say that knot theory studies the placement problem: “Given spaces X andY, classify howX may be placed inY” . Here a placement is usually an embedding, and classification often means up to some form of movement. In these terms classical knot theory studies embeddings of a circle in Euclidean three space. (Hence we consider the two ends of the rope tied together) There are two main schools in knot theory research. The first is called combinatorial or pictorial knot theory. Here the main idea is to associate with the mathematical object a drawing that represents the knot, and to study various combinatorical properties of this drawing. The second school considers the abstract notion of a knot as an embedding and studies the topology of the so called complementary space of the image of the embedding, by applying to this space the tools of Algebraic Topology. This paper dwells in the first realm pictorial knot theory. Following is a brief description of the basic theory that is needed to understand the TOK knot manipulation tool. For a more comprehensive overview see [1][2][3]. * Electrical Engineering Department Technion, 3200 Haifa, Israel [email protected] [email protected] ** Computer Science Department Technion, 3200 Haifa, Israel (on sabbatical at AT&T Bell Laboratories, Murray Hill, NJ 07974, USA)",
"title": ""
},
{
"docid": "5bf4bd07293719d980667ad46ccef2f2",
"text": "Proposed in this paper is an efficient algorithm to remove self-intersections from the raw offset triangular mesh. The resulting regular mesh can be used in shape inflation, tool path generation, and process planning to name a few. Objective is to find the valid region set of triangles defining the outer boundary of the offset volume from the raw offset triangular mesh. Starting with a seed triangle, the algorithm grows the valid region to neighboring triangles until it reaches triangles with self-intersection. Then the region growing process crosses over the self-intersection and moves to the adjacent valid triangle. Therefore the region growing traverses valid triangles and intersecting triangles adjacent to valid triangles only. This property makes the algorithm efficient and robust, since this method omits unnecessary traversing invalid region, which usually has very complex geometric shape and contains many meaningless self-intersections.",
"title": ""
}
] |
scidocsrr
|
f9418cde5ef0bd8f7c6918aeb383b980
|
Explainable Entity-based Recommendations with Knowledge Graphs
|
[
{
"docid": "f4279617b00651e62477e42357666fbe",
"text": "Many information-management tasks (including classification, retrieval, information extraction, and information integration) can be formalized as inference in an appropriate probabilistic first-order logic. However, most probabilistic first-order logics are not efficient enough for realistically-sized instances of these tasks. One key problem is that queries are typically answered by \"grounding\" the query---i.e., mapping it to a propositional representation, and then performing propositional inference---and with a large database of facts, groundings can be very large, making inference and learning computationally expensive. Here we present a first-order probabilistic language which is well-suited to approximate \"local\" grounding: in particular, every query $Q$ can be approximately grounded with a small graph. The language is an extension of stochastic logic programs where inference is performed by a variant of personalized PageRank. Experimentally, we show that the approach performs well on an entity resolution task, a classification task, and a joint inference task; that the cost of inference is independent of database size; and that speedup in learning is possible by multi-threading.",
"title": ""
}
] |
[
{
"docid": "d1c14bf02205c9a37761d56a6d88e01e",
"text": "BACKGROUND\nSchizophrenia is a high-cost, chronic, serious mental illness. There is a clear need to improve treatments and expand access to care for persons with schizophrenia, but simple, tailored interventions are missing.\n\n\nOBJECTIVE\nTo evaluate the impact of tailored mobile telephone text messages to encourage adherence to medication and to follow up with people with psychosis at 12 months.\n\n\nMETHODS\nMobile.Net is a pragmatic randomized trial with inpatient psychiatric wards allocated to two parallel arms. The trial will include 24 sites and 45 psychiatric hospital wards providing inpatient care in Finland. The participants will be adult patients aged 18-65 years, of either sex, with antipsychotic medication (Anatomical Therapeutic Chemical classification 2011) on discharge from a psychiatric hospital, who have a mobile phone, are able to use the Finnish language, and are able to give written informed consent to participate in the study. The intervention group will receive semiautomatic system (short message service [SMS]) messages after they have been discharged from the psychiatric hospital. Patients will choose the form, content, timing, and frequency of the SMS messages related to their medication, keeping appointments, and other daily care. SMS messages will continue to the end of the study period (12 months) or until participants no longer want to receive the messages. Patients will be encouraged to contact researchers if they feel that they need to adjust the message in any way. At all times, both groups will receive usual care at the discretion of their team (psychiatry and nursing). The primary outcomes are service use and healthy days by 12 months based on routine data (admission to a psychiatric hospital, time to next hospitalization, time in hospital during this year, and healthy days). The secondary outcomes are service use, coercive measures, medication, adverse events, satisfaction with care, the intervention, and the trial, social functioning, and economic factors. Data will be collected 12 months after baseline. The outcomes are based on the national health registers and patients' subjective evaluations. The primary analysis will be by intention-to-treat.\n\n\nTRIAL REGISTRATION\nInternational Standard Randomised Controlled Trial Number (ISRCTN): 27704027; http://www.controlled-trials.com/ISRCTN27704027 (Archived by WebCite at http://www.webcitation.org/69FkM4vcq).",
"title": ""
},
{
"docid": "580c53294eed52453db7534da5db4985",
"text": "Face recognition with variant pose, illumination and expression (PIE) is a challenging problem. In this paper, we propose an analysis-by-synthesis framework for face recognition with variant PIE. First, an efficient 2D-to-3D integrated face reconstruction approach is introduced to reconstruct a personalized 3D face model from a single frontal face image with neutral expression and normal illumination; Then, realistic virtual faces with different PIE are synthesized based on the personalized 3D face to characterize the face subspace; Finally, face recognition is conducted based on these representative virtual faces. Compared with other related work, this framework has following advantages: 1) only one single frontal face is required for face recognition, which avoids the burdensome enrollment work; 2) the synthesized face samples provide the capability to conduct recognition under difficult conditions like complex PIE; and 3) compared with other 3D reconstruction approaches, our proposed 2D-to-3D integrated face reconstruction approach is fully automatic and more efficient. The extensive experimental results show that the synthesized virtual faces significantly improve the accuracy of face recognition with changing PIE.",
"title": ""
},
{
"docid": "a627229c79eeac473f151a33e19b8747",
"text": "Face detection is one of the most studied topics in the computer vision community. Much of the progresses have been made by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and the real world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset1, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categories, and face bounding boxes. Faces in the proposed dataset are extremely challenging due to large variations in scale, pose and occlusion, as shown in Fig. 1. Furthermore, we show that WIDER FACE dataset is an effective training source for face detection. We benchmark several representative detection systems, providing an overview of state-of-the-art performance and propose a solution to deal with large scale variation. Finally, we discuss common failure cases that worth to be further investigated.",
"title": ""
},
{
"docid": "90a1fc43ee44634bce3658463503994e",
"text": "Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD are redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during this compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270× to 600× without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.",
"title": ""
},
{
"docid": "154102580cdcc7ea75faa5aec88d50f9",
"text": "A deliberate falsehood intentionally fabricated to appear as the truth, or often called as hoax (hocus to trick) has been increasing at an alarming rate. This situation may cause restlessness/anxiety and panic in society. Even though hoaxes have no effect on threats, however, new perceptions can be spread that they can affect both the social and political conditions. Imagery blown from hoaxes can bring negative effects and intervene state policies that may decrease the economy. An early detection on hoaxes helps the Government to reduce and even eliminate the spread. There are some system that filter hoaxes based on title and also from voting processes from searching processes in a search engine. This research develops Indonesian hoax filter based on text vector representation based on Term Frequency and Document Frequency as well as classification techniques. There are several classification techniques and for this research, Support Vector Machine and Stochastic Gradient Descent are chosen. Support Vector Machine divides a word vector using linear function and Stochastic Gradient Descent divides a word vector using nonlinear function. SVM and SGD are chosen because the characteristic of text classification includes multidimensional matrixes. Each word in news articles can be modeled as feature and with Linear SVC and SGD, the feature of word vector can be reduced into two dimensions and can be separated using linear and non-linear lines. The highest accuracy obtained from SGD classifier using modifled-huber is 86% over 100 hoax and 100 nonhoax websites which are randomly chosen outside dataset which are used in the training process.",
"title": ""
},
{
"docid": "4f557240199e1847747bb13745fc9717",
"text": "BACKGROUND\nFew studies compare instructor-modeled learning with modified debriefing to self-directed learning with facilitated debriefing during team-simulated clinical scenarios.\n\n\nOBJECTIVE\n: To determine whether self-directed learning with facilitated debriefing during team-simulated clinical scenarios (group A) has better outcomes compared with instructor-modeled learning with modified debriefing (group B).\n\n\nMETHODS\nThis study used a convenience sample of students. The four tools used assessed pre/post knowledge, satisfaction, technical, and team behaviors. Thirteen interdisciplinary student teams participated: seven in group A and six in group B. Student teams consisted of one nurse practitioner student, one registered nurse student, one social work student, and one respiratory therapy student. The Knowledge Assessment Tool was analyzed by student profession.\n\n\nRESULTS\nThere were no statistically significant differences within each student profession group on the Knowledge Assessment Tool. Group B was significantly more satisfied than group A (P = 0.01). Group B registered nurses and social worker students were significantly more satisfied than group A (30.0 +/- 0.50 vs. 26.2 +/- 3.0, P = 0.03 and 28.0 +/- 2.0 vs. 24.0 +/- 3.3, P = 0.04, respectively). Group B had significantly better scores than group A on 8 of the 11 components of the Technical Evaluation Tool; group B intervened more quickly. Group B had significantly higher scores on 8 of 10 components of the Behavioral Assessment Tool and overall team scores.\n\n\nCONCLUSION\nThe data suggest that instructor-modeling learning with modified debriefing is more effective than self-directed learning with facilitated debriefing during team-simulated clinical scenarios.",
"title": ""
},
{
"docid": "d043a086f143c713e4c4e74c38e3040c",
"text": "Background: The NASA Metrics Data Program data sets have been heavily used in software defect prediction experiments. Aim: To demonstrate and explain why these data sets require significant pre-processing in order to be suitable for defect prediction. Method: A meticulously documented data cleansing process involving all 13 of the original NASA data sets. Results: Post our novel data cleansing process; each of the data sets had between 6 to 90 percent less of their original number of recorded values. Conclusions: One: Researchers need to analyse the data that forms the basis of their findings in the context of how it will be used. Two: Defect prediction data sets could benefit from lower level code metrics in addition to those more commonly used, as these will help to distinguish modules, reducing the likelihood of repeated data points. Three: The bulk of defect prediction experiments based on the NASA Metrics Data Program data sets may have led to erroneous findings. This is mainly due to repeated data points potentially causing substantial amounts of training and testing data to be identical.",
"title": ""
},
{
"docid": "13ac8eddda312bd4ef3ba194c076a6ea",
"text": "With the Yahoo Flickr Creative Commons 100 Million (YFCC100m) dataset, a novel dataset was introduced to the computer vision and multimedia research community. To maximize the benefit for the research community and utilize its potential, this dataset has to be made accessible by tools allowing to search for target concepts within the dataset and mechanism to browse images and videos of the dataset. Following best practice from data collections, such as ImageNet and MS COCO, this paper presents means of accessibility for the YFCC100m dataset. This includes a global analysis of the dataset and an online browser to explore and investigate subsets of the dataset in real-time. Providing statistics of the queried images and videos will enable researchers to refine their query successively, such that the users desired subset of interest can be narrowed down quickly. The final set of image and video can be downloaded as URLs from the browser for further processing.",
"title": ""
},
{
"docid": "a11cb4801585804f08fa55ec40f13925",
"text": "It is well-known that conventional field effect transistors (FETs) require a change in the channel potential of at least 60 mV at 300 K to effect a change in the current by a factor of 10, and this minimum subthreshold slope S puts a fundamental lower limit on the operating voltage and hence the power dissipation in standard FET-based switches. Here, we suggest that by replacing the standard insulator with a ferroelectric insulator of the right thickness it should be possible to implement a step-up voltage transformer that will amplify the gate voltage thus leading to values of S lower than 60 mV/decade and enabling low voltage/low power operation. The voltage transformer action can be understood intuitively as the result of an effective negative capacitance provided by the ferroelectric capacitor that arises from an internal positive feedback that in principle could be obtained from other microscopic mechanisms as well. Unlike other proposals to reduce S, this involves no change in the basic physics of the FET and thus does not affect its current drive or impose other restrictions.",
"title": ""
},
{
"docid": "171f84938f8788e293d763fccc8b3c27",
"text": "Google ads, black names and white names, racial discrimination, and click advertising",
"title": ""
},
{
"docid": "4791b04d1cafd0b4a59bbfbec50ace38",
"text": "The current paper proposes a slack-based version of the Super SBM, which is an alternative superefficiency model for the SBM proposed by Tone. Our two-stage approach provides the same superefficiency score as that obtained by the Super SBM model when the evaluated DMU is efficient and yields the same efficiency score as that obtained by the SBM model when the evaluated DMU is inefficient. The projection identified by the Super SBM model may not be strongly Pareto efficient; however, the projection identified from our approach is strongly Pareto efficient. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "22d233c7f0916506d2fc23b3a8ef4633",
"text": "CD69 is a type II C-type lectin involved in lymphocyte migration and cytokine secretion. CD69 expression represents one of the earliest available indicators of leukocyte activation and its rapid induction occurs through transcriptional activation. In this study we examined the molecular mechanism underlying mouse CD69 gene transcription in vivo in T and B cells. Analysis of the 45-kb region upstream of the CD69 gene revealed evolutionary conservation at the promoter and at four noncoding sequences (CNS) that were called CNS1, CNS2, CNS3, and CNS4. These regions were found to be hypersensitive sites in DNase I digestion experiments, and chromatin immunoprecipitation assays showed specific epigenetic modifications. CNS2 and CNS4 displayed constitutive and inducible enhancer activity in transient transfection assays in T cells. Using a transgenic approach to test CNS function, we found that the CD69 promoter conferred developmentally regulated expression during positive selection of thymocytes but could not support regulated expression in mature lymphocytes. Inclusion of CNS1 and CNS2 caused suppression of CD69 expression, whereas further addition of CNS3 and CNS4 supported developmental-stage and lineage-specific regulation in T cells but not in B cells. We concluded CNS1-4 are important cis-regulatory elements that interact both positively and negatively with the CD69 promoter and that differentially contribute to CD69 expression in T and B cells.",
"title": ""
},
{
"docid": "e28b0ab1bedd60ba83b8a575431ad549",
"text": "The Decision Model and Notation (DMN) is a standard notation to specify decision logic in business applications. A central construct in DMN is a decision table. The rising use of DMN decision tables to capture and to automate everyday business decisions fuels the need to support analysis tasks on decision tables. This paper presents an opensource DMN editor to tackle three analysis tasks: detection of overlapping rules, detection of missing rules and simplification of decision tables via rule merging. The tool has been tested on large decision tables derived from a credit lending data-set.",
"title": ""
},
{
"docid": "ad9f3510ffaf7d0bdcf811a839401b83",
"text": "The stator permanent magnet (PM) machines have simple and robust rotor structure as well as high torque density. The hybrid excitation topology can realize flux regulation and wide constant power operating capability of the stator PM machines when used in dc power systems. This paper compares and analyzes the electromagnetic performance of different hybrid excitation stator PM machines according to different combination modes of PMs, excitation winding, and iron flux bridge. Then, the control strategies for voltage regulation of dc power systems are discussed based on different critical control variables including the excitation current, the armature current, and the electromagnetic torque. Furthermore, an improved direct torque control (DTC) strategy is investigated to improve system performance. A parallel hybrid excitation flux-switching generator employing the improved DTC which shows excellent dynamic and steady-state performance has been achieved experimentally.",
"title": ""
},
{
"docid": "f17b3a6c31daeee0ae0a8ebc7a14e16c",
"text": "In full-duplex (FD) radios, phase noise leads to random phase mismatch between the self-interference (SI) and the reconstructed cancellation signal, resulting in possible performance degradation during SI cancellation. To explicitly analyze its impacts on the digital SI cancellation, an orthogonal frequency division multiplexing (OFDM)-modulated FD radio is considered with phase noises at both the transmitter and receiver. The closed-form expressions for both the digital cancellation capability and its limit for the large interference-to-noise ratio (INR) case are derived in terms of the power of the common phase error, INR, desired signal-to-noise ratio (SNR), channel estimation error and transmission delay. Based on the obtained digital cancellation capability, the achievable rate region of a two-way FD OFDM system with phase noise is characterized. Then, with a limited SI cancellation capability, the maximum outer bound of the rate region is proved to exist for sufficiently large transmission power. Furthermore, a minimum transmission power is obtained to achieve $\\beta$ -portion of the cancellation capability limit and to ensure that the outer bound of the rate region is close to its maximum.",
"title": ""
},
{
"docid": "0bb733101c73757457a516e9499bd303",
"text": "Modulation is a key feature commonly used in wireless communication for data transmission and to minimize antenna design. QPSK (Quadrature Phase Shift Keying) is one type of digital modulation technique used to transfer the baseband data wirelessly in much efficient way compare to other modulation techniques. Conventional QPSK modulator operates by separation of baseband data into i and q phases and then add them to produce QPSK signal. The process of generating sine and cosine carrier wave to produce the i and q phases consume high power. For better efficiency in power consumption and area utilization, 2 new types of QPSK modulator proposed. The proposed method will eliminate the generation of 2 phases and will produce the QPSK output based on stored data in RAM. Verilog HDL used to implement the proposed QPSK modulators and it has been successfully simulated on Xilinx ISE 12.4 software platform. a comparision has been made with existing modulator and significant improvement can be seen in term of area and power consumption.",
"title": ""
},
{
"docid": "419499ced8902a00909c32db352ea7f5",
"text": "Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of control plane may not readily translate into that of data plane. To bridge this gap, we present VeriDP, which can monitor \"whether actual forwarding behaviors are complying with network configurations\". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with headers to the verification server before the packets leave the network. The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane fault including black holes and access violations, with a minimal impact on the data plane.",
"title": ""
},
{
"docid": "b4714cacd13600659e8a94c2b8271697",
"text": "AIM AND OBJECTIVE\nExamine the pharmaceutical qualities of cannabis including a historical overview of cannabis use. Discuss the use of cannabis as a clinical intervention for people experiencing palliative care, including those with life-threatening chronic illness such as multiple sclerosis and motor neurone disease [amyotrophic lateral sclerosis] in the UK.\n\n\nBACKGROUND\nThe non-medicinal use of cannabis has been well documented in the media. There is a growing scientific literature on the benefits of cannabis in symptom management in cancer care. Service users, nurses and carers need to be aware of the implications for care and treatment if cannabis is being used medicinally.\n\n\nDESIGN\nA comprehensive literature review.\n\n\nMETHOD\nLiterature searches were made of databases from 1996 using the term cannabis and the combination terms of cannabis and palliative care; symptom management; cancer; oncology; chronic illness; motor neurone disease/amyotrophic lateral sclerosis; and multiple sclerosis. Internet material provided for service users searching for information about the medicinal use of cannabis was also examined.\n\n\nRESULTS\nThe literature on the use of cannabis in health care repeatedly refers to changes for users that may be equated with improvement in quality of life as an outcome of its use. This has led to increased use of cannabis by these service users. However, the cannabis used is usually obtained illegally and can have consequences for those who choose to use it for its therapeutic value and for nurses who are providing care.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nQuestions and dilemmas are raised concerning the role of the nurse when caring and supporting a person making therapeutic use of cannabis.",
"title": ""
},
{
"docid": "46c21e8958816112c24c8539cab3b23b",
"text": "Widely used in turbomachinery, the fluid film journal bearing is critical to a machine’s overall reliability level. Their design complexity and application severity continue to increase making it challenging for the plant machinery engineer to evaluate their reliability. This tutorial provides practical knowledge on their basic operation and what physical effects should be included in modeling a bearing to help ensure its reliable operation in the field. All the important theoretical aspects of journal bearing modeling, such as film pressure, film and pad temperatures, thermal and mechanical deformations, and turbulent flow are reviewed. Through some examples, the tutorial explores how different effects influence key performance characteristics like minimum film thickness, Babbitt temperature as well as stiffness and damping coefficients. Due to their increasing popularity, the operation and analysis of advanced designs using directed lubrication principles, such as inlet grooves and starvation, are also examined with several examples including comparisons to manufacturers’ test data. 155 FUNDAMENTALS OF FLUID FILM JOURNAL BEARING OPERATION AND MODELING",
"title": ""
},
{
"docid": "2f7ba7501fcf379b643867c7d5a9d7bf",
"text": "The connections in many networks are not merely binary entities, either present or not, but have associated weights that record their strengths relative to one another. Recent studies of networks have, by and large, steered clear of such weighted networks, which are often perceived as being harder to analyze than their unweighted counterparts. Here we point out that weighted networks can in many cases be analyzed using a simple mapping from a weighted network to an unweighted multigraph, allowing us to apply standard techniques for unweighted graphs to weighted ones as well. We give a number of examples of the method, including an algorithm for detecting community structure in weighted networks and a simple proof of the maximum-flow-minimum-cut theorem.",
"title": ""
}
] |
scidocsrr
|
b4646066ae6b71d14754e70e7898bc5e
|
A High Accuracy Fuzzy Logic Based Map Matching Algorithm for Road Transport
|
[
{
"docid": "559637a4f8f5b99bb3210c5c7d03d2e0",
"text": "Third-generation personal navigation assistants (PNAs) (i.e., those that provide a map, the user's current location, and directions) must be able to reconcile the user's location with the underlying map. This process is known as map matching. Most existing research has focused on map matching when both the user's location and the map are known with a high degree of accuracy. However, there are many situations in which this is unlikely to be the case. Hence, this paper considers map matching algorithms that can be used to reconcile inaccurate locational data with an inaccurate map/network. Ó 2000 Published by Elsevier Science Ltd.",
"title": ""
}
] |
[
{
"docid": "2b40c6f6a9fc488524c23e11cd57a00b",
"text": "An overview of the basics of metaphorical thought and language from the perspective of Neurocognition, the integrated interdisciplinary study of how conceptual thought and language work in the brain. The paper outlines a theory of metaphor circuitry and discusses how everyday reason makes use of embodied metaphor circuitry.",
"title": ""
},
{
"docid": "56a35139eefd215fe83811281e4e2279",
"text": "Querying graph data is a fundamental problem that witnesses an increasing interest especially for massive graph databases which come as a promising alternative to relational databases for big data modeling. In this paper, we study the problem of subgraph isomorphism search which consists to enumerate the embedding of a query graph in a data graph. The most known solutions of this NPcomplete problem are backtracking-based and result in a high computational cost when we deal with massive graph databases. We address this problem and its challenges via graph compression with modular decomposition. In our approach, subgraph isomorphism search is performed on compressed graphs without decompressing them yielding substantial reduction of the search space and consequently a significant saving in processing time as well as in storage space for the graphs. We evaluated our algorithms on nine real-word datasets. The experimental results show that our approach is efficient and scalable. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "6886849300b597fdb179162744b40ee2",
"text": "This paper argues that the dominant study of the form and structure of games – their poetics – should be complemented by the analysis of their aesthetics (as understood by modern cultural theory): how gamers use their games, what aspects they enjoy and what kinds of pleasures they experience by playing them. The paper outlines a possible aesthetic theory of games based on different aspects of pleasure: the psychoanalytical, the social and the physical form of pleasure.",
"title": ""
},
{
"docid": "0865e75053efcc198c5855273de3f94c",
"text": "In this paper we present a low-cost and easy to fabricate 3-axis tactile sensor based on magnetic technology. The sensor consists in a small magnet immersed in a silicone body with an Hall-effect sensor placed below to detect changes in the magnetic field caused by displacements of the magnet, generated by an external force applied to the silicone body. The use of a 3-axis Hall-effect sensor allows to detect the three components of the force vector, and the proposed design assures high sensitivity, low hysteresis and good repeatability of the measurement: notably, the minimum sensed force is about 0.007N. All components are cheap and easy to retrieve and to assemble; the fabrication process is described in detail and it can be easily replicated by other researchers. Sensors with different geometries have been fabricated, calibrated and successfully integrated in the hand of the human-friendly robot Vizzy. In addition to the sensor characterization and validation, real world experiments of object manipulation are reported, showing proper detection of both normal and shear forces.",
"title": ""
},
{
"docid": "38c96356f5fd3daef5f1f15a32971b57",
"text": "Recommendation systems make suggestions about artifacts to a user. For instance, they may predict whether a user would be interested in seeing a particular movie. Social recomendation methods collect ratings of artifacts from many individuals and use nearest-neighbor techniques to make recommendations to a user concerning new artifacts. However, these methods do not use the significant amount of other information that is often available about the nature of each artifact -such as cast lists or movie reviews, for example. This paper presents an inductive learning approach to recommendation that is able to use both ratings information and other forms of information about each artifact in predicting user preferences. We show that our method outperforms an existing social-filtering method in the domain of movie recommendations on a dataset of more than 45,000 movie ratings collected from a community of over 250 users. Introduction Recommendations are a part of everyday life. We usually rely on some external knowledge to make informed decisions about a particular artifact or action, for instance when we are going to see a movie or going to see a doctor. This knowledge can be derived from social processes. At other times, our judgments may be based on available information about an artifact and our known preferences. There are many factors which may influence a person in making choices, and ideally one would like to model as many of these factors as possible in a recommendation system. There are some general approaches to this problem. In one approach, the user of the system provides ratings of some artifacts or items. The system makes informed guesses about other items the user may like based on ratings other users have provided. This is the framework for social-filtering methods (Hill, Stead, Rosenstein Furnas 1995; Shardanand & Maes 1995). In a second approach, the system accepts information describing the nature of an item, and based on a sample of the user’s preferences, learns to predict which items the user will like (Lang 1995; Pazzani, Muramatsu, & Billsus 1996). We will call this approach content-based filtering, as it does not rely on social information (in the form of other users’ ratings). Both social and content-based filtering can be cast as learning problems: the objective is to *Department of Computer Science, Rutgers University, Piscataway, NJ 08855 We would like to thank Susan Dumais for useful discussions during the early stages of this work. Copyright ~)1998, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. learn a function that can take a description of a user and an artifact and predict the user’s preferences concerning the artifact. Well-known recommendation systems like Recommender (Hill, Stead, Rosenstein & Furnas 1995) and Firefly (http: //www.firefly.net) (Shardanand & Maes 1995) are based on social-filtering principles. Recommender, the baseline system used in the work reported here, recommends as yet unseen movies to a user based on his prior ratings of movies and their similarity to the ratings of other users. Social-filtering systems perform well using only numeric assessments of worth, i.e., ratings. However, social-filtering methods leave open the question of what role content can play in the recommen-",
"title": ""
},
{
"docid": "a3dc6a178b7861959b992387366c2c78",
"text": "Linked data and semantic web technologies are gaining impact and importance in the Architecture, Engineering, Construction and Facility Management (AEC/FM) industry. Whereas we have seen a strong technological shift with the emergence of Building Information Modeling (BIM) tools, this second technological shift to the exchange and management of building data over the web might be even stronger than the first one. In order to make this a success, the AEC/FM industry will need strong and appropriate ontologies, as they will allow industry practitioners to structure their data in a commonly agreed format and exchange the data. Herein, we look at the ontologies that are emerging in the area of Building Automation and Control Systems (BACS). We propose a BACS ontology in strong alignment with existing ontologies and evaluate how it can be used for capturing automation and control systems of a building by modeling a use case.",
"title": ""
},
{
"docid": "09f033276a321fdb4635fe61de45f00d",
"text": "A 32-year-old woman, gravida 1, para 0, was referred for third-trimester sonography at 34 weeks’ gestation to evaluate fetal growth. Sonography revealed a female fetus with an echogenic, midline, nonvascular pelvic mass (Fig. 1, arrow) and no associated genitourinary abnormality. Differential diagnoses included an ovarian mass, distended rectum, hydrocolpos, vaginal atresia and urogenital sinus. Postnatal US revealed an echogenic, fluid-containing midline pelvic mass (Fig. 2, black arrow) in the setting of an imperforate hymen. The cervix is marked (double white arrows). The",
"title": ""
},
{
"docid": "83ed915556df1c00f6448a38fb3b7ec3",
"text": "Wandering liver or hepatoptosis is a rare entity in medical practice. It is also known as floating liver and hepatocolonic vagrancy. It describes the unusual finding of, usually through radiology, the alternate appearance of the liver on the right and left side, respectively. . The first documented case of wandering liver was presented by Heister in 1754 Two centuries later In 1958, Grayson recognized and described the association of wandering liver and tachycardia. In his paper, Grayson details the classical description of wandering liver documented by French in his index of differential diagnosis. In 2010 Jan F. Svensson et al described the first report of a wandering liver in a neonate, reviewed and a discussed the possible treatment strategies. When only displaced, it may wrongly be thought to be enlarged liver",
"title": ""
},
{
"docid": "826ad745258d73a9dc75c4d0938ae3bc",
"text": "Classification problems with a large number of classes inevitably involve overlapping or similar classes. In such cases it seems reasonable to allow the learning algorithm to make mistakes on similar classes, as long as the true class is still among the top-k (say) predictions. Likewise, in applications such as search engine or ad display, we are allowed to present k predictions at a time and the customer would be satisfied as long as her interested prediction is included. Inspired by the recent work of [15], we propose a very generic, robust multiclass SVM formulation that directly aims at minimizing a weighted and truncated combination of the ordered prediction scores. Our method includes many previous works as special cases. Computationally, using the Jordan decomposition Lemma we show how to rewrite our objective as the difference of two convex functions, based on which we develop an efficient algorithm that allows incorporating many popular regularizers (such as the l2 and l1 norms). We conduct extensive experiments on four real large-scale visual category recognition datasets, and obtain very promising performances.",
"title": ""
},
{
"docid": "6a2e6492695beab2c0a6d479bffd65e1",
"text": "Electroencephalogram (EEG) signal based emotion recognition, as a challenging pattern recognition task, has attracted more and more attention in recent years and widely used in medical, Affective Computing and other fields. Traditional approaches often lack of the high-level features and the generalization ability is poor, which are difficult to apply to the practical application. In this paper, we proposed a novel model for multi-subject emotion classification. The basic idea is to extract the high-level features through the deep learning model and transform traditional subject-independent recognition tasks into multi-subject recognition tasks. Experiments are carried out on the DEAP dataset, and our results demonstrate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "0e9e6c1f21432df9dfac2e7205105d46",
"text": "This paper summarises the COSET shared task organised as part of the IberEval workshop. The aim of this task is to classify the topic discussed in a tweet into one of five topics related to the Spanish 2015 electoral cycle. A new dataset was curated for this task and hand-labelled by experts on the task. Moreover, the results of the 17 participants of the task and a review of their proposed systems are presented. In a second phase evaluation, we provided the participants with 15.8 millions tweets in order to test the scalability of their systems.",
"title": ""
},
{
"docid": "ca2a5f0699d4746240376ad771f1af47",
"text": "Massively multiplayer online games (MMOGs) can be fascinating laboratories to observe group dynamics online. In particular, players must form persistent associations or \"guilds\" to coordinate their actions and accomplish the games' toughest objectives. Managing a guild, however, is notoriously difficult and many do not survive very long. In this paper, we examine some of the factors that could explain the success or failure of a game guild based on more than a year of data collected from five World of Warcraft servers. Our focus is on structural properties of these groups, as represented by their social networks and other variables. We use this data to discuss what games can teach us about group dynamics online and, in particular, what tools and techniques could be used to better support gaming communities.",
"title": ""
},
{
"docid": "a9de4aa3f0268f23d77f882425afbcd5",
"text": "This paper describes a CMOS-based time-of-flight depth sensor and presents some experimental data while addressing various issues arising from its use. Our system is a single-chip solution based on a special CMOS pixel structure that can extract phase information from the received light pulses. The sensor chip integrates a 64x64 pixel array with a high-speed clock generator and ADC. A unique advantage of the chip is that it can be manufactured with an ordinary CMOS process. Compared with other types of depth sensors reported in the literature, our solution offers significant advantages, including superior accuracy, high frame rate, cost effectiveness and a drastic reduction in processing required to construct the depth maps. We explain the factors that determine the resolution of our system, discuss various problems that a time-of-flight depth sensor might face, and propose practical solutions.",
"title": ""
},
{
"docid": "5b763dbb9f06ff67e44b5d38920e92bf",
"text": "With the growing popularity of the internet, everything is available at our doorstep and convenience. The rapid increase in e-commerce applications has resulted in the increased usage of the credit card for offline and online payments. Though there are various benefits of using credit cards such as convenience, instant cash, but when it comes to security credit card holders, banks, and the merchants are affected when the card is being stolen, lost or misused without the knowledge of the cardholder (Fraud activity). Streaming analytics is a time-based processing of data and it is used to enable near real-time decision making by inspecting, correlating and analyzing the data even as it is streaming into applications and database from myriad different sources. We are making use of streaming analytics to detect and prevent the credit card fraud. Rather than singling out specific transactions, our solution analyses the historical transaction data to model a system that can detect fraudulent patterns. This model is then used to analyze transactions in real-time.",
"title": ""
},
{
"docid": "e84ff3f37e049bd649a327366a4605f9",
"text": "Once thought of as a technology restricted primarily to the scientific community, High-performance Computing (HPC) has now been established as an important value creation tool for the enterprises. Predominantly, the enterprise HPC is fueled by the needs for high-performance data analytics (HPDA) and large-scale machine learning – trades instrumental to business growth in today’s competitive markets. Cloud computing, characterized by the paradigm of on-demand network access to computational resources, has great potential of bringing HPC capabilities to a broader audience. Clouds employing traditional lossy network technologies, however, at large, have not proved to be sufficient for HPC applications. Both the traditional HPC workloads and HPDA require high predictability, large bandwidths, and low latencies, features which combined are not readily available using best-effort cloud networks. On the other hand, lossless interconnection networks commonly deployed in HPC systems, lack the flexibility needed for dynamic cloud environments. In this thesis, we identify and address research challenges that hinder the realization of an efficient HPC cloud computing platform, utilizing the InfiniBand interconnect as a demonstration technology. In particular, we address challenges related to efficient routing, load-balancing, low-overhead virtualization, performance isolation, and fast network reconfiguration, all to improve the utilization and flexibility of the underlying interconnect of an HPC cloud. In addition, we provide a framework to realize a self-adaptive network architecture for HPC clouds, offering dynamic and autonomic adaptation of the underlying interconnect according to varying traffic patterns, resource availability, workload distribution, and also in accordance with service provider defined policies. The work presented in this thesis helps bridging the performance gap between the cloud and traditional HPC infrastructures; the thesis provides practical solutions to enable an efficient, flexible, multi-tenant HPC network suitable for high-performance cloud computing.",
"title": ""
},
{
"docid": "6f9be23c91dafa2cbc3f60a56a415c36",
"text": "Bayesian treatment of matrix factorization has been successfully applied to the problem of collaborative prediction, where unknown ratings are determined by the predictive distribution, inferring posterior distributions over user and item factor matrices that are used to approximate the user-item matrix as their product. In practice, however, Bayesian matrix factorization suffers from cold-start problems, where inferences are required for users or items about which a sufficient number of ratings are not gathered. In this paper we present a method for Bayesian matrix factorization with side information, to handle cold-start problems. To this end, we place Gaussian-Wishart priors on mean vectors and precision matrices of Gaussian user and item factor matrices, such that mean of each prior distribution is regressed on corresponding side information. We develop variational inference algorithms to approximately compute posterior distributions over user and item factor matrices. In addition, we provide Bayesian Cramér-Rao Bound for our model, showing that the hierarchical Bayesian matrix factorization with side information improves the reconstruction over the standard Bayesian matrix factorization where the side information is not used. Experiments on MovieLens data demonstrate the useful behavior of our model in the case of cold-start problems.",
"title": ""
},
{
"docid": "3668a5a14ea32471bd34a55ff87b45b5",
"text": "This paper proposes a method to separate polyphonic music signal into signals of each musical instrument by NMF: Non-negative Matrix Factorization based on preservation of spectrum envelope. Sound source separation is taken as a fundamental issue in music signal processing and NMF is becoming common to solve it because of its versatility and compatibility with music signal processing. Our method bases on a common feature of harmonic signal: spectrum envelopes of musical signal in close pitches played by the harmonic music instrument would be similar. We estimate power spectrums of each instrument by NMF with restriction to synchronize spectrum envelope of bases which are allocated to all possible center frequencies of each instrument. This manipulation means separation of components which refers to tones of each instrument and realizes both of separation without pre-training and separation of signal including harmonic and non-harmonic sound. We had an experiment to decompose mixture sound signal of MIDI instruments into each instrument and evaluated the result by SNR of single MIDI instrument sound signals and separated signals. As a result, SNR of lead guitar and drums approximately marked 3.6 and 6.0 dB and showed significance of our method.",
"title": ""
},
{
"docid": "5443a07fe5f020972cbdce8f5996a550",
"text": "The training of severely disabled individuals on the use of electric power wheelchairs creates many challenges, particularly in the case of children. The adjustment of equipment and training on a per-patient basis in an environment with limited specialists and resources often leads to a reduced amount of training time per patient. Virtual reality rehabilitation has recently been proven an effective way to supplement patient rehabilitation, although some important challenges remain including high setup/equipment costs and time-consuming continual adjustments to the simulation as patients improve. We propose a design for a flexible, low-cost rehabilitation system that uses virtual reality training and games to engage patients in effective instruction on the use of powered wheelchairs. We also propose a novel framework based on Bayesian networks for self-adjusting adaptive training in virtual rehabilitation environments. Preliminary results from a user evaluation and feedback from our rehabilitation specialist collaborators support the effectiveness of our approach.",
"title": ""
},
{
"docid": "bdae5947d44e14ba49ffa5b10e5345df",
"text": "As the technology is developing with a huge rate, the functionality of smartphone is also getting higher. But the smartphones have some resource constraints like processing power, battery capacity, limited bandwidth for connecting to the Internet, etc. Therefore, to improve the performance of smartphone in terms of processing power, battery and memory, the technology namely, augmented execution is the best solution in the mobile cloud computing (MCC) scenario. Mobile cloud computing works as the combination of mobile computing and cloud computing. Augmented execution alleviates the problem of resource scarcity of smartphone. To get the benefits from the resource-abundant clouds, massive computation intensive tasks are partitioned and migrated to the cloud side for the execution. After executing the task at the cloud side, the results are sent back to the smartphone. This method is called as the computation offloading. The given survey paper focuses on the partitioning techniques in mobile cloud computing.",
"title": ""
},
{
"docid": "62efd4c3e2edc5d8124d5c926484d79b",
"text": "OBJECTIVE\nResearch studies show that social media may be valuable tools in the disease surveillance toolkit used for improving public health professionals' ability to detect disease outbreaks faster than traditional methods and to enhance outbreak response. A social media work group, consisting of surveillance practitioners, academic researchers, and other subject matter experts convened by the International Society for Disease Surveillance, conducted a systematic primary literature review using the PRISMA framework to identify research, published through February 2013, answering either of the following questions: Can social media be integrated into disease surveillance practice and outbreak management to support and improve public health?Can social media be used to effectively target populations, specifically vulnerable populations, to test an intervention and interact with a community to improve health outcomes?Examples of social media included are Facebook, MySpace, microblogs (e.g., Twitter), blogs, and discussion forums. For Question 1, 33 manuscripts were identified, starting in 2009 with topics on Influenza-like Illnesses (n = 15), Infectious Diseases (n = 6), Non-infectious Diseases (n = 4), Medication and Vaccines (n = 3), and Other (n = 5). For Question 2, 32 manuscripts were identified, the first in 2000 with topics on Health Risk Behaviors (n = 10), Infectious Diseases (n = 3), Non-infectious Diseases (n = 9), and Other (n = 10).\n\n\nCONCLUSIONS\nThe literature on the use of social media to support public health practice has identified many gaps and biases in current knowledge. Despite the potential for success identified in exploratory studies, there are limited studies on interventions and little use of social media in practice. However, information gleaned from the articles demonstrates the effectiveness of social media in supporting and improving public health and in identifying target populations for intervention. A primary recommendation resulting from the review is to identify opportunities that enable public health professionals to integrate social media analytics into disease surveillance and outbreak management practice.",
"title": ""
}
] |
scidocsrr
|
0da49d505b8f9ae7159387be8707995b
|
Single Image Action Recognition Using Semantic Body Part Actions
|
[
{
"docid": "cf5829d1bfa1ae243bbf67776b53522d",
"text": "There are multiple cues in an image which reveal what action a person is performing. For example, a jogger has a pose that is characteristic for jogging, but the scene (e.g. road, trail) and the presence of other joggers can be an additional source of information. In this work, we exploit the simple observation that actions are accompanied by contextual cues to build a strong action recognition system. We adapt RCNN to use more than one region for classification while still maintaining the ability to localize the action. We call our system R*CNN. The action-specific models and the feature maps are trained jointly, allowing for action specific representations to emerge. R*CNN achieves 90.2% mean AP on the PASAL VOC Action dataset, outperforming all other approaches in the field by a significant margin. Last, we show that R*CNN is not limited to action recognition. In particular, R*CNN can also be used to tackle fine-grained tasks such as attribute classification. We validate this claim by reporting state-of-the-art performance on the Berkeley Attributes of People dataset.",
"title": ""
}
] |
[
{
"docid": "be3f18e5fbaf3ad45976ca867698a4bc",
"text": "Widespread adoption of internet technologies has changed the way that news is created and consumed. The current online news environment is one that incentivizes speed and spectacle in reporting, at the cost of fact-checking and verification. The line between user generated content and traditional news has also become increasingly blurred. This poster reviews some of the professional and cultural issues surrounding online news and argues for a two-pronged approach inspired by Hemingway’s “automatic crap detector” (Manning, 1965) in order to address these problems: a) proactive public engagement by educators, librarians, and information specialists to promote digital literacy practices; b) the development of automated tools and technologies to assist journalists in vetting, verifying, and fact-checking, and to assist news readers by filtering and flagging dubious information.",
"title": ""
},
{
"docid": "40e55e77a59e3ed63ae0a86b0c832f32",
"text": "Decision tree is an important method for both induction research and data mining, which is mainly used for model classification and prediction. ID3 algorithm is the most widely used algorithm in the decision tree so far. Through illustrating on the basic ideas of decision tree in data mining, in this paper, the shortcoming of ID3's inclining to choose attributes with many values is discussed, and then a new decision tree algorithm combining ID3 and Association Function(AF) is presented. The experiment results show that the proposed algorithm can overcome ID3's shortcoming effectively and get more reasonable and effective rules",
"title": ""
},
{
"docid": "c2d41a58c4c11dd65f5f8e5215be7655",
"text": "We present the task of second language acquisition (SLA) modeling. Given a history of errors made by learners of a second language, the task is to predict errors that they are likely to make at arbitrary points in the future. We describe a large corpus of more than 7M words produced by more than 6k learners of English, Spanish, and French using Duolingo, a popular online language-learning app. Then we report on the results of a shared task challenge aimed studying the SLA task via this corpus, which attracted 15 teams and synthesized work from various fields including cognitive science, linguistics, and machine learning.",
"title": ""
},
{
"docid": "cc7c3b21f189d53ba3525d02d95d25c9",
"text": "A polarization reconfigurable slot antenna with a novel coplanar waveguide (CPW)-to-slotline transition for wireless local area networks (WLANs) is proposed and tested. The antenna consists of a square slot, a reconfigurable CPW-to-slotline transition, and two p-i-n diodes. No extra matching structure is needed for modes transiting, which makes it much more compact than all reference designs. The -10 dB bandwidths of an antenna with an implemented bias circuit are 610 (25.4%) and 680 MHz (28.3%) for vertical and horizontal polarizations, respectively. The radiation pattern and gain of the proposed antenna are also tested, and the radiation pattern data were compared to simulation results.",
"title": ""
},
{
"docid": "798e7781345a88acdd2f3d388a03802d",
"text": "Measuring the similarity between nominal variables is an important problem in data mining. It's the base to measure the similarity of data objects which contain nominal variables. There are two kinds of traditional methods for this task, the first one simply distinguish variables by same or not same while the second one measures the similarity based on co-occurrence with variables of other attributes. Though they perform well in some conditions, but are still not enough in accuracy. This paper proposes an algorithm to measure the similarity between nominal variables of the same attribute based on the fact that the similarity between nominal variables depends on the relationship between subsets which hold them in the same dataset. This algorithm use the difference of the distribution which is quantified by f-divergence to form feature vector of nominal variables. The theoretical analysis helps to choose the best metric from four most common used forms of f-divergence. Time complexity of the method is linear with the size of dataset and it makes this method suitable for processing the large-scale data. The experiments which use the derived similarity metrics with K-modes on extensive UCI datasets demonstrate the effectiveness of our proposed method.",
"title": ""
},
{
"docid": "9bc182298ad6158dbb5de4da15353312",
"text": "We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization. Spectral Inference Networks generalize Slow Feature Analysis to generic symmetric operators, and are closely related to Variational Monte Carlo methods from computational physics. As such, they can be a powerful tool for unsupervised representation learning from video or pairs of data. We derive a training algorithm for Spectral Inference Networks that addresses the bias in the gradients due to finite batch size and allows for online learning of multiple eigenfunctions. We show results of training Spectral Inference Networks on problems in quantum mechanics and feature learning for videos on synthetic datasets as well as the Arcade Learning Environment. Our results demonstrate that Spectral Inference Networks accurately recover eigenfunctions of linear operators, can discover interpretable representations from video and find meaningful subgoals in reinforcement learning environments.",
"title": ""
},
{
"docid": "9126eda46fe299bc3067bace979cdf5e",
"text": "This paper considers the intersection of technology and play through the novel approach of gamification and its application to early years education. The intrinsic connection between play and technology is becoming increasingly significant in early years education. By creating an awareness of the early years adoption of technology into guiding frameworks, and then exploring the makeup of gaming elements, this paper draws connections for guiding principles in adopting more technology-focused play opportunities for Generation Alpha.",
"title": ""
},
{
"docid": "74c6600ea1027349081c08c687119ee3",
"text": "Segmentation of clitics has been shown to improve accuracy on a variety of Arabic NLP tasks. However, state-of-the-art Arabic word segmenters are either limited to formal Modern Standard Arabic, performing poorly on Arabic text featuring dialectal vocabulary and grammar, or rely on linguistic knowledge that is hand-tuned for each dialect. We extend an existing MSA segmenter with a simple domain adaptation technique and new features in order to segment informal and dialectal Arabic text. Experiments show that our system outperforms existing systems on broadcast news and Egyptian dialect, improving segmentation F1 score on a recently released Egyptian Arabic corpus to 92.09%, compared to 91.60% for another segmenter designed specifically for Egyptian Arabic.",
"title": ""
},
{
"docid": "d3ae7f70b1d3fb1fbbf5fe9cd1a33bc8",
"text": "Due to significant advances in SAT technology in the last years, its use for solving constraint satisfaction problems has been gaining wide acceptance. Solvers for satisfiability modulo theories (SMT) generalize SAT solving by adding the ability to handle arithmetic and other theories. Although there are results pointing out the adequacy of SMT solvers for solving CSPs, there are no available tools to extensively explore such adequacy. For this reason, in this paper we introduce a tool for translating FLATZINC (MINIZINC intermediate code) instances of CSPs to the standard SMT-LIB language. We provide extensive performance comparisons between state-of-the-art SMT solvers and most of the available FLATZINC solvers on standard FLATZINC problems. The obtained results suggest that state-of-the-art SMT solvers can be effectively used to solve CSPs.",
"title": ""
},
{
"docid": "a9975365f0bad734b77b67f63bdf7356",
"text": "Most existing models for multilingual natural language processing (NLP) treat language as a discrete category, and make predictions for either one language or the other. In contrast, we propose using continuous vector representations of language. We show that these can be learned efficiently with a character-based neural language model, and used to improve inference about language varieties not seen during training. In experiments with 1303 Bible translations into 990 different languages, we empirically explore the capacity of multilingual language models, and also show that the language vectors capture genetic relationships between languages.",
"title": ""
},
{
"docid": "b191b9829aac1c1e74022c33e2488bbd",
"text": "We investigated the normal and parallel ground reaction forces during downhill and uphill running. Our rationale was that these force data would aid in the understanding of hill running injuries and energetics. Based on a simple spring-mass model, we hypothesized that the normal force peaks, both impact and active, would increase during downhill running and decrease during uphill running. We anticipated that the parallel braking force peaks would increase during downhill running and the parallel propulsive force peaks would increase during uphill running. But, we could not predict the magnitude of these changes. Five male and five female subjects ran at 3m/s on a force treadmill mounted on the level and on 3 degrees, 6 degrees, and 9 degrees wedges. During downhill running, normal impact force peaks and parallel braking force peaks were larger compared to the level. At -9 degrees, the normal impact force peaks increased by 54%, and the parallel braking force peaks increased by 73%. During uphill running, normal impact force peaks were smaller and parallel propulsive force peaks were larger compared to the level. At +9 degrees, normal impact force peaks were absent, and parallel propulsive peaks increased by 75%. Neither downhill nor uphill running affected normal active force peaks. Combined with previous biomechanics studies, our normal impact force data suggest that downhill running substantially increases the probability of overuse running injury. Our parallel force data provide insight into past energetic studies, which show that the metabolic cost increases during downhill running at steep angles.",
"title": ""
},
{
"docid": "ca5eaacea8702798835ca585200b041d",
"text": "ccupational Health Psychology concerns the application of psychology to improving the quality of work life and to protecting and promoting the safety, health, and well-being of workers. Contrary to what its name suggests, Occupational Health Psychology has almost exclusively dealt with ill health and poor wellbeing. For instance, a simple count reveals that about 95% of all articles that have been published so far in the leading Journal of Occupational Health Psychology have dealt with negative aspects of workers' health and well-being, such as cardiovascular disease, repetitive strain injury, and burnout. In contrast, only about 5% of the articles have dealt with positive aspects such as job satisfaction, commitment, and motivation. However, times appear to be changing. Since the beginning of this century, more attention has been paid to what has been coined positive psychology: the scientific study of human strength and optimal functioning. This approach is considered to supplement the traditional focus of psychology on psychopathology, disease, illness, disturbance, and malfunctioning. The emergence of positive (organizational) psychology has naturally led to the increasing popularity of positive aspects of health and well-being in Occupational Health Psychology. One of these positive aspects is work engagement, which is considered to be the antithesis of burnout. While burnout is usually defined as a syndrome of exhaustion, cynicism, and reduced professional efficacy, engagement is defined as a positive, fulfilling, work-related state of mind that is characterized by vigor, dedication, and absorption. Engaged employees have a sense of energetic and effective connection with their work activities. Since this new concept was proposed by Wilmar Schaufeli (Utrecht University, the Netherlands) in 2001, 93 academic articles mainly focusing on the measurement of work engagement and its possible antecedents and consequences have been published (see www.schaufeli.com). In addition, major international academic conferences organized by the International Commission on Occupational 171",
"title": ""
},
{
"docid": "0b1b4c8d501c3b1ab350efe4f2249978",
"text": "Motivated by formation control of multiple non-holonomic mobile robots, this paper presents a trajectory tracking control scheme design for nonholonomic mobile robots that are equipped with low-level linear and angular velocities control systems. The design includes a nonlinear kinematic trajectory tracking control law and a tracking control gains selection method that provide a means to implement the nonlinear tracking control law systematically based on the dynamic control performance of the robot's low-level control systems. In addition, the proposed scheme, by design, enables the mobile robot to execute reference trajectories that are represented by time-parameterized waypoints. This feature provides the scheme a generic interface with higher-level trajectory planners. The trajectory tracking control scheme is validated using an iRobot Packbot's parameteric model estimated from experimental data.",
"title": ""
},
{
"docid": "f48d87cb95488bba0c7e903e8bc20726",
"text": "We address the problem of generating multiple hypotheses for structured prediction tasks that involve interaction with users or successive components in a cascaded architecture. Given a set of multiple hypotheses, such components/users typically have the ability to retrieve the best (or approximately the best) solution in this set. The standard approach for handling such a scenario is to first learn a single-output model and then produce M -Best Maximum a Posteriori (MAP) hypotheses from this model. In contrast, we learn to produce multiple outputs by formulating this task as a multiple-output structured-output prediction problem with a loss-function that effectively captures the setup of the problem. We present a max-margin formulation that minimizes an upper-bound on this lossfunction. Experimental results on image segmentation and protein side-chain prediction show that our method outperforms conventional approaches used for this type of scenario and leads to substantial improvements in prediction accuracy.",
"title": ""
},
{
"docid": "5aed256aaca0a1f2fe8a918e6ffb62bd",
"text": "Zero-shot learning (ZSL) enables solving a task without the need to see its examples. In this paper, we propose two ZSL frameworks that learn to synthesize parameters for novel unseen classes. First, we propose to cast the problem of ZSL as learning manifold embeddings from graphs composed of object classes, leading to a flexible approach that synthesizes “classifiers” for the unseen classes. Then, we define an auxiliary task of synthesizing “exemplars” for the unseen classes to be used as an automatic denoising mechanism for any existing ZSL approaches or as an effective ZSL model by itself. On five visual recognition benchmark datasets, we demonstrate the superior performances of our proposed frameworks in various scenarios of both conventional and generalized ZSL. Finally, we provide valuable insights through a series of empirical analyses, among which are a comparison of semantic representations on the full ImageNet benchmark as well as a comparison of metrics used in generalized ZSL. Our code and data are publicly available at https: //github.com/pujols/Zero-shot-learning-journal. Soravit Changpinyo Google AI E-mail: [email protected] Wei-Lun Chao Cornell University, Department of Computer Science E-mail: [email protected] Boqing Gong Tencent AI Lab E-mail: [email protected] Fei Sha University of Southern California, Department of Computer Science E-mail: [email protected]",
"title": ""
},
{
"docid": "73a02535ca36f6233319536f70975366",
"text": "Structured decorative patterns are common ornamentations in a variety of media like books, web pages, greeting cards and interior design. Creating such art from scratch using conventional software is time consuming for experts and daunting for novices. We introduce DecoBrush, a data-driven drawing system that generalizes the conventional digital \"painting\" concept beyond the scope of natural media to allow synthesis of structured decorative patterns following user-sketched paths. The user simply selects an example library and draws the overall shape of a pattern. DecoBrush then synthesizes a shape in the style of the exemplars but roughly matching the overall shape. If the designer wishes to alter the result, DecoBrush also supports user-guided refinement via simple drawing and erasing tools. For a variety of example styles, we demonstrate high-quality user-constrained synthesized patterns that visually resemble the exemplars while exhibiting plausible structural variations.",
"title": ""
},
{
"docid": "0e37a1a251c97fd88aa2ab3ee9ed422b",
"text": "k-means algorithm and its variations are known to be fast clustering algorithms. However, they are sensitive to the choice of starting points and inefficient for solving clustering problems in large data sets. Recently, a new version of the k-means algorithm, the global k-means algorithm has been developed. It is an incremental algorithm that dynamically adds one cluster center at a time and uses each data point as a candidate for the k-th cluster center. Results of numerical experiments show that the global k-means algorithm considerably outperforms the k-means algorithms. In this paper, a new version of the global k-means algorithm is proposed. A starting point for the k-th cluster center in this algorithm is computed by minimizing an auxiliary cluster function. Results of numerical experiments on 14 data sets demonstrate the superiority of the new algorithm, however, it requires more computational time than the global k-means algorithm.",
"title": ""
},
{
"docid": "bd1523c64d8ec69d87cbe68a4d73ea17",
"text": "BACKGROUND AND OBJECTIVE\nThe effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library.\n\n\nMETHODS\nBased on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. We also provide a methodology to use this library.\n\n\nRESULTS\nWe have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest.\n\n\nCONCLUSIONS\nThe IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems.",
"title": ""
},
{
"docid": "c9fdd453232bc1ebd540624f5c81c65b",
"text": "A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation. This makes BPTT both computationally impractical and biologically implausible. For this reason, full backpropagation through time is rarely used on long sequences, and truncated backpropagation through time is used as a heuristic. However, this usually leads to biased estimates of the gradient in which longer term dependencies are ignored. Addressing this issue, we propose an alternative algorithm, Sparse Attentive Backtracking, which might also be related to principles used by brains to learn long-term dependencies. Sparse Attentive Backtracking learns an attention mechanism over the hidden states of the past and selectively backpropagates through paths with high attention weights. This allows the model to learn long term dependencies while only backtracking for a small number of time steps, not just from the recent past but also from attended relevant past states.",
"title": ""
},
{
"docid": "2da9ad29e0b10a8dc8b01a8faf35bb1a",
"text": "Face recognition is challenge task which involves determining the identity of facial images. With availability of a massive amount of labeled facial images gathered from Internet, deep convolution neural networks(DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrain environment, which contain people with different ethnicity, age, gender and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compered with source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase domain discrepancy between source training database and target application database which makes the learnt model degenerate in target database. Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between source and target face database and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between source database and target database and utilize the massive amount of labeled facial images of source database to training the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target label.",
"title": ""
}
] |
scidocsrr
|
2ec0e3e57d26dae24fababe657861105
|
Cognitive control, hierarchy, and the rostro–caudal organization of the frontal lobes
|
[
{
"docid": "3b06ce783d353cff3cdbd9a60037162e",
"text": "The ability to abstract principles or rules from direct experience allows behaviour to extend beyond specific circumstances to general situations. For example, we learn the ‘rules’ for restaurant dining from specific experiences and can then apply them in new restaurants. The use of such rules is thought to depend on the prefrontal cortex (PFC) because its damage often results in difficulty in following rules. Here we explore its neural basis by recording from single neurons in the PFC of monkeys trained to use two abstract rules. They were required to indicate whether two successively presented pictures were the same or different depending on which rule was currently in effect. The monkeys performed this task with new pictures, thus showing that they had learned two general principles that could be applied to stimuli that they had not yet experienced. The most prevalent neuronal activity observed in the PFC reflected the coding of these abstract rules.",
"title": ""
}
] |
[
{
"docid": "59b8ee881f8d458ee3d5a42ef2db662f",
"text": "For object detection, the two-stage approach (e.g., Faster R-CNN) has been achieving the highest accuracy, whereas the one-stage approach (e.g., SSD) has the advantage of high efficiency. To inherit the merits of both while overcoming their disadvantages, in this paper, we propose a novel single-shot based detector, called RefineDet, that achieves better accuracy than two-stage methods and maintains comparable efficiency of one-stage methods. RefineDet consists of two inter-connected modules, namely, the anchor refinement module and the object detection module. Specifically, the former aims to (1) filter out negative anchors to reduce search space for the classifier, and (2) coarsely adjust the locations and sizes of anchors to provide better initialization for the subsequent regressor. The latter module takes the refined anchors as the input from the former to further improve the regression accuracy and predict multi-class label. Meanwhile, we design a transfer connection block to transfer the features in the anchor refinement module to predict locations, sizes and class labels of objects in the object detection module. The multitask loss function enables us to train the whole network in an end-to-end way. Extensive experiments on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO demonstrate that RefineDet achieves state-of-the-art detection accuracy with high efficiency. Code is available at https://github.com/sfzhang15/RefineDet.",
"title": ""
},
{
"docid": "61ff8f4f212aa0a307b228ab48beec77",
"text": "One of the most important features of the Web graph and social networks is that they are constantly evolving. The classical computational paradigm, which assumes a fixed data set as an input to an algorithm that terminates, is inadequate for such settings. In this paper we study the problem of computing PageRank on an evolving graph. We propose an algorithm that, at any moment in the time and by crawling a small portion of the graph, provides an estimate of the PageRank that is close to the true PageRank of the graph at that moment. We will also evaluate our algorithm experimentally on real data sets and on randomly generated inputs. Under a stylized model of graph evolution, we show that our algorithm achieves a provable performance guarantee that is significantly better than the naive algorithm that crawls the nodes in a round-robin fashion.",
"title": ""
},
{
"docid": "0600b610a9ebb3fcd275c5820b37cb5b",
"text": "In this paper, we solve the following data summarization problem: given a multi-dimensional data set augmented with a binary attribute, how can we construct an interpretable and informative summary of the factors affecting the binary attribute in terms of the combinations of values of the dimension attributes? We refer to such summaries as explanation tables. We show the hardness of constructing optimally-informative explanation tables from data, and we propose effective and efficient heuristics. The proposed heuristics are based on sampling and include optimizations related to computing the information content of a summary from a sample of the data. Using real data sets, we demonstrate the advantages of explanation tables compared to related approaches that can be adapted to solve our problem, and we show significant performance benefits of our optimizations.",
"title": ""
},
{
"docid": "47ee1b71ed10b64110b84e5eecf2857c",
"text": "Measurements for future outdoor cellular systems at 28 GHz and 38 GHz were conducted in urban microcellular environments in New York City and Austin, Texas, respectively. Measurements in both line-of-sight and non-line-of-sight scenarios used multiple combinations of steerable transmit and receive antennas (e.g. 24.5 dBi horn antennas with 10.9° half power beamwidths at 28 GHz, 25 dBi horn antennas with 7.8° half power beamwidths at 38 GHz, and 13.3 dBi horn antennas with 24.7° half power beamwidths at 38 GHz) at different transmit antenna heights. Based on the measured data, we present path loss models suitable for the development of fifth generation (5G) standards that show the distance dependency of received power. In this paper, path loss is expressed in easy-to-use formulas as the sum of a distant dependent path loss factor, a floating intercept, and a shadowing factor that minimizes the mean square error fit to the empirical data. The new models are compared with previous models that were limited to using a close-in free space reference distance. Here, we illustrate the differences of the two modeling approaches, and show that a floating intercept model reduces the shadow factors by several dB and offers smaller path loss exponents while simultaneously providing a better fit to the empirical data. The upshot of these new path loss models is that coverage is actually better than first suggested by work in [1], [7] and [8].",
"title": ""
},
{
"docid": "0a0ec569738b90f44b0c20870fe4dc2f",
"text": "Transactional memory provides a concurrency control mechanism that avoids many of the pitfalls of lock-based synchronization. Researchers have proposed several different implementations of transactional memory, broadly classified into software transactional memory (STM) and hardware transactional memory (HTM). Both approaches have their pros and cons: STMs provide rich and flexible transactional semantics on stock processors but incur significant overheads. HTMs, on the other hand, provide high performance but implement restricted semantics or add significant hardware complexity. This paper is the first to propose architectural support for accelerating transactions executed entirely in software. We propose instruction set architecture (ISA) extensions and novel hardware mechanisms that improve STM performance. We adapt a high-performance STM algorithm supporting rich transactional semantics to our ISA extensions (called hardware accelerated software transactional memory or HASTM). HASTM accelerates fully virtualized nested transactions, supports language integration, and provides both object-based and cache-line based conflict detection. We have implemented HASTM in an accurate multi-core IA32 simulator. Our simulation results show that (1) HASTM single-thread performance is comparable to a conventional HTM implementation; (2) HASTM scaling is comparable to a STM implementation; and (3) HASTM is resilient to spurious aborts and can scale better than HTM in a multi-core setting. Thus, HASTM provides the flexibility and rich semantics of STM, while giving the performance of HTM.",
"title": ""
},
{
"docid": "c6aa0e5f93d02fdd07e55dfa62aac6bc",
"text": "While CNNs naturally lend themselves to densely sampled data, and sophisticated implementations are available, they lack the ability to efficiently process sparse data. In this work we introduce a suite of tools that exploit sparsity in both the feature maps and the filter weights, and thereby allow for significantly lower memory footprints and computation times than the conventional dense framework when processing data with a high degree of sparsity. Our scheme provides (i) an efficient GPU implementation of a convolution layer based on direct, sparse convolution; (ii) a filter step within the convolution layer, which we call attention, that prevents fill-in, i.e., the tendency of convolution to rapidly decrease sparsity, and guarantees an upper bound on the computational resources; and (iii) an adaptation of the backpropagation algorithm, which makes it possible to combine our approach with standard learning frameworks, while still exploiting sparsity in the data and the model.",
"title": ""
},
{
"docid": "5d3c40fb9ba76961b19c7bd773644e55",
"text": "Syntactic structures, by their nature, reflect first and foremost the formal constructions used for expressing meanings. This renders them sensitive to formal variation both within and across languages, and limits their value to semantic applications. We present UCCA, a novel multi-layered framework for semantic representation that aims to accommodate the semantic distinctions expressed through linguistic utterances. We demonstrate UCCA’s portability across domains and languages, and its relative insensitivity to meaning-preserving syntactic variation. We also show that UCCA can be effectively and quickly learned by annotators with no linguistic background, and describe the compilation of a UCCAannotated corpus.",
"title": ""
},
{
"docid": "38863f217a610af5378c42e03cd3fe3c",
"text": "In human movement learning, it is most common to teach constituent elements of complex movements in isolation, before chaining them into complex movements. Segmentation and recognition of observed movement could thus proceed out of this existing knowledge, which is directly compatible with movement generation. In this paper, we address exactly this scenario. We assume that a library of movement primitives has already been taught, and we wish to identify elements of the library in a complex motor act, where the individual elements have been smoothed together, and, occasionally, there might be a movement segment that is not in our library yet. We employ a flexible machine learning representation of movement primitives based on learnable nonlinear attractor system. For the purpose of movement segmentation and recognition, it is possible to reformulate this representation as a controlled linear dynamical system. An Expectation-Maximization algorithm can be developed to estimate the open parameters of a movement primitive from the library, using as input an observed trajectory piece. If no matching primitive from the library can be found, a new primitive is created. This process allows a straightforward sequential segmentation of observed movement into known and new primitives, which are suitable for robot imitation learning. We illustrate our approach with synthetic examples and data collected from human movement. Appearing in Proceedings of the 15 International Conference on Artificial Intelligence and Statistics (AISTATS) 2012, La Palma, Canary Islands. Volume XX of JMLR: W&CP XX. Copyright 2012 by the authors.",
"title": ""
},
{
"docid": "38ecb51f7fca71bd47248987866a10d2",
"text": "Machine Translation has been a topic of research from the past many years. Many methods and techniques have been proposed and developed. However, quality of translation has always been a matter of concern. In this paper, we outline a target language generation mechanism with the help of language English-Sanskrit language pair using rule based machine translation technique [1]. Rule Based Machine Translation provides high quality translation and requires in depth knowledge of the language apart from real world knowledge and the differences in cultural background and conceptual divisions. A string of English sentence can be translated into string of Sanskrit ones. The methodology for design and development is implemented in the form of software named as “EtranS”. KeywordsAnalysis, Machine translation, translation theory, Interlingua, language divergence, Sanskrit, natural language processing.",
"title": ""
},
{
"docid": "78a6af6e87f82ac483b213f04b1ce405",
"text": "Data deduplication is one of important data compression techniques for eliminating duplicate copies of repeating data, and has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality been proposed to encrypt the data before outsourcing. To better protect data security, this paper makes the first attempt to formally address the problem of authorized data deduplication. Different from traditional deduplication systems, the differential privileges of users are further considered in duplicate check besides the data itself. We also present several new deduplication constructions supporting authorized duplicate check in a hybrid cloud architecture. Security analysis demonstrates that our scheme is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement a prototype of our proposed authorized duplicate check scheme and conduct testbed experiments using our prototype. We show that our proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.",
"title": ""
},
{
"docid": "809b40cd0089410592d7b7f77f04c8e4",
"text": "This paper presents a new method for segmentation and interpretation of 3D point clouds from mobile LIDAR data. The main contribution of this work is the automatic detection and classification of artifacts located at the ground level. The detection is based on Top-Hat of hole filling algorithm of range images. Then, several features are extracted from the detected connected components (CCs). Afterward, a stepwise forward variable selection by using Wilk's Lambda criterion is performed. Finally, CCs are classified in four categories (lampposts, pedestrians, cars, the others) by using a SVM machine learning method.",
"title": ""
},
{
"docid": "fe0863027b80e28fe8c20cff5781a547",
"text": "We describe the implementation and use of a reverse compiler from Analog Devices 21xx assembler source to ANSI-C (with optional use of the language extensions for the TMS320C6x processors) which has been used to port substantial applications. The main results of this work are that reverse compilation is feasible and that some of the features that make small DSP's hard to compile for actually assist the process of reverse compilation compared to that of a general purpose processor. We present statistics on the occurrence of non-statically visible features of hand-written assembler code and look at the quality of the code generated by an optimising ANSI-C compiler from our reverse compiled source and compare it to code generated from conventionally authored ANSI-C programs.",
"title": ""
},
{
"docid": "4aa57effeb552b916d77f2d9ee9f36c5",
"text": "G protein-coupled receptors (GPCRs) represent the largest family of transmembrane receptors and are prime therapeutic targets. The odorant and taste receptors account for over half of the GPCR repertoire, yet they are generally excluded from large-scale, drug candidate analyses. Accumulating molecular evidence indicates that the odorant and taste receptors are widely expressed throughout the body and functional beyond the oronasal cavity - with roles including nutrient sensing, autophagy, muscle regeneration, regulation of gut motility, protective airway reflexes, bronchodilation, and respiratory disease. Given this expanding array of actions, the restricted perception of these GPCRs as mere mediators of smell and taste is outdated. Moreover, delineation of the precise actions of odorant and taste GPCRs continues to be hampered by the relative paucity of selective and specific experimental tools, as well as the lack of defined receptor pharmacology. In this review, we summarize the evidence for expression and function of odorant and taste receptors in tissues beyond the nose and mouth, and we highlight their broad potential in physiology and pathophysiology.",
"title": ""
},
{
"docid": "9065f203a7efd45d2b928f3fd6be3876",
"text": "•An interaction between warfarin and cannabidiol is described•The mechanisms of cannabidiol and warfarin metabolism are reviewed•Mechanism of the interaction is proposed•INR should be monitored in patients when cannabinoids are introduced.",
"title": ""
},
{
"docid": "aa1c565018371cf12e703e06f430776b",
"text": "We propose a graph-based semantic model for representing document content. Our method relies on the use of a semantic network, namely the DBpedia knowledge base, for acquiring fine-grained information about entities and their semantic relations, thus resulting in a knowledge-rich document model. We demonstrate the benefits of these semantic representations in two tasks: entity ranking and computing document semantic similarity. To this end, we couple DBpedia's structure with an information-theoretic measure of concept association, based on its explicit semantic relations, and compute semantic similarity using a Graph Edit Distance based measure, which finds the optimal matching between the documents' entities using the Hungarian method. Experimental results show that our general model outperforms baselines built on top of traditional methods, and achieves a performance close to that of highly specialized methods that have been tuned to these specific tasks.",
"title": ""
},
{
"docid": "54bf44e04920bdaa7388dbbbbd34a1a8",
"text": "TIDs have been detected using various measurement techniques, including HF sounders, incoherent scatter radars, in-situ measurements, and optical techniques. However, there is still much we do not yet know or understand about TIDs. Observations of TIDs have tended to be sparse, and there is a need for additional observations to provide new scientific insight into the geophysical source phenomenology and wave propagation physics. The dense network of GPS receivers around the globe offers a relatively new data source to observe and monitor TIDs. In this paper, we use Total Electron Content (TEC) measurements from 4000 GPS receivers throughout the continental United States to observe TIDs associated with the 11 March 2011 Tohoku tsunami. The tsunami propagated across the Pacific to the US west coast over several hours, and corresponding TIDs were observed over Hawaii, and via the network of GPS receivers in the US. The network of GPS receivers in effect provides a 2D spatial map of TEC perturbations, which can be used to calculate TID parameters, including horizontal wavelength, speed, and period. Well-formed, planar traveling ionospheric disturbances were detected over the west coast of the US ten hours after the earthquake. Fast Fourier transform analysis of the observed waveforms revealed that the period of the wave was 15.1 minutes with a horizontal wavelength of 194.8 km, phase velocity of 233.0 m/s, and an azimuth of 105.2 (propagating nearly due east in the direction of the tsunami wave). These results are consistent with TID observations in airglow measurements from Hawaii earlier in the day, and with other GPS TEC observations. The vertical wavelength of the TID was found to be 43.5 km. The TIDs moved at the same velocity as the tsunami itself. Much work is still needed in order to fully understand the ocean-atmosphere coupling mechanisms, which could lead to the development of effective tsunami detection/warning systems. The work presented in this paper demonstrates a technique for the study of ionospheric perturbations that can affect navigation, communications and surveillance systems.",
"title": ""
},
{
"docid": "4028f1cd20127f3c6599e6073bb1974b",
"text": "This paper presents a power delivery monitor (PDM) peripheral integrated in a flip-chip packaged 28 nm system-on-chip (SoC) for mobile computing. The PDM is composed entirely of digital standard cells and consists of: 1) a fully integrated VCO-based digital sampling oscilloscope; 2) a synthetic current load; and 3) an event engine for triggering, analysis, and debug. Incorporated inside an SoC, it enables rapid, automated analysis of supply impedance, as well as monitoring supply voltage droop of multi-core CPUs running full software workloads and during scan-test operations. To demonstrate these capabilities, we describe a power integrity case study of a dual-core ARM Cortex-A57 cluster in a commercial 28 nm mobile SoC. Measurements are presented of power delivery network (PDN) electrical parameters, along with waveforms of the CPU cluster running test cases and benchmarks on bare metal and Linux OS. The effect of aggressive power management techniques, such as power gating on the dominant resonant frequency and peak impedance, is highlighted. Finally, we present measurements of supply voltage noise during various scan-test operations, an often-neglected aspect of SoC power integrity.",
"title": ""
},
{
"docid": "7bb1d856e5703afb571cf781d48ce403",
"text": "RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction.",
"title": ""
},
{
"docid": "5d5e42cdb2521c5712b372acaf7fb25a",
"text": "Unsupervised anomaly detection on multior high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. Experimental results on several public benchmark datasets show that, DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score.",
"title": ""
},
{
"docid": "37501837b77c336d01f751a0a2fafd1d",
"text": "Brain-inspired Hyperdimensional (HD) computing emulates cognition tasks by computing with hypervectors rather than traditional numerical values. In HD, an encoder maps inputs to high dimensional vectors (hypervectors) and combines them to generate a model for each existing class. During inference, HD performs the task of reasoning by looking for similarities of the input hypervector and each pre-stored class hypervector However, there is not a unique encoding in HD which can perfectly map inputs to hypervectors. This results in low HD classification accuracy over complex tasks such as speech recognition. In this paper we propose MHD, a multi-encoder hierarchical classifier, which enables HD to take full advantages of multiple encoders without increasing the cost of classification. MHD consists of two HD stages: a main stage and a decider stage. The main stage makes use of multiple classifiers with different encoders to classify a wide range of input data. Each classifier in the main stage can trade between efficiency and accuracy by dynamically varying the hypervectors' dimensions. The decider stage, located before the main stage, learns the difficulty of the input data and selects an encoder within the main stage that will provide the maximum accuracy, while also maximizing the efficiency of the classification task. We test the accuracy/efficiency of the proposed MHD on speech recognition application. Our evaluation shows that MHD can provide a 6.6× improvement in energy efficiency and a 6.3× speedup, as compared to baseline single level HD.",
"title": ""
}
] |
scidocsrr
|
c5905b05ffa2ba05bbf7760ee78d5d5c
|
Off-grid electricity generation with renewable energy technologies in India: An application of HOMER
|
[
{
"docid": "b9ca95f39dffa8c0d75f713708b576cd",
"text": "Renewable energy sources are gradually being recognized as important options in supply side planning for microgrids. This paper focuses on the optimal design, planning, sizing and operation of a hybrid, renewable energy based microgrid with the goal of minimizing the lifecycle cost, while taking into account environmental emissions. Four different cases including a diesel-only, a fully renewable-based, a diesel-renewable mixed, and an external grid-connected microgrid configurations are designed, to compare and evaluate their economics, operational performance and environmental emissions. Analysis is also carried out to determine the break-even economics for a grid-connected microgrid. The wellknown energy modeling software for hybrid renewable energy systems, HOMER is used in the studies reported in this paper. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "444a6e64bfc9a76a9ef6d122e746e457",
"text": "When performing tasks, humans are thought to adopt task sets that configure moment-to-moment data processing. Recently developed mixed blocked/event-related designs allow task set-related signals to be extracted in fMRI experiments, including activity related to cues that signal the beginning of a task block, \"set-maintenance\" activity sustained for the duration of a task block, and event-related signals for different trial types. Data were conjointly analyzed from mixed design experiments using ten different tasks and 183 subjects. Dorsal anterior cingulate cortex/medial superior frontal cortex (dACC/msFC) and bilateral anterior insula/frontal operculum (aI/fO) showed reliable start-cue and sustained activations across all or nearly all tasks. These regions also carried the most reliable error-related signals in a subset of tasks, suggesting that the regions form a \"core\" task-set system. Prefrontal regions commonly related to task control carried task-set signals in a smaller subset of tasks and lacked convergence across signal types.",
"title": ""
},
{
"docid": "546f96600d90107ed8262ad04274b012",
"text": "Large-scale labeled training datasets have enabled deep neural networks to excel on a wide range of benchmark vision tasks. However, in many applications it is prohibitively expensive or timeconsuming to obtain large quantities of labeled data. To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled target domain. Unfortunately, direct transfer across domains often performs poorly due to domain shift and dataset bias. Domain adaptation is the machine learning paradigm that aims to learn a model from a source domain that can perform well on a different (but related) target domain. In this paper, we summarize and compare the latest unsupervised domain adaptation methods in computer vision applications. We classify the non-deep approaches into sample re-weighting and intermediate subspace transformation categories, while the deep strategy includes discrepancy-based methods, adversarial generative models, adversarial discriminative models and reconstruction-based methods. We also discuss some potential directions.",
"title": ""
},
{
"docid": "059e8e43e6e57565e2aa319c1d248a3b",
"text": "BACKGROUND\nWhile depression is known to involve a disturbance of mood, movement and cognition, its associated cognitive deficits are frequently viewed as simple epiphenomena of the disorder.\n\n\nAIMS\nTo review the status of cognitive deficits in depression and their putative neurobiological underpinnings.\n\n\nMETHOD\nSelective computerised review of the literature examining cognitive deficits in depression and their brain correlates.\n\n\nRESULTS\nRecent studies report both mnemonic deficits and the presence of executive impairment--possibly selective for set-shifting tasks--in depression. Many studies suggest that these occur independent of age, depression severity and subtype, task 'difficulty', motivation and response bias: some persist upon clinical 'recovery'.\n\n\nCONCLUSIONS\nMnemonic and executive deficits do no appear to be epiphenomena of depressive disorder. A focus on the interactions between motivation, affect and cognitive function may allow greater understanding of the interplay between key aspects of the dorsal and ventral aspects of the prefrontal cortex in depression.",
"title": ""
},
{
"docid": "a64847d15292f9758a337b8481bc7814",
"text": "This paper studies the use of tree edit distance for pattern matching of abstract syntax trees of images generated with tree picture grammars. This was done with a view to measuring its effectiveness in determining image similarity, when compared to current state of the art similarity measures used in Content Based Image Retrieval (CBIR). Eight computer based similarity measures were selected for their diverse methodology and effectiveness. The eight visual descriptors and tree edit distance were tested against some of the images from our corpus of thousands of syntactically generated images. The first and second sets of experiments showed that tree edit distance and Spacial Colour Distribution (SpCD) are the most suited for determining similarity of syntactically generated images. A third set of experiments was performed with tree edit distance and SpCD only. Results obtained showed that while both of them performed well in determining similarity of the generated images, the tree edit distance is better able to detect more subtle human observable image differences than SpCD. Also, tree edit distance more closely models the generative sequence of these tree picture grammars.",
"title": ""
},
{
"docid": "3132ed8b0f2e257c3e9e8b0a716cd72c",
"text": "Auditory evoked potentials were recorded from the vertex of subjects who listened selectively to a series of tone pips in one ear and ignored concurrent tone pips in the other ear. The negative component of the evoked potential peaking at 80 to 110 milliseconds was substantially larger for the attended tones. This negative component indexed a stimulus set mode of selective attention toward the tone pips in one ear. A late positive component peaking at 250 to 400 milliseconds reflected the response set established to recognize infrequent, higher pitched tone pips in the attended series.",
"title": ""
},
{
"docid": "9b0ddf08b06c625ea579d9cee6c8884b",
"text": "A frequency-reconfigurable bow-tie antenna for Bluetooth, WiMAX, and WLAN applications is proposed. The bow-tie radiator is printed on two sides of the substrate and is fed by a microstripline continued by a pair of parallel strips. By embedding p-i-n diodes over the bow-tie arms, the effective electrical length of the antenna can be changed, leading to an electrically tunable operating band. The simple biasing circuit used in this design eliminates the need for extra bias lines, and thus avoids distortion of the radiation patterns. Measured results are in good agreement with simulations, which shows that the proposed antenna can be tuned to operate in either 2.2-2.53, 2.97-3.71, or 4.51-6 GHz band with similar radiation patterns.",
"title": ""
},
{
"docid": "43071b49420f14d9c2affe3c12e229ae",
"text": "The Gatekeeper is a vision-based door security system developed at the MIT Artificial Intelligence Laboratory. Faces are detected in a real-time video stream using an efficient algorithmic approach, and are recognized using principal component analysis with class specific linear projection. The system sends commands to an automatic sliding door, speech synthesizer, and touchscreen through a multi-client door control server. The software for the Gatekeeper was written using a set of tools created by the author to facilitate the development of real-time machine vision applications in Matlab, C, and Java.",
"title": ""
},
{
"docid": "320c5bf641fa348cd1c8fb806558fe68",
"text": "A CMOS low-dropout regulator (LDO) with 3.3 V output voltage and 100 mA output current for system-on-chip applications is presented. The proposed LDO is independent of off-chip capacitor, thus the board space and external pins are reduced. By utilizing dynamic slew-rate enhancement (SRE) circuit and nested Miller compensation (NMC) on LDO structure, the proposed LDO provides high stability during line and load regulation without off-chip load capacitor. The overshot voltage has been limited within 550 mV and settling time is less than 50 mus when load current reducing from 100 mA to 1 mA. By using 30 nA reference current, the quiescent current is 3.3 muA. The experiment results agree with the simulation results. The proposed design is implemented by CSMC 0.5 mum mixed-signal process.",
"title": ""
},
{
"docid": "67714032417d9c04d0e75897720ad90a",
"text": "Artificial Intelligence has always lent a helping hand to the practitioners of medicine for improving medical diagnosis and treatment then, paradigm of artificial neural networks is shortly introduced and the main problems of medical data base and the basic approaches for training and testing a network by medical data are described. A lot of Applications tried to help human experts, offering a solution. This paper describes a optimal feed forward Back propagation algorithm. Feedforward back propagation neural network is used as a classifier to distinguish between infected or non-infected person in both cases. However, Traditional Back propagation algorithm has many shortcomings. Learning often takes long time to converge, and it may fall into local minima. One of the possible remedies to escape from local minima is by using a very small learning rate, which slows down the learning process. The back propagation algorithm presented in this paper used for training depends on a multilayer neural network with a very small learning rate, especially when using a large training set size. It can be applied in a generic manner for any network size that uses a back propagation algorithm and achieved the best performance with the minimum epoch (training iterations) and training time. Keywords— Artificial Neural Network, Back propagation algorithm, Medical Diagnosis, Neural Networks.",
"title": ""
},
{
"docid": "aaa7da397279fc5b17a110b1e5d56cb0",
"text": "This study evaluates whether focusing on using specific muscles during bench press can selectively activate these muscles. Altogether 18 resistance-trained men participated. Subjects were familiarized with the procedure and performed one-maximum repetition (1RM) test during the first session. In the second session, 3 different bench press conditions were performed with intensities of 20, 40, 50, 60 and 80 % of the pre-determined 1RM: regular bench press, and bench press focusing on selectively using the pectoralis major and triceps brachii, respectively. Surface electromyography (EMG) signals were recorded for the triceps brachii and pectoralis major muscles. Subsequently, peak EMG of the filtered signals were normalized to maximum maximorum EMG of each muscle. In both muscles, focusing on using the respective muscles increased muscle activity at relative loads between 20 and 60 %, but not at 80 % of 1RM. Overall, a threshold between 60 and 80 % rather than a linear decrease in selective activation with increasing intensity appeared to exist. The increased activity did not occur at the expense of decreased activity of the other muscle, e.g. when focusing on activating the triceps muscle the activity of the pectoralis muscle did not decrease. On the contrary, focusing on using the triceps muscle also increased pectoralis EMG at 50 and 60 % of 1RM. Resistance-trained individuals can increase triceps brachii or pectarilis major muscle activity during the bench press when focusing on using the specific muscle at intensities up to 60 % of 1RM. A threshold between 60 and 80 % appeared to exist.",
"title": ""
},
{
"docid": "e8638ac34f416ac74e8e77cdc206ef04",
"text": "The modular multilevel converter (M2C) has become an increasingly important topology in medium- and high-voltage applications. A limitation is that it relies on positive and negative half-cycles of the ac output voltage waveform to achieve charge balance on the submodule capacitors. To overcome this constraint a secondary power loop is introduced that exchanges power with the primary power loops at the input and output. Power is exchanged between the primary and secondary loops by using the principle of orthogonality of power flow at different frequencies. Two modular multilevel topologies are proposed to step up or step down dc in medium- and high-voltage dc applications: the tuned filter modular multilevel dc converter and the push-pull modular multilevel dc converter. An analytical simulation of the latter converter is presented to explain the operation.",
"title": ""
},
{
"docid": "6fb50b6f34358cf3229bd7645bf42dcd",
"text": "With the in-depth study of sentiment analysis research, finer-grained opinion mining, which aims to detect opinions on different review features as opposed to the whole review level, has been receiving more and more attention in the sentiment analysis research community recently. Most of existing approaches rely mainly on the template extraction to identify the explicit relatedness between product feature and opinion terms, which is insufficient to detect the implicit review features and mine the hidden sentiment association in reviews, which satisfies (1) the review features are not appear explicit in the review sentences; (2) it can be deduced by the opinion words in its context. From an information theoretic point of view, this paper proposed an iterative reinforcement framework based on the improved information bottleneck algorithm to address such problem. More specifically, the approach clusters product features and opinion words simultaneously and iteratively by fusing both their semantic information and co-occurrence information. The experimental results demonstrate that our approach outperforms the template extraction based approaches.",
"title": ""
},
{
"docid": "16915e2da37f8cd6fa1ce3a4506223ff",
"text": "In recent years, significant progress has been made in solving challenging problems across various domains using deep reinforcement learning (RL). Reproducing existing work and accurately judging the improvements offered by novel methods is vital to sustaining this progress. Unfortunately, reproducing results for state-of-the-art deep RL methods is seldom straightforward. In particular, non-determinism in standard benchmark environments, combined with variance intrinsic to the methods, can make reported results tough to interpret. Without significance metrics and tighter standardization of experimental reporting, it is difficult to determine whether improvements over the prior state-of-the-art are meaningful. In this paper, we investigate challenges posed by reproducibility, proper experimental techniques, and reporting procedures. We illustrate the variability in reported metrics and results when comparing against common baselines and suggest guidelines to make future results in deep RL more reproducible. We aim to spur discussion about how to ensure continued progress in the field by minimizing wasted effort stemming from results that are non-reproducible and easily misinterpreted.",
"title": ""
},
{
"docid": "545cd566c3563c7c8f8ab39d044b46d6",
"text": "We present a sequential model for temporal relation classification between intrasentence events. The key observation is that the overall syntactic structure and compositional meanings of the multi-word context between events are important for distinguishing among fine-grained temporal relations. Specifically, our approach first extracts a sequence of context words that indicates the temporal relation between two events, which well align with the dependency path between two event mentions. The context word sequence, together with a parts-of-speech tag sequence and a dependency relation sequence that are generated corresponding to the word sequence, are then provided as input to bidirectional recurrent neural network (LSTM) models. The neural nets learn compositional syntactic and semantic representations of contexts surrounding the two events and predict the temporal relation between them. Evaluation of the proposed approach on TimeBank corpus shows that sequential modeling is capable of accurately recognizing temporal relations between events, which outperforms a neural net model using various discrete features as input that imitates previous feature based models.",
"title": ""
},
{
"docid": "caa252bbfad7ab5c989ae7687818f8ae",
"text": "Nowadays, GPU accelerators are widely used in areas with large data-parallel computations such as scientific computations or neural networks. Programmers can either write code in low-level CUDA/OpenCL code or use a GPU extension for a high-level programming language for better productivity. Most extensions focus on statically-typed languages, but many programmers prefer dynamically-typed languages due to their simplicity and flexibility. \n This paper shows how programmers can write high-level modular code in Ikra, a Ruby extension for array-based GPU computing. Programmers can compose GPU programs of multiple reusable parallel sections, which are subsequently fused into a small number of GPU kernels. We propose a seamless syntax for separating code regions that extensively use dynamic language features from those that are compiled for efficient execution. Moreover, we propose symbolic execution and a program analysis for kernel fusion to achieve performance that is close to hand-written CUDA code.",
"title": ""
},
{
"docid": "eba1168ad00ff93a8b62bbd8bc6d4b8d",
"text": "Multiple (external) representations can provide unique benefits when people are learning complex new ideas. Unfortunately, many studies have shown this promise is not always achieved. The DeFT (Design, Functions, Tasks) framework for learning with multiple representations integrates research on learning, the cognitive science of representation and constructivist theories of education. It proposes that the effectiveness of multiple representations can best be understood by considering three fundamental aspects of learning: the design parameters that are unique to learning with multiple representations; the functions that multiple representations serve in supporting learning and the cognitive tasks that must be undertaken by a learner interacting with multiple representations. The utility of this framework is proposed to be in identifying a broad range of factors that influence learning, reconciling inconsistent experimental findings, revealing under-explored areas of multi-representational research and pointing forward to potential design heuristics for learning with multiple representations. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "36e6b7bfa7043cfc97b189dc652a3461",
"text": "We propose CiteTextRank, a fully unsupervised graph-based algorithm that incorporates evidence from multiple sources (citation contexts as well as document content) in a flexible manner to extract keyphrases. General steps for algorithms for unsupervised keyphrase extraction: 1. Extract candidate words or lexical units from the textual content of the target document by applying stopword and parts-of-speech filters. 2. Score candidate words based on some criterion.",
"title": ""
},
{
"docid": "6d61da17db5c16611409356bd79006c4",
"text": "We examine empirical evidence for religious prosociality, the hypothesis that religions facilitate costly behaviors that benefit other people. Although sociological surveys reveal an association between self-reports of religiosity and prosociality, experiments measuring religiosity and actual prosocial behavior suggest that this association emerges primarily in contexts where reputational concerns are heightened. Experimentally induced religious thoughts reduce rates of cheating and increase altruistic behavior among anonymous strangers. Experiments demonstrate an association between apparent profession of religious devotion and greater trust. Cross-cultural evidence suggests an association between the cultural presence of morally concerned deities and large group size in humans. We synthesize converging evidence from various fields for religious prosociality, address its specific boundary conditions, and point to unresolved questions and novel predictions.",
"title": ""
},
{
"docid": "4bfac9df41641b88fb93f382202c6e85",
"text": "The objective was to evaluate the clinical efficacy of chemomechanical preparation of the root canals with sodium hypochlorite and interappointment medication with calcium hydroxide in the control of root canal infection and healing of periapical lesions. Fifty teeth diagnosed with chronic apical periodontitis were randomly allocated to one of three treatments: Single visit (SV group, n = 20), calcium hydroxide for one week (CH group n = 18), or leaving the canal empty but sealed for one week (EC group, n = 12). Microbiological samples were taken to monitor the infection during treatment. Periapical healing was controlled radiographically following the change in the periapical index at 52 wk and analyzed using one-way ANOVA. All cases showed microbiological growth in the beginning of the treatment. After mechanical preparation and irrigation with sodium hypochlorite in the first appointment, 20 to 33% of the cases showed growth. At the second appointment 33% of the cases in the CH group revealed bacteria, whereas the EC group showed remarkably more culture positive cases (67%). Sodium hypochlorite was effective also at the second appointment and only two teeth remained culture positive. Only minor differences in periapical healing were observed between the treatment groups. However, bacterial growth at the second appointment had a significant negative impact on healing of the periapical lesion (p < 0.01). The present study indicates good clinical efficacy of sodium hypochlorite irrigation in the control of root canal infection. Calcium hydroxide dressing between the appointments did not show the expected effect in disinfection the root canal system and treatment outcome, indicating the need to develop more efficient inter-appointment dressings.",
"title": ""
},
{
"docid": "733d55884f7807b3957716a36b323d2b",
"text": "We demonstrate that Schh onhage storage modiication machines are equivalent , in a strong sense, to unary abstract state machines. We also show t h a t i f one extends the Schh onhage model with a pairing function and removes the unary restriction , then equivalence between the two machine models survives.",
"title": ""
}
] |
scidocsrr
|
af8bd81a8b77cbc2da8fcc4bb8c58337
|
Recursive symmetries for geometrically complex and materially heterogeneous additive manufacturing
|
[
{
"docid": "b57229646d21f8fac2e06b2a6b724782",
"text": "This paper proposes a unified and consistent set of flexible tools to approximate important geometric attributes, including normal vectors and curvatures on arbitrary triangle meshes. We present a consistent derivation of these first and second order differential properties using averaging Voronoi cells and the mixed Finite-Element/Finite-Volume method, and compare them to existing formulations. Building upon previous work in discrete geometry, these operators are closely related to the continuous case, guaranteeing an appropriate extension from the continuous to the discrete setting: they respect most intrinsic properties of the continuous differential operators. We show that these estimates are optimal in accuracy under mild smoothness conditions, and demonstrate their numerical quality. We also present applications of these operators, such as mesh smoothing, enhancement, and quality checking, and show results of denoising in higher dimensions, such as for tensor images.",
"title": ""
},
{
"docid": "6c51618edf4bc0872da39c188ea7e0a9",
"text": "The representation of geometric objects based on volumetric data structures has advantages in many geometry processing applications that require, e.g., fast surface interrogation or boolean operations such as intersection and union. However, surface based algorithms like shape optimization (fairing) or freeform modeling often need a topological manifold representation where neighborhood information within the surface is explicitly available. Consequently, it is necessary to find effective conversion algorithms to generate explicit surface descriptions for the geometry which is implicitly defined by a volumetric data set. Since volume data is usually sampled on a regular grid with a given step width, we often observe severe alias artifacts at sharp features on the extracted surfaces. In this paper we present a new technique for surface extraction that performs feature sensitive sampling and thus reduces these alias effects while keeping the simple algorithmic structure of the standard Marching Cubes algorithm. We demonstrate the effectiveness of the new technique with a number of application examples ranging from CSG modeling and simulation to surface reconstruction and remeshing of polygonal models.",
"title": ""
}
] |
[
{
"docid": "ea05ced84ebdb18e1d80c9ef5744153a",
"text": "Biometrics refers to automatic identification of a person based on his or her physiological or behavioral characteristics which provide a reliable and secure user authentication for the increased security requirements of our personal information compared to traditional identification methods such as passwords and PINs (Jain et al., 2000). Organizations are looking to automate identity authentication systems to improve customer satisfaction and operating efficiency as well as to save critical resources due to the fact that identity fraud in welfare disbursements, credit card transactions, cellular phone calls, and ATM withdrawals totals over $6 billion each year (Jain et al., 1998). Furthermore, as people become more connected electronically, the ability to achieve a highly accurate automatic personal identification system is substantially more critical. Enormous change has occurred in the world of embedded systems driven by the advancement on the integrated circuit technology and the availability of open source. This has opened new challenges and development of advanced embedded system. This scenario is manifested in the appearance of sophisticated new products such as PDAs and cell phones and by the continual increase in the amount of resources that can be packed into a small form factor which requires significant high end skills and knowledge. More people are gearing up to acquire advanced skills and knowledge to keep abreast of the technologies to build advanced embedded system using available Single Board Computer (SBC) with 32 bit architectures.",
"title": ""
},
{
"docid": "49585da1d2c3102683e73dddb830ba36",
"text": "The knowledge pyramid has been used for several years to illustrate the hierarchical relationships between data, information, knowledge, and wisdom. This paper posits that the knowledge pyramid is too basic and fails to represent reality and presents a revised knowledge pyramid. One key difference is that the revised knowledge pyramid includes knowledge management as an extraction of reality with a focus on organizational learning. The model also posits that newer initiatives such as business and/or customer intelligence are the result of confusion in understanding the traditional knowledge pyramid that is resolved in the revised knowledge pyramid.",
"title": ""
},
{
"docid": "362bd9e95f9b0304fa95a647a8a7ee45",
"text": "Cluster labelling is a technique which provides useful information about the cluster to the end users. In this paper, we propose a novel approach which is the follow-up of our previous work. Our earlier approach generates clusters of web documents by using a modified apriori approach which is more efficient and faster than the traditional apriori approach. To label the clusters, the propose approach used an effective feature selection technique which selects the top features of a cluster. Rather than labelling the cluster with ‘bag of words’, a concept driven mechanism has been developed which uses the Wikipedia that takes the top features of a cluster as input to generate the possible candidate labels. Mutual information (MI) score technique has been used for ranking the candidate labels and then the topmost candidates are considered as potential labels of a cluster. Experimental results on two benchmark datasets demonstrate the efficiency of our approach.",
"title": ""
},
{
"docid": "e9939b00b96b816fc6125bffc39c3a1d",
"text": "Fifteen experimental English language question-answering I systems which are programmed and operating are described ) arid reviewed. The systems range from a conversation machine ~] to programs which make sentences about pictures and systems s~ which translate from English into logical calculi. Systems are ~ classified as list-structured data-based, graphic data-based, ~! text-based and inferential. Principles and methods of opera~4 tions are detailed and discussed. It is concluded that the data-base question-answerer has > passed from initial research into the early developmental ~.4 phase. The most difficult and important research questions for ~i~ the advancement of general-purpose language processors are seen to be concerned with measuring meaning, dealing with ambiguities, translating into formal languages and searching large tree structures.",
"title": ""
},
{
"docid": "205c0c94d3f2dbadbc7024c9ef868d97",
"text": "Solid dispersions (SD) of curcuminpolyvinylpyrrolidone in the ratio of 1:2, 1:4, 1:5, 1:6, and 1:8 were prepared in an attempt to increase the solubility and dissolution. Solubility, dissolution, powder X-ray diffraction (XRD), differential scanning calorimetry (DSC) and Fourier transform infrared spectroscopy (FTIR) of solid dispersions, physical mixtures (PM) and curcumin were evaluated. Both solubility and dissolution of curcumin solid dispersions were significantly greater than those observed for physical mixtures and intact curcumin. The powder X-ray diffractograms indicated that the amorphous curcumin was obtained from all solid dispersions. It was found that the optimum weight ratio for curcumin:PVP K-30 is 1:6. The 1:6 solid dispersion still in the amorphous from after storage at ambient temperature for 2 years and the dissolution profile did not significantly different from freshly prepared. Keywords—Curcumin, polyvinylpyrrolidone K-30, solid dispersion, dissolution, physicochemical.",
"title": ""
},
{
"docid": "b401c0a7209d98aea517cf0e28101689",
"text": "This paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Our approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. Our contribution is to constrain the transformation from the input to the output to be locally affine in colorspace, and to express this constraint as a custom fully differentiable energy term. We show that this approach successfully suppresses distortion and yields satisfying photorealistic style transfers in a broad variety of scenarios, including transfer of the time of day, weather, season, and artistic edits.",
"title": ""
},
{
"docid": "62fc80e1eb0f22d470286d1b14dd584b",
"text": "This project examines the level of accuracy that can be achieved in precision positioning by using built-in sensors in an Android smartphone. The project is focused in estimating the position of the phone inside a building where the GPS signal is bad or unavailable. The approach is sensor-fusion: by using data from the device’s different sensors, such as accelerometer, gyroscope and wireless adapter, the position is determined. The results show that the technique is promising for future handheld indoor navigation systems that can be used in malls, museums, large office buildings, hospitals, etc.",
"title": ""
},
{
"docid": "6a2fa5998bf51eb40c1fd2d8f3dd8277",
"text": "In this paper, we propose a new descriptor for texture classification that is robust to image blurring. The descriptor utilizes phase information computed locally in a window for every image position. The phases of the four low-frequency coefficients are decorrelated and uniformly quantized in an eight-dimensional space. A histogram of the resulting code words is created and used as a feature in texture classification. Ideally, the low-frequency phase components are shown to be invariant to centrally symmetric blur. Although this ideal invariance is not completely achieved due to the finite window size, the method is still highly insensitive to blur. Because only phase information is used, the method is also invariant to uniform illumination changes. According to our experiments, the classification accuracy of blurred texture images is much higher with the new method than with the well-known LBP or Gabor filter bank methods. Interestingly, it is also slightly better for textures that are not blurred.",
"title": ""
},
{
"docid": "d1c88428d398caba2dc9a8f79f84a45f",
"text": "In this article, a novel compact reconfigurable antenna based on substrate integrated waveguide (SIW) technology is introduced. The geometry of the proposed antennas is symmetric with respect to the horizontal center line. The electrical shape of the antenna is composed of double H-plane SIW based horn antennas and radio frequency micro electro mechanical system (RF-MEMS) actuators. The RF-MEMS actuators are integrated in the planar structure of the antenna for reconfiguring the radiation pattern by adding nulls to the pattern. The proper activation/deactivation of the switches alters the modes distributed in the structure and changes the radiation pattern. When different combinations of switches are on or off, the radiation patterns have 2, 4, 6, 8, . . . nulls with nearly similar operating frequencies. The attained peak gain of the proposed antenna is higher than 5 dB at any point on the far field radiation pattern except at the null positions. The design procedure and closed form formulation are provided for analytical determination of the antenna parameters. Moreover, the designed antenna with an overall dimensions of only 63.6 × 50 mm2 is fabricated and excited through standard SMA connector and compared with the simulated results. The measured results show that the antenna can clearly alters its beams using the switching components. The proposed antenna retains advantages of low cost, low cross-polarized radiation, and easy integration of configuration.",
"title": ""
},
{
"docid": "4421a42fc5589a9b91215b68e1575a3f",
"text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.",
"title": ""
},
{
"docid": "bfdce194fbcbbf3ed8d8251ea253b0de",
"text": "Unlike traditional machine learning methods, humans often learn from natural language instruction. As users become increasingly accustomed to interacting with mobile devices using speech, their interest in instructing these devices in natural language is likely to grow. We introduce our Learning by Instruction Agent (LIA), an intelligent personal agent that users can teach to perform new action sequences to achieve new commands, using solely natural language interaction. LIA uses a CCG semantic parser to ground the semantics of each command in terms of primitive executable procedures defining sensors and effectors of the agent. Given a natural language command that LIA does not understand, it prompts the user to explain how to achieve the command through a sequence of steps, also specified in natural language. A novel lexicon induction algorithm enables LIA to generalize across taught commands, e.g., having been taught how to “forward an email to Alice,” LIA can correctly interpret the command “forward this email to Bob.” A user study involving email tasks demonstrates that users voluntarily teach LIA new commands, and that these taught commands significantly reduce task completion time. These results demonstrate the potential of natural language instruction as a significant, under-explored paradigm for machine",
"title": ""
},
{
"docid": "5a397012744d958bb1a69b435c73e666",
"text": "We introduce a method to generate whole body motion of a humanoid robot such that the resulted total linear/angular momenta become specified values. First, we derive a linear equation which gives the total momentum of a robot from its physical parameters, the base link speed and the joint speeds. Constraints between the legs and the environment are also considered. The whole body motion is calculated from a given momentum reference by using a pseudo-inverse of the inertia matrix. As examples, we generated the kicking and walking motions and tested on the actual humanoid robot HRP-2. This method, the Resolved Momentum Control, gives us a unified framework to generate various maneuver of humanoid robots.",
"title": ""
},
{
"docid": "986b23f5c2a9df55c2a8c915479a282a",
"text": "Recurrent neural network language models (RNNLM) have recently demonstrated vast potential in modelling long-term dependencies for NLP problems, ranging from speech recognition to machine translation. In this work, we propose methods for conditioning RNNLMs on external side information, e.g., metadata such as keywords or document title. Our experiments show consistent improvements of RNNLMs using side information over the baselines for two different datasets and genres in two languages. Interestingly, we found that side information in a foreign language can be highly beneficial in modelling texts in another language, serving as a form of cross-lingual language modelling.",
"title": ""
},
{
"docid": "872d1f216a463b354221be8b68d35d96",
"text": "Table 2 – Results of the proposed method for different voting schemes and variants compared to a method from the literature Diet management is a key factor for the prevention and treatment of diet-related chronic diseases. Computer vision systems aim to provide automated food intake assessment using meal images. We propose a method for the recognition of food items in meal images using a deep convolutional neural network (CNN) followed by a voting scheme. Our approach exploits the outstanding descriptive ability of a CNN, while the patch-wise model allows the generation of sufficient training samples, provides additional spatial flexibility for the recognition and ignores background pixels.",
"title": ""
},
{
"docid": "4fa7f7f723c2f2eee4c0e2c294273c74",
"text": "Tracking human vital signs of breathing and heart rates during sleep is important as it can help to assess the general physical health of a person and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., Polysomnography (PSG)) are limited to clinic usage. Recent radio frequency (RF) based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this work, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system re-uses existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heart beats. Our system thus has the potential to be widely deployed and perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both time and frequency domain to estimate breathing and heart rates, and it works well when either individual or two persons are in bed. Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings, and achieve comparable or even better performance comparing to traditional and existing approaches, which is a strong indication of providing non-invasive, continuous fine-grained vital signs monitoring without any additional cost.",
"title": ""
},
{
"docid": "09afc5d9ed3b56b7cb748d6e5bd124e2",
"text": "A wideband circularly polarized reconfigurable microstrip patch antenna fed by L-shaped probes is presented. Right hand circular polarization and left hand circular polarization could be excited by L-shaped probes feeding a perturbed square patch. The L-shaped probes are connected to a switch which is fabricated underneath the ground plane, such that circularly polarized radiation pattern reconfiguration could be realized. An antenna prototype was fabricated and it attains a bandwidth of over 10% with both SWR < 2 and axial ratio < 3 dB.",
"title": ""
},
{
"docid": "d52a178526eac0438757c20c5a91e51e",
"text": "Recent convolutional neural networks, especially end-to-end disparity estimation models, achieve remarkable performance on stereo matching task. However, existed methods, even with the complicated cascade structure, may fail in the regions of non-textures, boundaries and tiny details. Focus on these problems, we propose a multi-task network EdgeStereo that is composed of a backbone disparity network and an edge sub-network. Given a binocular image pair, our model enables end-to-end prediction of both disparity map and edge map. Basically, we design a context pyramid to encode multi-scale context information in disparity branch, followed by a compact residual pyramid for cascaded refinement. To further preserve subtle details, our EdgeStereo model integrates edge cues by feature embedding and edge-aware smoothness loss regularization. Comparative results demonstrates that stereo matching and edge detection can help each other in the unified model. Furthermore, our method achieves state-of-art performance on both KITTI Stereo and Scene Flow benchmarks, which proves the effectiveness of our design.",
"title": ""
},
{
"docid": "9e15118bd0317faee30c18e0710c8327",
"text": "We aim at developing autonomous miniature hovering flying robots capable of navigating in unstructured GPSdenied environments. A major challenge is the miniaturization of the embedded sensors and processors allowing such platforms to fly autonomously. In this paper, we propose a novel ego-motion estimation algorithm for hovering robots equipped with inertial and optic-flow sensors that runs in realtime on a microcontroller. Unlike many vision-based methods, this algorithm does not rely on feature tracking, structure estimation, additional distance sensors or assumptions about the environment. Key to this method is the introduction of the translational optic-flow direction constraint (TOFDC), which does not use the optic-flow scale, but only its direction to correct for inertial sensor drift during changes of direction. This solution requires comparatively much simpler electronics and sensors and works in environments of any geometries. We demonstrate the implementation of this algorithm on a miniature 46g quadrotor for closed-loop position control.",
"title": ""
},
{
"docid": "a0d1b5c1745fb676163c36644041bafa",
"text": "ive 2.8 3.1 3.3 5.0% Our System 3.6 4.8 4.2 18.0% Human Abstract (reference) 4.2 4.8 4.5 65.5% Sample Summaries • Movie: The Neverending Story • Human: A magical journey about the power of a young boy’s imagination to save a dying fantasy land, The Neverending Story remains a much-loved kids adventure. • LexRank: It pokes along at times and lapses occasionally into dark moments of preachy philosophy, but this is still a charming, amusing and harmless film for kids. • Opinosis: The Neverending Story is a silly fantasy movie that often shows its age . • Our System: The Neverending Story is an entertaining children’s adventure, with heart and imagination to spare.",
"title": ""
},
{
"docid": "d047231a67ca02c525d174b315a0838d",
"text": "The goal of this article is to review the progress of three-electron spin qubits from their inception to the state of the art. We direct the main focus towards the exchange-only qubit (Bacon et al 2000 Phys. Rev. Lett. 85 1758-61, DiVincenzo et al 2000 Nature 408 339) and its derived versions, e.g. the resonant exchange (RX) qubit, but we also discuss other qubit implementations using three electron spins. For each three-spin qubit we describe the qubit model, the envisioned physical realization, the implementations of single-qubit operations, as well as the read-out and initialization schemes. Two-qubit gates and decoherence properties are discussed for the RX qubit and the exchange-only qubit, thereby completing the list of requirements for quantum computation for a viable candidate qubit implementation. We start by describing the full system of three electrons in a triple quantum dot, then discuss the charge-stability diagram, restricting ourselves to the relevant subsystem, introduce the qubit states, and discuss important transitions to other charge states (Russ et al 2016 Phys. Rev. B 94 165411). Introducing the various qubit implementations, we begin with the exchange-only qubit (DiVincenzo et al 2000 Nature 408 339, Laird et al 2010 Phys. Rev. B 82 075403), followed by the RX qubit (Medford et al 2013 Phys. Rev. Lett. 111 050501, Taylor et al 2013 Phys. Rev. Lett. 111 050502), the spin-charge qubit (Kyriakidis and Burkard 2007 Phys. Rev. B 75 115324), and the hybrid qubit (Shi et al 2012 Phys. Rev. Lett. 108 140503, Koh et al 2012 Phys. Rev. Lett. 109 250503, Cao et al 2016 Phys. Rev. Lett. 116 086801, Thorgrimsson et al 2016 arXiv:1611.04945). The main focus will be on the exchange-only qubit and its modification, the RX qubit, whose single-qubit operations are realized by driving the qubit at its resonant frequency in the microwave range similar to electron spin resonance. Two different types of two-qubit operations are presented for the exchange-only qubits which can be divided into short-ranged and long-ranged interactions. Both of these interaction types are expected to be necessary in a large-scale quantum computer. The short-ranged interactions use the exchange coupling by placing qubits next to each other and applying exchange-pulses (DiVincenzo et al 2000 Nature 408 339, Fong and Wandzura 2011 Quantum Inf. Comput. 11 1003, Setiawan et al 2014 Phys. Rev. B 89 085314, Zeuch et al 2014 Phys. Rev. B 90 045306, Doherty and Wardrop 2013 Phys. Rev. Lett. 111 050503, Shim and Tahan 2016 Phys. Rev. B 93 121410), while the long-ranged interactions use the photons of a superconducting microwave cavity as a mediator in order to couple two qubits over long distances (Russ and Burkard 2015 Phys. Rev. B 92 205412, Srinivasa et al 2016 Phys. Rev. B 94 205421). The nature of the three-electron qubit states each having the same total spin and total spin in z-direction (same Zeeman energy) provides a natural protection against several sources of noise (DiVincenzo et al 2000 Nature 408 339, Taylor et al 2013 Phys. Rev. Lett. 111 050502, Kempe et al 2001 Phys. Rev. A 63 042307, Russ and Burkard 2015 Phys. Rev. B 91 235411). The price to pay for this advantage is an increase in gate complexity. We also take into account the decoherence of the qubit through the influence of magnetic noise (Ladd 2012 Phys. Rev. B 86 125408, Mehl and DiVincenzo 2013 Phys. Rev. B 87 195309, Hung et al 2014 Phys. Rev. 
B 90 045308), in particular dephasing due to the presence of nuclear spins, as well as dephasing due to charge noise (Medford et al 2013 Phys. Rev. Lett. 111 050501, Taylor et al 2013 Phys. Rev. Lett. 111 050502, Shim and Tahan 2016 Phys. Rev. B 93 121410, Russ and Burkard 2015 Phys. Rev. B 91 235411, Fei et al 2015 Phys. Rev. B 91 205434), fluctuations of the energy levels on each dot due to noisy gate voltages or the environment. Several techniques are discussed which partly decouple the qubit from magnetic noise (Setiawan et al 2014 Phys. Rev. B 89 085314, West and Fong 2012 New J. Phys. 14 083002, Rohling and Burkard 2016 Phys. Rev. B 93 205434) while for charge noise it is shown that it is favorable to operate the qubit on the so-called '(double) sweet spots' (Taylor et al 2013 Phys. Rev. Lett. 111 050502, Shim and Tahan 2016 Phys. Rev. B 93 121410, Russ and Burkard 2015 Phys. Rev. B 91 235411, Fei et al 2015 Phys. Rev. B 91 205434, Malinowski et al 2017 arXiv: 1704.01298), which are least susceptible to noise, thus providing a longer lifetime of the qubit.",
"title": ""
}
] |
scidocsrr
|
238edce4e235ab624ed3470fe656eeb6
|
Transfer Learning in Brain-Computer Interfaces with Adversarial Variational Autoencoders
|
[
{
"docid": "69b80da8e9da955cd4514f4d9e648374",
"text": "The performance of brain-computer interfaces (BCIs) improves with the amount of available training data; the statistical distribution of this data, however, varies across subjects as well as across sessions within individual subjects, limiting the transferability of training data or trained models between them. In this article, we review current transfer learning techniques in BCIs that exploit shared structure between training data of multiple subjects and/or sessions to increase performance. We then present a framework for transfer learning in the context of BCIs that can be applied to any arbitrary feature space, as well as a novel regression estimation method that is specifically designed for the structure of a system based on the electroencephalogram (EEG). We demonstrate the utility of our framework and method on subject-to-subject transfer in a motor-imagery paradigm as well as on session-to-session transfer in one patient diagnosed with amyotrophic lateral sclerosis (ALS), showing that it is able to outperform other comparable methods on an identical dataset.",
"title": ""
},
{
"docid": "62a0b14c86df32d889d43eb484eadcda",
"text": "Common spatial pattern (CSP) is a popular feature extraction method for electroencephalogram (EEG) classification. Most of existing CSP-based methods exploit covariance matrices on a subject-by-subject basis so that inter-subject information is neglected. In this paper we present modifications of CSP for subject-to-subject transfer, where we exploit a linear combination of covariance matrices of subjects in consideration. We develop two methods to determine a composite covariance matrix that is a weighted sum of covariance matrices involving subjects, leading to composite CSP. Numerical experiments on dataset IVa in BCI competition III confirm that our composite CSP methods improve classification performance over the standard CSP (on a subject-by-subject basis), especially in the case of subjects with fewer number of training samples.",
"title": ""
},
{
"docid": "c9a2150bc7a0fe419249189eb5a5a53a",
"text": "One of the challenges in modeling cognitive events from electroencephalogram (EEG) data is finding representations that are invariant to interand intra-subject differences, as well as to inherent noise associated with such data. Herein, we propose a novel approach for learning such representations from multi-channel EEG time-series, and demonstrate its advantages in the context of mental load classification task. First, we transform EEG activities into a sequence of topologypreserving multi-spectral images, as opposed to standard EEG analysis techniques that ignore such spatial information. Next, we train a deep recurrent-convolutional network inspired by state-of-the-art video classification to learn robust representations from the sequence of images. The proposed approach is designed to preserve the spatial, spectral, and temporal structure of EEG which leads to finding features that are less sensitive to variations and distortions within each dimension. Empirical evaluation on the cognitive load classification task demonstrated significant improvements in classification accuracy over current state-of-the-art approaches in this field.",
"title": ""
}
] |
[
{
"docid": "d4488867e774e28abc2b960a9434d052",
"text": "Understanding how images of objects and scenes behave in response to specific egomotions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose a new “embodied” visual learning paradigm, exploiting proprioceptive motor signals to train visual representations from egocentric video with no manual supervision. Specifically, we enforce that our learned features exhibit equivariance i.e., they respond predictably to transformations associated with distinct egomotions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.",
"title": ""
},
{
"docid": "ea143354b7b6bcf5fb6b3cfdfba6b062",
"text": "Astaxanthin (1), a red-orange carotenoid pigment, is a powerful biological antioxidant that occurs naturally in a wide variety of living organisms. The potent antioxidant property of 1 has been implicated in its various biological activities demonstrated in both experimental animals and clinical studies. Compound 1 has considerable potential and promising applications in human health and nutrition. In this review, the recent scientific literature (from 2002 to 2005) is covered on the most significant activities of 1, including its antioxidative and anti-inflammatory properties, its effects on cancer, diabetes, the immune system, and ocular health, and other related aspects. We also discuss the green microalga Haematococcus pluvialis, the richest source of natural 1, and its utilization in the promotion of human health, including the antihypertensive and neuroprotective potentials of 1, emphasizing our experimental data on the effects of dietary astaxanthin on blood pressure, stroke, and vascular dementia in animal models, is described.",
"title": ""
},
{
"docid": "cff4bffb3e29f88dddca8b22433c0db6",
"text": "Electronic portal imaging devices (EPIDs) have been the preferred tools for verification of patient positioning for radiotherapy in recent decades. Since EPID images contain dose information, many groups have investigated their use for radiotherapy dose measurement. With the introduction of the amorphous-silicon EPIDs, the interest in EPID dosimetry has been accelerated because of the favourable characteristics such as fast image acquisition, high resolution, digital format, and potential for in vivo measurements and 3D dose verification. As a result, the number of publications dealing with EPID dosimetry has increased considerably over the past approximately 15 years. The purpose of this paper was to review the information provided in these publications. Information available in the literature included dosimetric characteristics and calibration procedures of various types of EPIDs, strategies to use EPIDs for dose verification, clinical approaches to EPID dosimetry, ranging from point dose to full 3D dose distribution verification, and current clinical experience. Quality control of a linear accelerator, pre-treatment dose verification and in vivo dosimetry using EPIDs are now routinely used in a growing number of clinics. The use of EPIDs for dosimetry purposes has matured and is now a reliable and accurate dose verification method that can be used in a large number of situations. Methods to integrate 3D in vivo dosimetry and image-guided radiotherapy (IGRT) procedures, such as the use of kV or MV cone-beam CT, are under development. It has been shown that EPID dosimetry can play an integral role in the total chain of verification procedures that are implemented in a radiotherapy department. It provides a safety net for simple to advanced treatments, as well as a full account of the dose delivered. Despite these favourable characteristics and the vast range of publications on the subject, there is still a lack of commercially available solutions for EPID dosimetry. As strategies evolve and commercial products become available, EPID dosimetry has the potential to become an accurate and efficient means of large-scale patient-specific IMRT dose verification for any radiotherapy department.",
"title": ""
},
{
"docid": "d7cc1619647d83911ad65fac9637ef03",
"text": "We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 4 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures.",
"title": ""
},
{
"docid": "3c548cf1888197545dc8b9cee100039a",
"text": "Williams syndrome is caused by a microdeletion of at least 16 genes on chromosome 7q11.23. The syndrome results in mild to moderate mental retardation or learning disability. The behavioral phenotype for Williams syndrome is characterized by a distinctive cognitive profile and an unusual personality profile. Relative to overall level of intellectual ability, individuals with Williams syndrome typically show a clear strength in auditory rote memory, a strength in language, and an extreme weakness in visuospatial construction. The personality of individuals with Williams syndrome involves high sociability, overfriendliness, and empathy, with an undercurrent of anxiety related to social situations. The adaptive behavior profile for Williams syndrome involves clear strength in socialization skills (especially interpersonal skills related to initiating social interaction), strength in communication, and clear weakness in daily living skills and motor skills, relative to overall level of adaptive behavior functioning. Literature relevant to each of the components of the Williams syndrome behavioral phenotype is reviewed, including operationalizations of the Williams syndrome cognitive profile and the Williams syndrome personality profile. The sensitivity and specificity of these profiles for Williams syndrome, relative to individuals with other syndromes or mental retardation or borderline normal intelligence of unknown etiology, is considered. The adaptive behavior profile is discussed in relation to the cognitive and personality profiles. The importance of operationalizations of crucial components of the behavioral phenotype for the study of genotype/phenotype correlations in Williams syndrome is stressed. MRDD Research Reviews 2000;6:148-158.",
"title": ""
},
{
"docid": "0784c4f87530aab020dbb8f15cba3127",
"text": "As mechanical end-effectors, microgrippers enable the pick–transport–place of micrometer-sized objects, such as manipulation and positioning of biological cells in an aqueous environment. This paper reports on a monolithic MEMS-based microgripper with integrated force feedback along two axes and presents the first demonstration of forcecontrolled micro-grasping at the nanonewton force level. The system manipulates highly deformable biomaterials (porcine interstitial cells) in an aqueous environment using a microgripper that integrates a V-beam electrothermal microactuator and two capacitive force sensors, one for contact detection (force resolution: 38.5 nN) and the other for gripping force measurements (force resolution: 19.9 nN). The MEMS-based microgripper and the force control system experimentally demonstrate the capability of rapid contact detection and reliable force-controlled micrograsping to accommodate variations in size and mechanical properties of objects with a high reproducibility. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "5015d853665e2642add922290b28b685",
"text": "What is CRM Customer relationship Management (CRM) appears to be a simple and straightforward concept, but there are many different definitions and implementations of CRM. At present, a number of different conceptual understandings are associated with the term \"Customer Relationship Management (CRM). There understanding range from IT driven programs designed to optimize customer contact to comprehensive approaches for the establishment and design of long-term relationships. The effort to establish a meaningful relationship with the customer is characteristic of this last understanding (Barnes 2003).",
"title": ""
},
{
"docid": "54260da63de773aa9374ab00917c2977",
"text": "A slew rate controlled output driver adopting delay compensation method is implemented using 0.18 µm CMOS process for storage device interface. Phase-Locked Loop is used to generate compensation current and constant delay time. Compensation current reduces the slew rate variation over process, voltage and temperature variation in output driver. To generate constant delay time, the replica of VCO in PLL is used in output driver's slew rate control block. That reduces the slew rate variation over load capacitance variation. That has less 25% variation at slew rate than that of conventional output driver. The proposed output driver can satisfy UDMA100 interface which specify load capacitance as 15 ∼ 40pF and slew rate as 0.4 ∼ 1.0[V/ns].",
"title": ""
},
{
"docid": "4eda25ffa01bb177a41a1d6d82db6a0c",
"text": "For ontologiesto becost-efectively deployed,we requirea clearunderstandingof thevariouswaysthatontologiesarebeingusedtoday. To achieve this end,we presenta framework for understandingandclassifyingontology applications.We identify four main categoriesof ontologyapplications:1) neutralauthoring,2) ontologyasspecification, 3) commonaccessto information, and4) ontology-basedsearch. In eachcategory, we identify specific ontologyapplicationscenarios.For each,we indicatetheir intendedpurpose,therole of theontology, thesupporting technologies, who theprincipalactorsareandwhat they do. We illuminatethesimilaritiesanddifferencesbetween scenarios. We draw on work from othercommunities,suchassoftwaredevelopersandstandardsorganizations.We usea relatively broaddefinition of ‘ontology’, to show that muchof the work beingdoneby thosecommunitiesmay be viewedaspracticalapplicationsof ontologies.Thecommonthreadis theneedfor sharingthemeaningof termsin a givendomain,which is a centralrole of ontologies.An additionalaim of this paperis to draw attentionto common goalsandsupportingtechnologiesof theserelatively distinctcommunitiesto facilitateclosercooperationandfaster progress .",
"title": ""
},
{
"docid": "498d27f4aaf9249f6f1d6a6ae5554d0e",
"text": "Association rules are ”if-then rules” with two measures which quantify the support and confidence of the rule for a given data set. Having their origin in market basked analysis, association rules are now one of the most popular tools in data mining. This popularity is to a large part due to the availability of efficient algorithms. The first and arguably most influential algorithm for efficient association rule discovery is Apriori. In the following we will review basic concepts of association rule discovery including support, confidence, the apriori property, constraints and parallel algorithms. The core consists of a review of the most important algorithms for association rule discovery. Some familiarity with concepts like predicates, probability, expectation and random variables is assumed.",
"title": ""
},
{
"docid": "9a1986c78681a8601d760dccf57f4302",
"text": "Perceptron training is widely applied in the natural language processing community for learning complex structured models. Like all structured prediction learning frameworks, the structured perceptron can be costly to train as training complexity is proportional to inference, which is frequently non-linear in example sequence length. In this paper we investigate distributed training strategies for the structured perceptron as a means to reduce training times when computing clusters are available. We look at two strategies and provide convergence bounds for a particular mode of distributed structured perceptron training based on iterative parameter mixing (or averaging). We present experiments on two structured prediction problems – namedentity recognition and dependency parsing – to highlight the efficiency of this method.",
"title": ""
},
{
"docid": "c51cb80a1a5afe25b16a5772ccee0e6b",
"text": "Face perception relies on computations carried out in face-selective cortical areas. These areas have been intensively investigated for two decades, and this work has been guided by an influential neural model suggested by Haxby and colleagues in 2000. Here, we review new findings about face-selective areas that suggest the need for modifications and additions to the Haxby model. We suggest a revised framework based on (a) evidence for multiple routes from early visual areas into the face-processing system, (b) information about the temporal characteristics of these areas, (c) indications that the fusiform face area contributes to the perception of changeable aspects of faces, (d) the greatly elevated responses to dynamic compared with static faces in dorsal face-selective brain areas, and (e) the identification of three new anterior face-selective areas. Together, these findings lead us to suggest that face perception depends on two separate pathways: a ventral stream that represents form information and a dorsal stream driven by motion and form information.",
"title": ""
},
{
"docid": "4ed00fa5cc0021360f726696470e24fc",
"text": "Why do some developing country governments accumula te large foreign debts while others do not? I hypothesize that variation in fore ign borrowing is a function of variation in the breadth of public participation in the polit ical process. Specifically, governments borrow less when political institutions enable broa d public participation in the political process and encourage the revelation of information about executive behavior. I test this hypothesis against the experience of seventy-eight developing countries between 1976 and 1998. The analysis suggests that governments in soc eties with broad public participation borrow less heavily than governments i ocieties with limited public participation. In short, democracies borrowed less heavily than autocracies. The analysis has implications for the likely consequences of the recent debt relief initiative.",
"title": ""
},
{
"docid": "050dd71858325edd4c1a42fc1a25de95",
"text": "This paper presents Disco, a prototype for supporting knowledge workers in exploring, reviewing and sorting collections of textual data. The goal is to facilitate, accelerate and improve the discovery of information. To this end, it combines Semantic Relatedness techniques with a review workflow developed in a tangible environment. Disco uses a semantic model that is leveraged on-line in the course of search sessions, and accessed through natural hand-gesture, in a simple and intuitive way.",
"title": ""
},
{
"docid": "187fcbf0a52de7dd7de30f8846b34e1e",
"text": "Goal-oriented dialogue systems typically rely on components specifically developed for a single task or domain. This limits such systems in two different ways: If there is an update in the task domain, the dialogue system usually needs to be updated or completely re-trained. It is also harder to extend such dialogue systems to different and multiple domains. The dialogue state tracker in conventional dialogue systems is one such component — it is usually designed to fit a welldefined application domain. For example, it is common for a state variable to be a categorical distribution over a manually-predefined set of entities (Henderson et al., 2013), resulting in an inflexible and hard-to-extend dialogue system. In this paper, we propose a new approach for dialogue state tracking that can generalize well over multiple domains without incorporating any domain-specific knowledge. Under this framework, discrete dialogue state variables are learned independently and the information of a predefined set of possible values for dialogue state variables is not required. Furthermore, it enables adding arbitrary dialogue context as features and allows for multiple values to be associated with a single state variable. These characteristics make it much easier to expand the dialogue state space. We evaluate our framework using the widely used dialogue state tracking challenge data set (DSTC2) and show that our framework yields competitive results with other state-of-the-art results despite incorporating little domain knowledge. We also show that this framework can benefit from widely available external resources such as pre-trained word embeddings.",
"title": ""
},
{
"docid": "3fb3715c0c80d2e871b5d7eed4ed5f9a",
"text": "23 24 25 26 27 28 29 30 31 Article history: Available online xxxx",
"title": ""
},
{
"docid": "0ad76c9251d0d7c1a8204eee819149db",
"text": "The design of cancer chemotherapy has become increasingly sophisticated, yet there is no cancer treatment that is 100% effective against disseminated cancer. Resistance to treatment with anticancer drugs results from a variety of factors including individual variations in patients and somatic cell genetic differences in tumors, even those from the same tissue of origin. Frequently resistance is intrinsic to the cancer, but as therapy becomes more and more effective, acquired resistance has also become common. The most common reason for acquisition of resistance to a broad range of anticancer drugs is expression of one or more energy-dependent transporters that detect and eject anticancer drugs from cells, but other mechanisms of resistance including insensitivity to drug-induced apoptosis and induction of drug-detoxifying mechanisms probably play an important role in acquired anticancer drug resistance. Studies on mechanisms of cancer drug resistance have yielded important information about how to circumvent this resistance to improve cancer chemotherapy and have implications for pharmacokinetics of many commonly used drugs.",
"title": ""
},
{
"docid": "836eb904c483cd157807302997dd1aac",
"text": "Recent improvements in both the performance and scalability of shared-nothing, transactional, in-memory NewSQL databases have reopened the research question of whether distributed metadata for hierarchical file systems can be managed using commodity databases. In this paper, we introduce HopsFS, a next generation distribution of the Hadoop Distributed File System (HDFS) that replaces HDFS’ single node in-memory metadata service, with a distributed metadata service built on a NewSQL database. By removing the metadata bottleneck, HopsFS enables an order of magnitude larger and higher throughput clusters compared to HDFS. Metadata capacity has been increased to at least 37 times HDFS’ capacity, and in experiments based on a workload trace from Spotify, we show that HopsFS supports 16 to 37 times the throughput of Apache HDFS. HopsFS also has lower latency for many concurrent clients, and no downtime during failover. Finally, as metadata is now stored in a commodity database, it can be safely extended and easily exported to external systems for online analysis and free-text search.",
"title": ""
},
{
"docid": "f12cbeb6a202ea8911a67abe3ffa6ccc",
"text": "In order to enhance the study of the kinematics of any robot arm, parameter design is directed according to certain necessities for the robot, and its forward and inverse kinematics are discussed. The DH convention Method is used to form the kinematical equation of the resultant structure. In addition, the Robotics equations are modeled in MATLAB to create a 3D visual simulation of the robot arm to show the result of the trajectory planning algorithms. The simulation has detected the movement of each joint of the robot arm, and tested the parameters, thus accomplishing the predetermined goal which is drawing a sine wave on a writing board.",
"title": ""
},
{
"docid": "0ec8f9610a7f02b311396a18ea55eaed",
"text": "Mental disorders are highly prevalent and cause considerable suffering and disease burden. To compound this public health problem, many individuals with psychiatric disorders remain untreated although effective treatments exist. We examine the extent of this treatment gap. We reviewed community-based psychiatric epidemiology studies that used standardized diagnostic instruments and included data on the percentage of individuals receiving care for schizophrenia and other non-affective psychotic disorders, major depression, dysthymia, bipolar disorder, generalized anxiety disorder (GAD), panic disorder, obsessive-compulsive disorder (OCD), and alcohol abuse or dependence. The median rates of untreated cases of these disorders were calculated across the studies. Examples of the estimation of the treatment gap for WHO regions are also presented. Thirty-seven studies had information on service utilization. The median treatment gap for schizophrenia, including other non-affective psychosis, was 32.2%. For other disorders the gap was: depression, 56.3%; dysthymia, 56.0%; bipolar disorder, 50.2%; panic disorder, 55.9%; GAD, 57.5%; and OCD, 57.3%. Alcohol abuse and dependence had the widest treatment gap at 78.1%. The treatment gap for mental disorders is universally large, though it varies across regions. It is likely that the gap reported here is an underestimate due to the unavailability of community-based data from developing countries where services are scarcer. To address this major public health challenge, WHO has adopted in 2002 a global action programme that has been endorsed by the Member States.",
"title": ""
}
] |
scidocsrr
|
9c33bd10e001f3ae096a07a1b535252e
|
Multiscale Rotated Bounding Box-Based Deep Learning Method for Detecting Ship Targets in Remote Sensing Images
|
[
{
"docid": "9c74b77e79217602bb21a36a5787ed59",
"text": "Ship detection on spaceborne images has attracted great interest in the applications of maritime security and traffic control. Optical images stand out from other remote sensing images in object detection due to their higher resolution and more visualized contents. However, most of the popular techniques for ship detection from optical spaceborne images have two shortcomings: 1) Compared with infrared and synthetic aperture radar images, their results are affected by weather conditions, like clouds and ocean waves, and 2) the higher resolution results in larger data volume, which makes processing more difficult. Most of the previous works mainly focus on solving the first problem by improving segmentation or classification with complicated algorithms. These methods face difficulty in efficiently balancing performance and complexity. In this paper, we propose a ship detection approach to solving the aforementioned two issues using wavelet coefficients extracted from JPEG2000 compressed domain combined with deep neural network (DNN) and extreme learning machine (ELM). Compressed domain is adopted for fast ship candidate extraction, DNN is exploited for high-level feature representation and classification, and ELM is used for efficient feature pooling and decision making. Extensive experiments demonstrate that, in comparison with the existing relevant state-of-the-art approaches, the proposed method requires less detection time and achieves higher detection accuracy.",
"title": ""
}
] |
[
{
"docid": "fec50e53536febc02b8fe832a97cf833",
"text": "Translational control plays a critical role in the regulation of gene expression in eukaryotes and affects many essential cellular processes, including proliferation, apoptosis and differentiation. Under most circumstances, translational control occurs at the initiation step at which the ribosome is recruited to the mRNA. The eukaryotic translation initiation factor 4E (eIF4E), as part of the eIF4F complex, interacts first with the mRNA and facilitates the recruitment of the 40S ribosomal subunit. The activity of eIF4E is regulated at many levels, most profoundly by two major signalling pathways: PI3K (phosphoinositide 3-kinase)/Akt (also known and Protein Kinase B, PKB)/mTOR (mechanistic/mammalian target of rapamycin) and Ras (rat sarcoma)/MAPK (mitogen-activated protein kinase)/Mnk (MAPK-interacting kinases). mTOR directly phosphorylates the 4E-BPs (eIF4E-binding proteins), which are inhibitors of eIF4E, to relieve translational suppression, whereas Mnk phosphorylates eIF4E to stimulate translation. Hyperactivation of these pathways occurs in the majority of cancers, which results in increased eIF4E activity. Thus, translational control via eIF4E acts as a convergence point for hyperactive signalling pathways to promote tumorigenesis. Consequently, recent works have aimed to target these pathways and ultimately the translational machinery for cancer therapy.",
"title": ""
},
{
"docid": "36a538b833de4415d12cd3aa5103cf9b",
"text": "Big data is an opportunity in the emergence of novel business applications such as “Big Data Analytics” (BDA). However, these data with non-traditional volumes create a real problem given the capacity constraints of traditional systems. The aim of this paper is to deal with the impact of big data in a decision-support environment and more particularly in the data integration phase. In this context, we developed a platform, called P-ETL (Parallel-ETL) for extracting (E), transforming (T) and loading (L) very large data in a data warehouse (DW). To cope with very large data, ETL processes under our P-ETL platform run on a cluster of computers in parallel way with MapReduce paradigm. The conducted experiment shows mainly that increasing tasks dealing with large data speeds-up the ETL process.",
"title": ""
},
{
"docid": "6eaa0d1b6a7e55eca070381954638292",
"text": "Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabeled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabeled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularization during training to shape the distribution of the encoded data in the latent space. We suggest denoising adversarial autoencoders (AAEs), which combine denoising and regularization, shaping the distribution of latent space using adversarial training. We introduce a novel analysis that shows how denoising may be incorporated into the training and sampling of AAEs. Experiments are performed to assess the contributions that denoising makes to the learning of representations for classification and sample synthesis. Our results suggest that autoencoders trained using a denoising criterion achieve higher classification performance and can synthesize samples that are more consistent with the input data than those trained without a corruption process.",
"title": ""
},
{
"docid": "a6d4b6a0cd71a8e64c9a2429b95cd7da",
"text": "Creativity research has traditionally focused on human creativity, and even more specifically, on the psychology of individual creative people. In contrast, computational creativity research involves the development and evaluation of creativity in a computational system. As we study the effect of scaling up from the creativity of a computational system and individual people to large numbers of diverse computational agents and people, we have a new perspective: creativity can ascribed to a computational agent, an individual person, collectives of people and agents and/or their interaction. By asking “Who is being creative?” this paper examines the source of creativity in computational and collective creativity. A framework based on ideation and interaction provides a way of characterizing existing research in computational and collective creativity and identifying directions for future research. Human and Computational Creativity Creativity is a topic of philosophical and scientific study considering the scenarios and human characteristics that facilitate creativity as well as the properties of computational systems that exhibit creative behavior. “The four Ps of creativity”, as introduced in Rhodes (1987) and more recently summarized by Runco (2011), decompose the complexity of creativity into separate but related influences: • Person: characteristics of the individual, • Product: an outcome focus on ideas, • Press: the environmental and contextual factors, • Process: cognitive process and thinking techniques. While the four Ps are presented in the context of the psychology of human creativity, they can be modified for computational creativity if process includes a computational process. The study of human creativity has a focus on the characteristics and cognitive behavior of creative people and the environments in which creativity is facilitated. The study of computational creativity, while inspired by concepts of human creativity, is often expressed in the formal language of search spaces and algorithms. Why do we ask who is being creative? Firstly, there is an increasing interest in understanding computational systems that can formalize or model creative processes and therefore exhibit creative behaviors or acts. Yet there are still skeptics that claim computers aren’t creative, the computer is just following instructions. Second and in contrast, there is increasing interest in computational systems that encourage and enhance human creativity that make no claims about whether the computer is being or could be creative. Finally, as we develop more capable socially intelligent computational systems and systems that enable collective intelligence among humans and computers, the boundary between human creativity and computer creativity blurs. As the boundary blurs, we need to develop ways of recognizing creativity that makes no assumptions about whether the creative entity is a person, a computer, a potentially large group of people, or the collective intelligence of human and computational entities. This paper presents a framework that characterizes the source of creativity from two perspectives, ideation and interaction, as a guide to current and future research in computational and collective creativity. Creativity: Process and Product Understanding the nature of creativity as process and product is critical in computational creativity if we want to avoid any bias that only humans are creative and computers are not. 
While process and product in creativity are tightly coupled in practice, a distinction between the two provides two ways of recognizing computational creativity: by describing the characteristics of a creative process and, separately, the characteristics of a creative product. Studying and describing the processes that generate creative products focuses on the cognitive behavior of a creative person or the properties of a computational system, while describing ways of recognizing a creative product focuses on the characteristics of the result of a creative process. When describing creative processes there is an assumption that there is a space of possibilities. Boden (2003) refers to this as conceptual spaces and describes these spaces as structured styles of thought. In computational systems such a space is called a state space. How such spaces are changed, or the relationship between the set of known products, the space of possibilities, and the potentially creative product, is the basis for describing processes that can generate potentially creative artifacts. There are many accounts of the processes for generating creative products. Two sources are described here: Boden (2003) from the philosophical and artificial intelligence perspective and Gero (2000) from the design science perspective. Boden (2003) describes three ways in which creative products can be generated: combination, exploration, and transformation; each one describes the way in which the conceptual space of known products provides a basis for generating a creative product and how the conceptual space changes as a result of the creative artifact. Combination brings together two or more concepts in ways that have not occurred in existing products. Exploration finds concepts in parts of the space that have not been considered in existing products. Transformation modifies concepts in the space to generate products that change the boundaries of the space. Gero (2000) describes computational processes for creative design as combination, transformation, analogy, emergence, and first principles. Combination and transformation are similar to Boden's processes. Analogy transfers concepts from a source product that may be in a different conceptual space to a target product to generate a novel product in the target's space. Emergence is a process that finds new underlying structures in a concept that give rise to a new product, effectively a re-representation process. First principles as a process generates new products without relying on concepts as defined in existing products. While these processes provide insight into the nature of creativity and provide a basis for computational creativity, they have little to say about how we recognize a creative product. As we move towards computational systems that enhance or contribute to human creativity, the articulation of process models for generating creative artifacts does not provide an evaluation of the product. Computational systems that generate creative products need evaluation criteria that are independent of the process by which the product was generated. There are also numerous approaches to defining characteristics of creative products as the basis for evaluating or assessing creativity. Boden (2003) claims that novelty and value are the essential criteria and that other aspects, such as surprise, are kinds of novelty or value.
Wiggins (2006) often uses value to indicate all valuable aspects of a creative products, yet provides definitions for novelty and value as different features that are relevant to creativity. Oman and Tumer (2009) combine novelty and quality to evaluate individual ideas in engineering design as a relative measure of creativity. Shah, Smith, and Vargas-Hernandez (2003) associate creative design with ideation and develop metrics for novelty, variety, quality, and quantity of ideas. Wiggins (2006) argues that surprise is a property of the receiver of a creative artifact, that is, it is an emotional response. Cropley and Cropley (2005) propose four broad properties of products that can be used to describe the level and kind of creativity they possess: effectiveness, novelty, elegance, genesis. Besemer and O'Quin (1987) describe a Creative Product Semantic Scale which defines the creativity of products in three dimensions: novelty (the product is original, surprising and germinal), resolution (the product is valuable, logical, useful, and understandable), and elaboration and synthesis (the product is organic, elegant, complex, and well-crafted). Horn and Salvendy (2006) after doing an analysis of many properties of creative products, report on consumer perception of creativity in three critical perceptions: affect (our emotional response to the product), importance, and novelty. Goldenberg and Mazursky (2002) report on research that has found the observable characteristics of creativity in products to include \"original, of value, novel, interesting, elegant, unique, surprising.\" Amabile (1982) says it most clearly when she summarizes the social psychology literature on the assessment of creativity: While most definitions of creativity refer to novelty, appropriateness, and surprise, current creativity tests or assessment techniques are not closely linked to these criteria. She further argues that “There is no clear, explicit statement of the criteria that conceptually underlie the assessment procedures.” In response to an inability to establish and define criteria for evaluating creativity that is acceptable to all domains, Amabile (1982, 1996) introduced a Consensual Assessment Technique (CAT) in which creativity is assessed by a group of judges that are knowledgeable of the field. Since then, several scales for assisting human evaluators have been developed to guide human evaluators, for example, Besemer and O'Quin's (1999) Creative Product Semantic Scale, Reis and Renzulli's (1991) Student Product Assessment Form, and Cropley et al’s (2011) Creative Solution Diagnosis Scale. Maher (2010) presents an AI approach to evaluating creativity of a product by measuring novelty, value and surprise that provides a formal model for evaluating creative products. Novelty is a measure of how different the product is from existing products and is measured as a distance from clusters of other products in a conceptual space, characterizing the artifact as similar but different. Value is a measure of how the creative product co",
"title": ""
},
{
"docid": "179c5bc5044d85c2597d41b1bd5658b3",
"text": "Embedding models typically associate each word with a single real-valued vector, representing its different properties. Evaluation methods, therefore, need to analyze the accuracy and completeness of these properties in embeddings. This requires fine-grained analysis of embedding subspaces. Multi-label classification is an appropriate way to do so. We propose a new evaluation method for word embeddings based on multi-label classification given a word embedding. The task we use is finegrained name typing: given a large corpus, find all types that a name can refer to based on the name embedding. Given the scale of entities in knowledge bases, we can build datasets for this task that are complementary to the current embedding evaluation datasets in: they are very large, contain fine-grained classes, and allow the direct evaluation of embeddings without confounding factors like sentence context.",
"title": ""
},
{
"docid": "3611d022aee93b9cbcc961bb7cbdd3ff",
"text": "Due to the popularity of Deep Neural Network (DNN) models, we have witnessed extreme-scale DNN models with the continued increase of the scale in terms of depth and width. However, the extremely high memory requirements for them make it difficult to run the training processes on single many-core architectures such as a Graphic Processing Unit (GPU), which compels researchers to use model parallelism over multiple GPUs to make it work. However, model parallelism always brings very heavy additional overhead. Therefore, running an extreme-scale model in a single GPU is urgently required. There still exist several challenges to reduce the memory footprint for extreme-scale deep learning. To address this tough problem, we first identify the memory usage characteristics for deep and wide convolutional networks, and demonstrate the opportunities for memory reuse at both the intra-layer and inter-layer levels. We then present Layrub, a runtime data placement strategy that orchestrates the execution of the training process. It achieves layer-centric reuse to reduce memory consumption for extreme-scale deep learning that could not previously be run on a single GPU. Experiments show that, compared to the original Caffe, Layrub can cut down the memory usage rate by an average of 58.2% and by up to 98.9%, at the moderate cost of 24.1% higher training execution time on average. Results also show that Layrub outperforms some popular deep learning systems such as GeePS, vDNN, MXNet, and Tensorflow. More importantly, Layrub can tackle extreme-scale deep learning tasks. For example, it makes an extra-deep ResNet with 1,517 layers that can be trained successfully in one GPU with 12GB memory, while other existing deep learning systems cannot.",
"title": ""
},
{
"docid": "49c9ccdf36b60f1a8778919fe8ad3ad2",
"text": "Formal evaluations conducted by NIST in 1996 demonstrated that systems that used parallel banks of tokenizer-dependent language models produced the best language identification performance. Since that time, other approaches to language identification have been developed that match or surpass the performance of phone-based systems. This paper describes and evaluates three techniques that have been applied to the language identification problem: phone recognition, Gaussian mixture modeling, and support vector machine classification. A recognizer that fuses the scores of three systems that employ these techniques produces a 2.7% equal error rate (EER) on the 1996 NIST evaluation set and a 2.8% EER on the NIST 2003 primary condition evaluation set. An approach to dealing with the problem of out-of-set data is also discussed.",
"title": ""
},
{
"docid": "867a6923a650bdb1d1ec4f04cda37713",
"text": "We examine Gärdenfors’ theory of conceptual spaces, a geometrical form of knowledge representation (Conceptual spaces: The geometry of thought, MIT Press, Cambridge, 2000), in the context of the general Creative Systems Framework introduced by Wiggins (J Knowl Based Syst 19(7):449–458, 2006a; New Generation Comput 24(3):209–222, 2006b). Gärdenfors’ theory offers a way of bridging the traditional divide between symbolic and sub-symbolic representations, as well as the gap between representational formalism and meaning as perceived by human minds. We discuss how both these qualities may be advantageous from the point of view of artificial creative systems. We take music as our example domain, and discuss how a range of musical qualities may be instantiated as conceptual spaces, and present a detailed conceptual space formalisation of musical metre.",
"title": ""
},
{
"docid": "c8ca57db545f2d1f70f3640651bb3e79",
"text": "sprightly style and is interesting from cover to cover. The comments, critiques, and summaries that accompany the chapters are very helpful in crystalizing the ideas and answering questions that may arise, particularly to the self-learner. The transparency in the presentation of the material in the book equips the reader to proceed quickly to a wealth of problems included at the end of each chapter. These problems ranging from elementary to research-level are very valuable in that a solid working knowledge of the invariant imbedding techniques is acquired as well as good insight in attacking problems in various applied areas. Furthermore, a useful selection of references is given at the end of each chapter. This book may not appeal to those mathematicians who are interested primarily in the sophistication of mathematical theory, because the authors have deliberately avoided all pseudo-sophistication in attaining transparency of exposition. Precisely for the same reason the majority of the intended readers who are applications-oriented and are eager to use the techniques quickly in their own fields will welcome and appreciate the efforts put into writing this book. From a purely mathematical point of view, some of the invariant imbedding results may be considered to be generalizations of the classical theory of first-order partial differential equations, and a part of the analysis of invariant imbedding is still at a somewhat heuristic stage despite successes in many computational applications. However, those who are concerned with mathematical rigor will find opportunities to explore the foundations of the invariant imbedding method. In conclusion, let me quote the following: \"What is the best method to obtain the solution to a problem'? The answer is, any way that works.\" (Richard P. Feyman, Engineering and Science, March 1965, Vol. XXVIII, no. 6, p. 9.) In this well-written book, Bellman and Wing have indeed accomplished the task of introducing the simplicity of the invariant imbedding method to tackle various problems of interest to engineers, physicists, applied mathematicians, and numerical analysts.",
"title": ""
},
{
"docid": "7321e113293a7198bf88a1744a7ca6c9",
"text": "It is widely claimed that research to discover and develop new pharmaceuticals entails high costs and high risks. High research and development (R&D) costs influence many decisions and policy discussions about how to reduce global health disparities, how much companies can afford to discount prices for lowerand middle-income countries, and how to design innovative incentives to advance research on diseases of the poor. High estimated costs also affect strategies for getting new medicines to the world’s poor, such as the advanced market commitment, which built high estimates into its inflated size and prices. This article takes apart the most detailed and authoritative study of R&D costs in order to show how high estimates have been constructed by industry-supported economists, and to show how much lower actual costs may be. Besides serving as an object lesson in the construction of ‘facts’, this analysis provides reason to believe that R&D costs need not be such an insuperable obstacle to the development of better medicines. The deeper problem is that current incentives reward companies to develop mainly new medicines of little advantage and compete for market share at high prices, rather than to develop clinically superior medicines with public funding so that prices could be much lower and risks to companies lower as well. BioSocieties advance online publication, 7 February 2011; doi:10.1057/biosoc.2010.40",
"title": ""
},
{
"docid": "b39a47adecae9b552a32f890569a0d1b",
"text": "Since they are potentially more efficient and simpler in construction, as well as being easier to integrate, electromechanical actuation systems are being considered as an alternative to hydraulic systems for controlling clutches and gearshifts in vehicle transmissions. A high-force, direct-drive linear electromechanical actuator has been developed which acts directly on the shift rails of either an automated manual transmission (AMT) or a dual clutch transmission (DCT) to facilitate gear selection and provide shift-by-wire functionality. It offers a number of advantages over electromechanical systems based on electric motors and gearboxes in that it reduces mechanical hysteresis, backlash and compliance, has fewer components, is more robust, and exhibits a better dynamic response",
"title": ""
},
{
"docid": "2ab7cfe4978d09fde9f0bbef9850f3cf",
"text": "We propose novel tensor decomposition methods that advocate both properties of sparsity and robustness to outliers. The sparsity enables us to extract some essential features from a big data that are easily interpretable. The robustness ensures the resistance to outliers that appear commonly in high-dimensional data. We first propose a method that generalizes the ridge regression in M-estimation framework for tensor decompositions. The other approach we propose combines the least absolute deviation (LAD) regression and the least absolute shrinkage operator (LASSO) for the CANDECOMP/PARAFAC (CP) tensor decompositions. We also formulate various robust tensor decomposition methods using different loss functions. The simulation study shows that our robust-sparse methods outperform other general tensor decomposition methods in the presence of outliers.",
"title": ""
},
{
"docid": "8eb84b8d29c8f9b71c92696508c9c580",
"text": "We introduce a novel in-ear sensor which satisfies key design requirements for wearable electroencephalography (EEG)-it is discreet, unobtrusive, and capable of capturing high-quality brain activity from the ear canal. Unlike our initial designs, which utilize custom earpieces and require a costly and time-consuming manufacturing process, we here introduce the generic earpieces to make ear-EEG suitable for immediate and widespread use. Our approach represents a departure from silicone earmoulds to provide a sensor based on a viscoelastic substrate and conductive cloth electrodes, both of which are shown to possess a number of desirable mechanical and electrical properties. Owing to its viscoelastic nature, such an earpiece exhibits good conformance to the shape of the ear canal, thus providing stable electrode-skin interface, while cloth electrodes require only saline solution to establish low impedance contact. The analysis highlights the distinguishing advantages compared with the current state-of-the-art in ear-EEG. We demonstrate that such a device can be readily used for the measurement of various EEG responses.",
"title": ""
},
{
"docid": "09ada66e157c6a99c6317a7cb068367f",
"text": "Experience design is a relatively new approach to product design. While there are several possible starting points in designing for positive experiences, we start with experience goals that state a profound source for a meaningful experience. In this paper, we investigate three design cases that used experience goals as the starting point for both incremental and radical design, and analyse them from the perspective of their potential for design space expansion. Our work addresses the recent call for design research directed toward new interpretations of what could be meaningful to people, which is seen as the source for creating new meanings for products, and thereby, possibly leading to radical innovations. Based on this idea, we think about the design space as a set of possible concepts derived from deep meanings that experience goals help to communicate. According to our initial results from the small-scale touchpoint design cases, the type of experience goals we use seem to have the potential to generate not only incremental but also radical design ideas.",
"title": ""
},
{
"docid": "2733a4bc77e7fc22f426e69ebbf6d697",
"text": "A microwave nano-probing station incorporating home-made MEMS coplanar waveguide (CPW) probes was built inside a scanning electron microscope. The instrumentation proposed is able to measure accurately the guided complex reflection of 1D devices embedded in dedicated CPW micro-structures. As a demonstration, RF impedance characterization of an Indium Arsenide nanowire is exemplary shown up to 6 GHz. Next, optimization of the MEMS probe assembly is experimentally verified by establishing the measurement uncertainty up to 18 GHz.",
"title": ""
},
{
"docid": "36e42f2e4fd2f848eaf82440c2bcbf62",
"text": "Indoor positioning systems (IPSs) locate objects in closed structures such as office buildings, hospitals, stores, factories, and warehouses, where Global Positioning System devices generally do not work. Most available systems apply wireless concepts, optical tracking, and/or ultrasound. This paper presents a standalone IPS using radio frequency identification (RFID) technology. The concept is based on an object carrying an RFID reader module, which reads low-cost passive tags installed next to the object path. A positioning system using a Kalman filter is proposed. The inputs of the proposed algorithm are the measurements of the backscattered signal power propagated from nearby RFID tags and a tag-path position database. The proposed algorithm first estimates the location of the reader, neglecting tag-reader angle-path loss. Based on the location estimate, an iterative procedure is implemented, targeting the estimation of the tag-reader angle-path loss, where the latter is iteratively compensated from the received signal strength information measurement. Experimental results are presented, illustrating the high performance of the proposed positioning system.",
"title": ""
},
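The passage above outlines an RSSI-driven Kalman-filter positioning scheme for an RFID reader moving past passive tags. The Python sketch below is only an illustration of that general idea, not the paper's algorithm: it assumes a simple log-distance path-loss model, 1-D motion along the tag path, and a constant-velocity state model, and all parameter values (p0_dbm, path_loss_exp, noise variances) are placeholders.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, p0_dbm=-40.0, path_loss_exp=2.0):
    """Invert a log-distance path-loss model: RSSI = P0 - 10*n*log10(d)."""
    return 10 ** ((p0_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def kalman_position_1d(tag_positions, rssi_readings, dt=0.1,
                       process_var=0.5, meas_var=1.0):
    """Track a reader moving along a 1-D path from per-step RSSI readings.

    tag_positions: known 1-D coordinate of the tag heard at each step.
    rssi_readings: backscattered signal power (dBm) measured at each step.
    Returns the filtered reader positions.
    """
    x = np.array([0.0, 0.0])                 # state: [position, velocity]
    P = np.eye(2) * 10.0
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion model
    Q = process_var * np.array([[dt**4 / 4, dt**3 / 2],
                                [dt**3 / 2, dt**2]])
    H = np.array([[1.0, 0.0]])               # we observe position only
    R = np.array([[meas_var]])
    track = []
    for tag_x, rssi in zip(tag_positions, rssi_readings):
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Crude 1-D measurement: the reader is taken to be `range` metres
        # along the path from the tag it just read (geometry handling omitted).
        z = np.array([tag_x + rssi_to_distance(rssi)])
        # Update.
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)
```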
{
"docid": "ee510bbe7c7be6e0fb86a32d9f527be1",
"text": "Internet communications with paths that include satellite link face some peculiar challenges, due to the presence of a long propagation wireless channel. In this paper, we propose a performance enhancing proxy (PEP) solution, called PEPsal, which is, to the best of the authors' knowledge, the first open source TCP splitting solution for the GNU/Linux operating systems. PEPsal improves the performance of a TCP connection over a satellite channel making use of the TCP Hybla, a TCP enhancement for satellite networks developed by the authors. The objective of the paper is to present and evaluate the PEPsal architecture, by comparing it with end to end TCP variants (NewReno, SACK, Hybla), considering both performance and reliability issues. Performance is evaluated by making use of a testbed set up at the University of Bologna, to study advanced transport protocols and architectures for Internet satellite communications",
"title": ""
},
{
"docid": "8d31d43bf080e7b57c09917c9b7e15aa",
"text": "We provide 89 challenging simulation environments that range in difficulty. The difficulty of solving a task is linked not only to the number of dimensions in the action space but also to the size and shape of the distribution of configurations the agent experiences. Therefore, we are releasing a number of simulation environments that include randomly generated terrain. The library also provides simple mechanisms to create new environments with different agent morphologies and the option to modify the distribution of generated terrain. We believe using these and other more complex simulations will help push the field closer to creating human-level intelligence.",
"title": ""
},
{
"docid": "fc164dc2d55cec2867a99436d37962a1",
"text": "We address the text-to-text generation problem of sentence-level paraphrasing — a phenomenon distinct from and more difficult than wordor phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.",
"title": ""
},
{
"docid": "be9b40cc2e2340249584f7324e26c4d3",
"text": "This paper provides a unified account of two schools of thinking in information retrieval modelling: the generative retrieval focusing on predicting relevant documents given a query, and the discriminative retrieval focusing on predicting relevancy given a query-document pair. We propose a game theoretical minimax game to iteratively optimise both models. On one hand, the discriminative model, aiming to mine signals from labelled and unlabelled data, provides guidance to train the generative model towards fitting the underlying relevance distribution over documents given the query. On the other hand, the generative model, acting as an attacker to the current discriminative model, generates difficult examples for the discriminative model in an adversarial way by minimising its discrimination objective. With the competition between these two models, we show that the unified framework takes advantage of both schools of thinking: (i) the generative model learns to fit the relevance distribution over documents via the signals from the discriminative model, and (ii) the discriminative model is able to exploit the unlabelled data selected by the generative model to achieve a better estimation for document ranking. Our experimental results have demonstrated significant performance gains as much as 23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of applications including web search, item recommendation, and question answering.",
"title": ""
}
] |
scidocsrr
|
c86737c8e111cdc5ba50f642c1cbebfe
|
Internet Voting Using Zcash
|
[
{
"docid": "27329c67322a5ed2c4f2a7dd6ceb79a8",
"text": "In the world’s largest-ever deployment of online voting, the iVote Internet voting system was trusted for the return of 280,000 ballots in the 2015 state election in New South Wales, Australia. During the election, we performed an independent security analysis of parts of the live iVote system and uncovered severe vulnerabilities that could be leveraged to manipulate votes, violate ballot privacy, and subvert the verification mechanism. These vulnerabilities do not seem to have been detected by the election authorities before we disclosed them, despite a preelection security review and despite the system having run in a live state election for five days. One vulnerability, the result of including analytics software from an insecure external server, exposed some votes to complete compromise of privacy and integrity. At least one parliamentary seat was decided by a margin much smaller than the number of votes taken while the system was vulnerable. We also found fundamental protocol flaws, including vote verification that was itself susceptible to manipulation. This incident underscores the difficulty of conducting secure elections online and carries lessons for voters, election officials, and the e-voting research community.",
"title": ""
}
] |
[
{
"docid": "add26519d60ec2a972ad550cd79129d6",
"text": "The hybrid runtime (HRT) model offers a plausible path towards high performance and efficiency. By integrating the OS kernel, parallel runtime, and application, an HRT allows the runtime developer to leverage the full privileged feature set of the hardware and specialize OS services to the runtime's needs. However, conforming to the HRT model currently requires a complete port of the runtime and application to the kernel level, for example to our Nautilus kernel framework, and this requires knowledge of kernel internals. In response, we developed Multiverse, a system that bridges the gap between a built-from-scratch HRT and a legacy runtime system. Multiverse allows existing, unmodified applications and runtimes to be brought into the HRT model without any porting effort whatsoever. Developers simply recompile their package with our compiler toolchain, and Multiverse automatically splits the execution of the application between the domains of a legacy OS and an HRT environment. To the user, the package appears to run as usual on Linux, but the bulk of it now runs as a kernel. The developer can then incrementally extend the runtime and application to take advantage of the HRT model. We describe the design and implementation of Multiverse, and illustrate its capabilities using the Racket runtime system.",
"title": ""
},
{
"docid": "8b00d5d458e251ef0f033d00ff03c838",
"text": "Daily behavioral rhythms in mammals are governed by the central circadian clock, located in the suprachiasmatic nucleus (SCN). The behavioral rhythms persist even in constant darkness, with a stable activity time due to coupling between two oscillators that determine the morning and evening activities. Accumulating evidence supports a prerequisite role for Ca(2+) in the robust oscillation of the SCN, yet the underlying molecular mechanism remains elusive. Here, we show that Ca(2+)/calmodulin-dependent protein kinase II (CaMKII) activity is essential for not only the cellular oscillation but also synchronization among oscillators in the SCN. A kinase-dead mutation in mouse CaMKIIα weakened the behavioral rhythmicity and elicited decoupling between the morning and evening activity rhythms, sometimes causing arrhythmicity. In the mutant SCN, the right and left nuclei showed uncoupled oscillations. Cellular and biochemical analyses revealed that Ca(2+)-calmodulin-CaMKII signaling contributes to activation of E-box-dependent gene expression through promoting dimerization of circadian locomotor output cycles kaput (CLOCK) and brain and muscle Arnt-like protein 1 (BMAL1). These results demonstrate a dual role of CaMKII as a component of cell-autonomous clockwork and as a synchronizer integrating circadian behavioral activities.",
"title": ""
},
{
"docid": "d5d0e1f1c509c208c285aead6a7c455b",
"text": "Common recurrent neural architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence, enable highly parallelized implementation, and comes with careful initialization to facilitate training of deep models. We demonstrate the effectiveness of SRU on multiple NLP tasks. SRU achieves 5–9x speed-up over cuDNN-optimized LSTM on classification and question answering datasets, and delivers stronger results than LSTM and convolutional models. We also obtain an average of 0.7 BLEU improvement over the Transformer model (Vaswani et al., 2017) on translation by incorporating SRU into the architecture.1",
"title": ""
},
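The SRU described above replaces the heavy matrix-product recurrence of an LSTM with gates computed from the input alone plus a light element-wise state update, which is what makes it easy to parallelize. The NumPy sketch below illustrates one common formulation of the SRU cell (forget gate, cell state, highway-style reset gate); the dimensions, initialization, and exact gate equations are illustrative assumptions rather than a reproduction of the released implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sru_forward(X, Wx, Wf, bf, Wr, br):
    """Minimal Simple Recurrent Unit (SRU) forward pass for one sequence.

    X:  (T, d) input sequence.
    Wx, Wf, Wr: (d, d) projection matrices; bf, br: (d,) biases.
    The heavy matrix products depend only on X, so they can be computed
    for all time steps at once; the per-step recurrence is element-wise.
    """
    T, d = X.shape
    Xt = X @ Wx                 # candidate inputs, all steps in parallel
    F = sigmoid(X @ Wf + bf)    # forget gates
    R = sigmoid(X @ Wr + br)    # reset / highway gates
    c = np.zeros(d)
    H = np.zeros((T, d))
    for t in range(T):
        c = F[t] * c + (1.0 - F[t]) * Xt[t]             # light element-wise recurrence
        H[t] = R[t] * np.tanh(c) + (1.0 - R[t]) * X[t]  # highway connection to the input
    return H
```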
{
"docid": "4829d8c0dd21f84c3afbe6e1249d6248",
"text": "We present an action recognition and detection system from temporally untrimmed videos by combining motion and appearance features. Motion and appearance are two kinds of complementary cues for human action understanding from video. For motion features, we adopt the Fisher vector representation with improved dense trajectories due to its rich descriptive capacity. For appearance feature, we choose the deep convolutional neural network activations due to its recent success in image based tasks. With this fused feature of iDT and CNN, we train a SVM classifier for each action class in the one-vs-all scheme. We report both the recognition and detection results of our system on Thumos 14 Challenge. From the results, we see that our method rank 4 in the action recognition task and 2 in the action detection task.",
"title": ""
},
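The recognition pipeline above boils down to early fusion of two per-video feature vectors followed by one-vs-all linear SVMs. A minimal scikit-learn sketch of that scheme follows; the feature dimensions and the C value are arbitrary stand-ins, and a real system would plug in Fisher-vector iDT encodings and pooled CNN activations.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

def train_fused_action_classifier(idt_features, cnn_features, labels):
    """Fuse motion and appearance features and train one-vs-all linear SVMs.

    idt_features: (N, d1) array, e.g. Fisher-vector encodings of dense trajectories.
    cnn_features: (N, d2) array, e.g. pooled CNN activations per video.
    labels:       (N,) integer action classes.
    """
    X = np.hstack([idt_features, cnn_features])   # simple early fusion
    clf = make_pipeline(
        StandardScaler(),
        OneVsRestClassifier(LinearSVC(C=1.0)),    # one SVM per action class
    )
    clf.fit(X, labels)
    return clf

# Usage with random stand-in features:
# clf = train_fused_action_classifier(np.random.randn(100, 256),
#                                      np.random.randn(100, 128),
#                                      np.random.randint(0, 5, 100))
# predictions = clf.predict(np.random.randn(10, 384))
```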
{
"docid": "55bb962b4b3ce14f8d50983835bf3f73",
"text": "This is a quantitative study on the performance of 3G mobile data offloading through WiFi networks. We recruited about 100 iPhone users from a metropolitan area and collected statistics on their WiFi connectivity during about a two and half week period in February 2010. We find that a user is in WiFi coverage for 70% of the time on average and the distributions of WiFi connection and disconnection times have a strong heavy-tail tendency with means around 2 hours and 40 minutes, respectively. Using the acquired traces, we run trace-driven simulation to measure offloading efficiency under diverse conditions e.g. traffic types, deadlines and WiFi deployment scenarios. The results indicate that if users can tolerate a two hour delay in data transfer (e.g, video and image up-loads), the network can offload 70% of the total 3G data traffic on average. We also develop a theoretical framework that permits an analytical study of the average performance of offloading. This tool is useful for network providers to obtain a rough estimate on the average performance of offloading for a given inputWiFi deployment condition.",
"title": ""
},
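A trace-driven offloading simulation of the kind described above can be approximated in a few lines of Python. The sketch below is a rough stand-in, not the study's simulator: it replaces the measured heavy-tailed traces with exponential on/off periods whose means match the reported averages (about 2 hours connected, 40 minutes disconnected), and counts the fraction of data items that can wait for WiFi within a deadline.

```python
import random

def simulate_offload_fraction(total_hours=24 * 7, deadline_h=2.0,
                              mean_on_h=2.0, mean_off_h=2.0 / 3.0,
                              arrival_rate_per_h=4.0, seed=0):
    """Rough estimate of the fraction of data offloadable to WiFi.

    WiFi availability alternates between exponential on/off periods.
    Each data item generated while disconnected may wait up to `deadline_h`
    for WiFi to return; otherwise it is sent over 3G.
    """
    rng = random.Random(seed)
    # Build an alternating on/off availability timeline.
    t, on, intervals = 0.0, True, []
    while t < total_hours:
        dur = rng.expovariate(1.0 / (mean_on_h if on else mean_off_h))
        intervals.append((t, min(t + dur, total_hours), on))
        t += dur
        on = not on
    # Generate data items uniformly in time and test whether WiFi is
    # available before each item's deadline expires.
    n_items = int(arrival_rate_per_h * total_hours)
    offloaded = 0
    for _ in range(n_items):
        birth = rng.uniform(0.0, total_hours)
        for start, end, available in intervals:
            if end <= birth:
                continue                     # interval over before the item arrives
            if start > birth + deadline_h:
                break                        # WiFi returns too late: send over 3G
            if available:
                offloaded += 1
                break
    return offloaded / n_items

# print(simulate_offload_fraction(deadline_h=2.0))
```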
{
"docid": "47d278d37dfd3ab6c0b64dd94eb2de6c",
"text": "We present a novel approach for multi-object tracking which considers object detection and spacetime trajectory estimation as a coupled optimization problem. It is formulated in a hypothesis selection framework and builds upon a state-of-the-art pedestrian detector. At each time instant, it searches for the globally optimal set of spacetime trajectories which provides the best explanation for the current image and for all evidence collected so far, while satisfying the constraints that no two objects may occupy the same physical space, nor explain the same image pixels at any point in time. Successful trajectory hypotheses are fed back to guide object detection in future frames. The optimization procedure is kept efficient through incremental computation and conservative hypothesis pruning. The resulting approach can initialize automatically and track a large and varying number of persons over long periods and through complex scenes with clutter, occlusions, and large-scale background changes. Also, the global optimization framework allows our system to recover from mismatches and temporarily lost tracks. We demonstrate the feasibility of the proposed approach on several challenging video sequences.",
"title": ""
},
{
"docid": "5245cdc023c612de89f36d1573d208fe",
"text": "Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of the world. Traditional accounts of induction emphasize either the power of statistical learning, or the importance of strong constraints from structured domain knowledge, intuitive theories or schemas. We argue that both components are necessary to explain the nature, use and acquisition of human knowledge, and we introduce a theory-based Bayesian framework for modeling inductive learning and reasoning as statistical inferences over structured knowledge representations.",
"title": ""
},
{
"docid": "60697a4e8dd7d13147482a0992ee1862",
"text": "Static analysis of JavaScript has proven useful for a variety of purposes, including optimization, error checking, security auditing, program refactoring, and more. We propose a technique called type refinement that can improve the precision of such static analyses for JavaScript without any discernible performance impact. Refinement is a known technique that uses the conditions in branch guards to refine the analysis information propagated along each branch path. The key insight of this paper is to recognize that JavaScript semantics include many implicit conditional checks on types, and that performing type refinement on these implicit checks provides significant benefit for analysis precision.\n To demonstrate the effectiveness of type refinement, we implement a static analysis tool for reporting potential type-errors in JavaScript programs. We provide an extensive empirical evaluation of type refinement using a benchmark suite containing a variety of JavaScript application domains, ranging from the standard performance benchmark suites (Sunspider and Octane), to open-source JavaScript applications, to machine-generated JavaScript via Emscripten. We show that type refinement can significantly improve analysis precision by up to 86% without affecting the performance of the analysis.",
"title": ""
},
{
"docid": "91dcedc72a6f5a1e6df2b66203e9f194",
"text": "Collecting 3D object data sets involves a large amount of manual work and is time consuming. Getting complete models of objects either requires a 3D scanner that covers all the surfaces of an object or one needs to rotate it to completely observe it. We present a system that incrementally builds a database of objects as a mobile agent traverses a scene. Our approach requires no prior knowledge of the shapes present in the scene. Object-like segments are extracted from a global segmentation map, which is built online using the input of segmented RGB-D images. These segments are stored in a database, matched among each other, and merged with other previously observed instances. This allows us to create and improve object models on the fly and to use these merged models to reconstruct also unobserved parts of the scene. The database contains each (potentially merged) object model only once, together with a set of poses where it was observed. We evaluate our pipeline with one public dataset, and on a newly created Google Tango dataset containing four indoor scenes with some of the objects appearing multiple times, both within and across scenes.",
"title": ""
},
{
"docid": "5c754c2fe1536a4e44800eaf7cb516e5",
"text": "This article proposes an original method for grading the colours between different images or shots. The first stage of the method is to find a one-to-one colour mapping that transfers the palette of an example target picture to the original picture. This is performed using an original and parameter free algorithm that is able to transform any N -dimensional probability density function into another one. The proposed algorithm is iterative, non-linear and has a low computational cost. Applying the colour mapping on the original picture allows reproducing the same ‘feel’ as the target picture, but can also increase the graininess of the original picture, especially if the colour dynamic of the two pictures is very different. The second stage of the method is to reduce this grain artefact through an efficient post-processing algorithm that intends to preserve the gradient field of the original picture.",
"title": ""
},
{
"docid": "98d23862436d8ff4d033cfd48692c84d",
"text": "Memory corruption vulnerabilities are the root cause of many modern attacks. Existing defense mechanisms are inadequate; in general, the software-based approaches are not efficient and the hardware-based approaches are not flexible. In this paper, we present hardware-assisted data-flow isolation, or, HDFI, a new fine-grained data isolation mechanism that is broadly applicable and very efficient. HDFI enforces isolation at the machine word granularity by virtually extending each memory unit with an additional tag that is defined by dataflow. This capability allows HDFI to enforce a variety of security models such as the Biba Integrity Model and the Bell -- LaPadula Model. We implemented HDFI by extending the RISC-V instruction set architecture (ISA) and instantiating it on the Xilinx Zynq ZC706 evaluation board. We ran several benchmarks including the SPEC CINT 2000 benchmark suite. Evaluation results show that the performance overhead caused by our modification to the hardware is low (<; 2%). We also developed or ported several security mechanisms to leverage HDFI, including stack protection, standard library enhancement, virtual function table protection, code pointer protection, kernel data protection, and information leak prevention. Our results show that HDFI is easy to use, imposes low performance overhead, and allows us to create more elegant and more secure solutions.",
"title": ""
},
{
"docid": "ac6329671cf9bb43693870bc1f41b6e4",
"text": "We present the Siamese Continuous Bag of Words (Siamese CBOW) model, a neural network for efficient estimation of highquality sentence embeddings. Averaging the embeddings of words in a sentence has proven to be a surprisingly successful and efficient way of obtaining sentence embeddings. However, word embeddings trained with the methods currently available are not optimized for the task of sentence representation, and, thus, likely to be suboptimal. Siamese CBOW handles this problem by training word embeddings directly for the purpose of being averaged. The underlying neural network learns word embeddings by predicting, from a sentence representation, its surrounding sentences. We show the robustness of the Siamese CBOW model by evaluating it on 20 datasets stemming from a wide variety of sources.",
"title": ""
},
{
"docid": "b995fffdb04eae75b85ece3b5dd7724e",
"text": "It is necessary for potential consume to make decision based on online reviews. However, its usefulness brings forth a curse - deceptive opinion spam. The deceptive opinion spam mislead potential customers and organizations reshaping their businesses and prevent opinion-mining techniques from reaching accurate conclusions. Thus, the detection of fake reviews has become more and more fervent. In this work, we attempt to find out how to distinguish between fake reviews and non-fake reviews by using linguistic features in terms of Yelp Filter Dataset. To our surprise, the linguistic features performed well. Further, we proposed a method to extract features based on Latent Dirichlet Allocation. The result of experiment proved that the method is effective.",
"title": ""
},
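The passage above proposes extracting features with Latent Dirichlet Allocation and combining them with linguistic cues to classify reviews. As a hedged illustration of the LDA-feature part only, the scikit-learn pipeline below turns review texts into topic proportions and feeds them to a linear classifier; the topic count, vectorizer settings, and classifier choice are assumptions, not the authors' configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def build_lda_review_classifier(n_topics=50):
    """Topic-distribution (LDA) features feeding a linear fake-review classifier."""
    return Pipeline([
        ("counts", CountVectorizer(stop_words="english", min_df=5)),
        ("lda", LatentDirichletAllocation(n_components=n_topics,
                                          learning_method="online",
                                          random_state=0)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

# Usage:
# clf = build_lda_review_classifier()
# clf.fit(review_texts, is_fake_labels)        # lists of strings / 0-1 labels
# clf.predict(["great place, best pizza ever!!"])
```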
{
"docid": "0df681e77b30e9143f7563b847eca5c6",
"text": "BRIDGE bot is a 158 g, 10.7 × 8.9 × 6.5 cm3, magnetic-wheeled robot designed to traverse and inspect steel bridges. Utilizing custom magnetic wheels, the robot is able to securely adhere to the bridge in any orientation. The body platform features flexible, multi-material legs that enable a variety of plane transitions as well as robot shape manipulation. The robot is equipped with a Cortex-M0 processor, inertial sensors, and a modular wireless radio. A camera is included to provide images for detection and evaluation of identified problems. The robot has been demonstrated moving through plane transitions from 45° to 340° as well as over obstacles up to 9.5 mm in height. Preliminary use of sensor feedback to improve plane transitions has also been demonstrated.",
"title": ""
},
{
"docid": "5ec8018ccc26d1772fa5498c31dc2c71",
"text": "High-content screening (HCS), which combines automated fluorescence microscopy with quantitative image analysis, allows the acquisition of unbiased multiparametric data at the single cell level. This approach has been used to address diverse biological questions and identify a plethora of quantitative phenotypes of varying complexity in numerous different model systems. Here, we describe some recent applications of HCS, ranging from the identification of genes required for specific biological processes to the characterization of genetic interactions. We review the steps involved in the design of useful biological assays and automated image analysis, and describe major challenges associated with each. Additionally, we highlight emerging technologies and future challenges, and discuss how the field of HCS might be enhanced in the future.",
"title": ""
},
{
"docid": "e041d7f54e1298d4aa55edbfcbda71ad",
"text": "Charts are common graphic representation for scientific data in technical and business papers. We present a robust system for detecting and recognizing bar charts. The system includes three stages, preprocessing, detection and recognition. The kernel algorithm in detection is newly developed Modified Probabilistic Hough Transform algorithm for parallel lines clusters detection. The main algorithms in recognition are bar pattern reconstruction and text primitives grouping in the Hough space which are also original. The Experiments show the system can also recognize slant bar charts, or even hand-drawn charts.",
"title": ""
},
{
"docid": "3abd8454fc91eb28e2911872ae8bf3af",
"text": "Graphene sheets—one-atom-thick two-dimensional layers of sp2-bonded carbon—are predicted to have a range of unusual properties. Their thermal conductivity and mechanical stiffness may rival the remarkable in-plane values for graphite (∼3,000 W m-1 K-1 and 1,060 GPa, respectively); their fracture strength should be comparable to that of carbon nanotubes for similar types of defects; and recent studies have shown that individual graphene sheets have extraordinary electronic transport properties. One possible route to harnessing these properties for applications would be to incorporate graphene sheets in a composite material. The manufacturing of such composites requires not only that graphene sheets be produced on a sufficient scale but that they also be incorporated, and homogeneously distributed, into various matrices. Graphite, inexpensive and available in large quantity, unfortunately does not readily exfoliate to yield individual graphene sheets. Here we present a general approach for the preparation of graphene-polymer composites via complete exfoliation of graphite and molecular-level dispersion of individual, chemically modified graphene sheets within polymer hosts. A polystyrene–graphene composite formed by this route exhibits a percolation threshold of ∼0.1 volume per cent for room-temperature electrical conductivity, the lowest reported value for any carbon-based composite except for those involving carbon nanotubes; at only 1 volume per cent, this composite has a conductivity of ∼0.1 S m-1, sufficient for many electrical applications. Our bottom-up chemical approach of tuning the graphene sheet properties provides a path to a broad new class of graphene-based materials and their use in a variety of applications.",
"title": ""
},
{
"docid": "ff8c3ce63b340a682e99540313be7fe7",
"text": "Detecting and identifying any phishing websites in real-time, particularly for e-banking is really a complex and dynamic problem involving many factors and criteria. Because of the subjective considerations and the ambiguities involved in the detection, Fuzzy Data Mining (DM) Techniques can be an effective tool in assessing and identifying phishing websites for e-banking since it offers a more natural way of dealing with quality factors rather than exact values. In this paper, we present novel approach to overcome the ‘fuzziness’ in the e-banking phishing website assessment and propose an intelligent resilient and effective model for detecting e-banking phishing websites. The proposed model is based on Fuzzy logic (FL) combined with Data Mining algorithms to characterize the e-banking phishing website factors and to investigate its techniques by classifying there phishing types and defining six e-banking phishing website attack criteria’s with a layer structure. The proposed e-banking phishing website model showed the significance importance of the phishing website two criteria’s (URL & Domain Identity) and (Security & Encryption) in the final phishing detection rate result, taking into consideration its characteristic association and relationship with each others as showed from the fuzzy data mining classification and association rule algorithms. Our phishing model also showed the insignificant trivial influence of the (Page Style & Content) criteria along with (Social Human Factor) criteria in the phishing detection final rate result.",
"title": ""
},
{
"docid": "5d3977c0a7e3e1a4129693342c6be3d3",
"text": "With the fast advances in nextgen sequencing technology, high-throughput RNA sequencing has emerged as a powerful and cost-effective way for transcriptome study. De novo assembly of transcripts provides an important solution to transcriptome analysis for organisms with no reference genome. However, there lacked understanding on how the different variables affected assembly outcomes, and there was no consensus on how to approach an optimal solution by selecting software tool and suitable strategy based on the properties of RNA-Seq data. To reveal the performance of different programs for transcriptome assembly, this work analyzed some important factors, including k-mer values, genome complexity, coverage depth, directional reads, etc. Seven program conditions, four single k-mer assemblers (SK: SOAPdenovo, ABySS, Oases and Trinity) and three multiple k-mer methods (MK: SOAPdenovo-MK, trans-ABySS and Oases-MK) were tested. While small and large k-mer values performed better for reconstructing lowly and highly expressed transcripts, respectively, MK strategy worked well for almost all ranges of expression quintiles. Among SK tools, Trinity performed well across various conditions but took the longest running time. Oases consumed the most memory whereas SOAPdenovo required the shortest runtime but worked poorly to reconstruct full-length CDS. ABySS showed some good balance between resource usage and quality of assemblies. Our work compared the performance of publicly available transcriptome assemblers, and analyzed important factors affecting de novo assembly. Some practical guidelines for transcript reconstruction from short-read RNA-Seq data were proposed. De novo assembly of C. sinensis transcriptome was greatly improved using some optimized methods.",
"title": ""
},
{
"docid": "c3e2ceebd3868dd9fff2a87fdd339dce",
"text": "Augmented Reality (AR) holds unique and promising potential to bridge between real-world activities and digital experiences, allowing users to engage their imagination and boost their creativity. We propose the concept of Augmented Creativity as employing ar on modern mobile devices to enhance real-world creative activities, support education, and open new interaction possibilities. We present six prototype applications that explore and develop Augmented Creativity in different ways, cultivating creativity through ar interactivity. Our coloring book app bridges coloring and computer-generated animation by allowing children to create their own character design in an ar setting. Our music apps provide a tangible way for children to explore different music styles and instruments in order to arrange their own version of popular songs. In the gaming domain, we show how to transform passive game interaction into active real-world movement that requires coordination and cooperation between players, and how ar can be applied to city-wide gaming concepts. We employ the concept of Augmented Creativity to authoring interactive narratives with an interactive storytelling framework. Finally, we examine how Augmented Creativity can provide a more compelling way to understand complex concepts, such as computer programming.",
"title": ""
}
] |
scidocsrr
|
ee5db77cdddb6ed31569dfe00ba5b72a
|
Question-answering in an industrial setting
|
[
{
"docid": "de43054eb774df93034ffc1976a932b7",
"text": "Recent experiments in programming natural language question-answering systems are reviewed to summarize the methods that have been developed for syntactic, semantic, and logical analysis of English strings. It is concluded that at least minimally effective techniques have been devised for answering questions from natural language subsets in small scale experimental systems and that a useful paradigm has evolved to guide research efforts in the field. Current approaches to semantic analysis and logical inference are seen to be effective beginnings but of questionable generality with respect either to subtle aspects of meaning or to applications over large subsets of English. Generalizing from current small-scale experiments to language-processing systems based on dictionaries with thousands of entries—with correspondingly large grammars and semantic systems—may entail a new order of complexity and require the invention and development of entirely different approaches to semantic analysis and question answering.",
"title": ""
}
] |
[
{
"docid": "88011e53d0ead8909cad9ea755619f60",
"text": "We present a novel approach to the task of word lemmatisation. We formalise lemmatisation as a category tagging task, by describing how a word-to-lemma transformation rule can be encoded in a single label and how a set of such labels can be inferred for a specific language. In this way, a lemmatisation system can be trained and tested using any supervised tagging model. In contrast to previous approaches, the proposed technique allows us to easily integrate relevant contextual information. We test our approach on eight languages reaching a new state-of-the-art level for the lemmatisation task.",
"title": ""
},
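Casting lemmatisation as category tagging, as described above, hinges on encoding each word-to-lemma transformation as a single label that a tagger can predict. One simple way to build such labels is a suffix-edit rule ("cut k trailing characters, append s"); the sketch below shows that encoding and its inverse. The exact label format here is an illustrative assumption, not the paper's rule inventory.

```python
def lemma_rule(word: str, lemma: str) -> str:
    """Encode a word-to-lemma transformation as a compact suffix-edit label.

    The label records how many trailing characters to cut and what suffix
    to append, e.g. ("walking", "walk") -> "cut3|add-", and can be applied
    to unseen words that receive the same tag.
    """
    i = 0
    while i < min(len(word), len(lemma)) and word[i] == lemma[i]:
        i += 1                     # longest common prefix
    cut = len(word) - i            # characters to remove from the word's tail
    add = lemma[i:] or "-"         # suffix to append ("-" marks the empty suffix)
    return f"cut{cut}|add{add}"

def apply_rule(word: str, rule: str) -> str:
    """Apply a suffix-edit label produced by `lemma_rule` to a new word."""
    cut_part, add_part = rule.split("|add")
    cut = int(cut_part[3:])
    add = "" if add_part == "-" else add_part
    return (word[:-cut] if cut else word) + add

assert lemma_rule("walking", "walk") == "cut3|add-"
assert apply_rule("talking", "cut3|add-") == "talk"
assert apply_rule("studied", lemma_rule("studied", "study")) == "study"
```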
{
"docid": "3bf9e696755c939308efbcca363d4f49",
"text": "Robotic navigation requires that the robotic platform have an idea of its location and orientation within the environment. This localization is known as pose estimation, and has been a much researched topic. There are currently two main categories of pose estimation techniques: pose from hardware, and pose from video (PfV). Hardware pose estimation utilizes specialized hardware such as Global Positioning Systems (GPS) and Inertial Navigation Systems (INS) to estimate the position and orientation of the platform at the specified times. PfV systems use video cameras to estimate the pose of the system by calculating the inter-frame motion of the camera from features present in the images. These pose estimation systems are readily integrated, and can be used to augment and/or supplant each other according to the needs of the application. Both pose from video and hardware pose estimation have their uses, but each also has its degenerate cases in which they fail to provide reliable data. Hardware solutions can provide extremely accurate data, but are usually quite pricey and can be restrictive in their environments of operation. Pose from video solutions can be implemented with low-cost off-the-shelf components, but the accuracy of the PfV results can be degraded by noisy imagery, ambiguity in the feature matching process, and moving objects. This paper attempts to evaluate the cost/benefit comparison between pose from video and hardware pose estimation experimentally, and to provide a guide as to which systems should be used under certain scenarios.",
"title": ""
},
{
"docid": "1d03d6f7cd7ff9490dec240a36bf5f65",
"text": "Responses generated by neural conversational models tend to lack informativeness and diversity. We present a novel adversarial learning method, called Adversarial Information Maximization (AIM) model, to address these two related but distinct problems. To foster response diversity, we leverage adversarial training that allows distributional matching of synthetic and real responses. To improve informativeness, we explicitly optimize a variational lower bound on pairwise mutual information between query and response. Empirical results from automatic and human evaluations demonstrate that our methods significantly boost informativeness and diversity.",
"title": ""
},
{
"docid": "df2b4b46461d479ccf3d24d2958f81fd",
"text": "This paper describes a photometric stereo method designed for surfaces with spatially-varying BRDFs, including surfaces with both varying diffuse and specular properties. Our optimization-based method builds on the observation that most objects are composed of a small number of fundamental materials by constraining each pixel to be representable by a combination of at most two such materials. This approach recovers not only the shape but also material BRDFs and weight maps, yielding accurate rerenderings under novel lighting conditions for a wide variety of objects. We demonstrate examples of interactive editing operations made possible by our approach.",
"title": ""
},
{
"docid": "d114be3bb594bb05709ecd0560c36817",
"text": "The term \"papilledema\" describes optic disc swelling resulting from increased intracranial pressure. A complete history and direct funduscopic examination of the optic nerve head and adjacent vessels are necessary to differentiate papilledema from optic disc swelling due to other conditions. Signs of optic disc swelling include elevation and blurring of the disc and its margins, venous congestion, and retinal hard exudates, splinter hemorrhages and infarcts. Patients with papilledema usually present with signs or symptoms of elevated intracranial pressure, such as headache, nausea, vomiting, diplopia, ataxia or altered consciousness. Causes of papilledema include intracranial tumors, idiopathic intracranial hypertension (pseudotumor cerebri), subarachnoid hemorrhage, subdural hematoma and intracranial inflammation. Optic disc edema may also occur from many conditions other than papilledema, including central retinal artery or vein occlusion, congenital structural anomalies and optic neuritis.",
"title": ""
},
{
"docid": "47df1bd26f99313cfcf82430cb98d442",
"text": "To manage supply chain efficiently, e-business organizations need to understand their sales effectively. Previous research has shown that product review plays an important role in influencing sales performance, especially review volume and rating. However, limited attention has been paid to understand how other factors moderate the effect of product review on online sales. This study aims to confirm the importance of review volume and rating on improving sales performance, and further examine the moderating roles of product category, answered questions, discount and review usefulness in such relationships. By analyzing 2939 records of data extracted from Amazon.com using a big data architecture, it is found that review volume and rating have stronger influence on sales rank for search product than for experience product. Also, review usefulness significantly moderates the effects of review volume and rating on product sales rank. In addition, the relationship between review volume and sales rank is significantly moderated by both answered questions and discount. However, answered questions and discount do not have significant moderation effect on the relationship between review rating and sales rank. The findings expand previous literature by confirming important interactions between customer review features and other factors, and the findings provide practical guidelines to manage e-businesses. This study also explains a big data architecture and illustrates the use of big data technologies in testing theoretical",
"title": ""
},
{
"docid": "b8466da90f2e75df2cc8453564ddb3e8",
"text": "Deep neural networks have recently shown impressive classification performance on a diverse set of visual tasks. When deployed in real-world (noise-prone) environments, it is equally important that these classifiers satisfy robustness guarantees: small perturbations applied to the samples should not yield significant losses to the performance of the predictor. The goal of this paper is to discuss the robustness of deep networks to a diverse set of perturbations that may affect the samples in practice, including adversarial perturbations, random noise, and geometric transformations. Our paper further discusses the recent works that build on the robustness analysis to provide geometric insights on the classifier’s decision surface, which help in developing a better understanding of deep nets. The overview finally presents recent solutions that attempt to increase the robustness of deep networks. We hope that this review paper will contribute shedding light on the open research challenges in the robustness of deep networks, and will stir interest in the analysis of their fundamental properties.",
"title": ""
},
{
"docid": "a5a1dd08d612db28770175cc578dd946",
"text": "A novel soft-robotic gripper design is presented, with three soft bending fingers and one passively adaptive palm. Each soft finger comprises two ellipse-profiled pneumatic chambers. Combined with the adaptive palm and the surface patterned feature, the soft gripper could achieve 40-N grasping force in practice, 10 times the self-weight, at a very low actuation pressure below 100 kPa. With novel soft finger design, the gripper could pick up small objects, as well as conform to large convex-shape objects with reliable contact. The fabrication process was presented in detail, involving commercial-grade three-dimensional printing and molding of silicone rubber. The fabricated actuators and gripper were tested on a dedicated platform, showing the gripper could reliably grasp objects of various shapes and sizes, even with external disturbances.",
"title": ""
},
{
"docid": "d9387322d796059173c704194a090304",
"text": "Emotional and neutral sounds rated for valence and arousal were used to investigate the influence of emotions on timing in reproduction and verbal estimation tasks with durations from 2 s to 6 s. Results revealed an effect of emotion on temporal judgment, with emotional stimuli judged to be longer than neutral ones for a similar arousal level. Within scalar expectancy theory (J. Gibbon, R. Church, & W. Meck, 1984), this suggests that emotion-induced activation generates an increase in pacemaker rate, leading to a longer perceived duration. A further exploration of self-assessed emotional dimensions showed an effect of valence and arousal. Negative sounds were judged to be longer than positive ones, indicating that negative stimuli generate a greater increase of activation. High-arousing stimuli were perceived to be shorter than low-arousing ones. Consistent with attentional models of timing, this seems to reflect a decrease of attention devoted to time, leading to a shorter perceived duration. These effects, robust across the 2 tasks, are limited to short intervals and overall suggest that both activation and attentional processes modulate the timing of emotional events.",
"title": ""
},
{
"docid": "067eca04f9a60ae7cc4b77faa478ab22",
"text": "The E. coli cytosine deaminase (CD) provides a negative selection system for suicide gene therapy as CD transfectants are eliminated following 5-fluorocytosine (5FC) treatment. Here we report a positive selection system for the CD gene using 5-fluorouracil (5FU) and cytosine in selection medium to screen for CD-positive transfectants. It is based on the relief of 5FU toxicity by uracil which is converted from cytosine via CD catalysis, as uracil competes with the toxic 5FU in subsequent pyrimidine metabolism. Hence, a retroviral vector containing the CD gene may pro- vide both positive and negative selections after gene transfer. The CD transfectants selected with the positive selection system showed susceptibility to 5FC in subsequent negative selection in vitro and in vivo. Therefore, this dual selection system is useful not only for combination therapy with transgene and CD gene, but can also act to eliminate selectively transduced cells after the transgene has furnished its effects or upon undesired conditions if 5FC is applied for negative selection in vivo.",
"title": ""
},
{
"docid": "06ab903f3de4c498e1977d7d0257f8f3",
"text": "BACKGROUND\nthe analysis of microbial communities through dna sequencing brings many challenges: the integration of different types of data with methods from ecology, genetics, phylogenetics, multivariate statistics, visualization and testing. With the increased breadth of experimental designs now being pursued, project-specific statistical analyses are often needed, and these analyses are often difficult (or impossible) for peer researchers to independently reproduce. The vast majority of the requisite tools for performing these analyses reproducibly are already implemented in R and its extensions (packages), but with limited support for high throughput microbiome census data.\n\n\nRESULTS\nHere we describe a software project, phyloseq, dedicated to the object-oriented representation and analysis of microbiome census data in R. It supports importing data from a variety of common formats, as well as many analysis techniques. These include calibration, filtering, subsetting, agglomeration, multi-table comparisons, diversity analysis, parallelized Fast UniFrac, ordination methods, and production of publication-quality graphics; all in a manner that is easy to document, share, and modify. We show how to apply functions from other R packages to phyloseq-represented data, illustrating the availability of a large number of open source analysis techniques. We discuss the use of phyloseq with tools for reproducible research, a practice common in other fields but still rare in the analysis of highly parallel microbiome census data. We have made available all of the materials necessary to completely reproduce the analysis and figures included in this article, an example of best practices for reproducible research.\n\n\nCONCLUSIONS\nThe phyloseq project for R is a new open-source software package, freely available on the web from both GitHub and Bioconductor.",
"title": ""
},
{
"docid": "b3e32f77fde76eba0adfccdc6878a0f3",
"text": "The paper describes a work in progress on humorous response generation for short-text conversation using information retrieval approach. We gathered a large collection of funny tweets and implemented three baseline retrieval models: BM25, the query term reweighting model based on syntactic parsing and named entity recognition, and the doc2vec similarity model. We evaluated these models in two ways: in situ on a popular community question answering platform and in laboratory settings. The approach proved to be promising: even simple search techniques demonstrated satisfactory performance. The collection, test questions, evaluation protocol, and assessors’ judgments create a ground for future research towards more sophisticated models.",
"title": ""
},
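Of the three baselines mentioned above, BM25 is the most standard; the sketch below shows the usual Okapi BM25 scoring function that such a retrieval baseline would use to rank candidate responses. The parameter values (k1, b) and the IDF variant are conventional defaults, not settings taken from the paper.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freqs, n_docs, avg_doc_len,
               k1=1.5, b=0.75):
    """Okapi BM25 score of one tokenised document for a tokenised query.

    doc_freqs: dict mapping term -> number of documents containing it.
    """
    tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf:
            continue
        df = doc_freqs.get(term, 0)
        idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
        norm_tf = (tf[term] * (k1 + 1)) / (
            tf[term] + k1 * (1 - b + b * doc_len / avg_doc_len))
        score += idf * norm_tf
    return score

# Ranking candidate funny tweets for a query is then just sorting by score:
# ranked = sorted(tweets, key=lambda d: bm25_score(query, d, dfs, N, avg_len),
#                 reverse=True)
```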
{
"docid": "26af6b4795e1864a63da17231651960c",
"text": "In 2020, 146,063 deaths due to pancreatic cancer are estimated to occur in Europe and the United States combined. To identify common susceptibility alleles, we performed the largest pancreatic cancer GWAS to date, including 9040 patients and 12,496 controls of European ancestry from the Pancreatic Cancer Cohort Consortium (PanScan) and the Pancreatic Cancer Case-Control Consortium (PanC4). Here, we find significant evidence of a novel association at rs78417682 (7p12/TNS3, P = 4.35 × 10−8). Replication of 10 promising signals in up to 2737 patients and 4752 controls from the PANcreatic Disease ReseArch (PANDoRA) consortium yields new genome-wide significant loci: rs13303010 at 1p36.33 (NOC2L, P = 8.36 × 10−14), rs2941471 at 8q21.11 (HNF4G, P = 6.60 × 10−10), rs4795218 at 17q12 (HNF1B, P = 1.32 × 10−8), and rs1517037 at 18q21.32 (GRP, P = 3.28 × 10−8). rs78417682 is not statistically significantly associated with pancreatic cancer in PANDoRA. Expression quantitative trait locus analysis in three independent pancreatic data sets provides molecular support of NOC2L as a pancreatic cancer susceptibility gene. Genetic variants associated with susceptibility to pancreatic cancer have been identified using genome wide association studies (GWAS). Here, the authors combine data from over 9000 patients and perform a meta-analysis to identify five novel loci linked to pancreatic cancer.",
"title": ""
},
{
"docid": "395859bbc6c78a8b19eda2ef422dc35b",
"text": "Ann Saudi Med 2006;26(4):318-320 Amelia is the complete absence of a limb, which may occur in isolation or as part of multiple congenital malformations.1-3 The condition is uncommon and very little is known with certainty about the etiology. Whatever the cause, however, it results from an event which must have occurred between the fourth and eighth week of embryogenesis.1,3 The causal factors that have been proposed include amniotic band disruption,4 maternal diabetes,5 autosomal recessive mutation6 and drugs such as thalidomide,7 alcohol8 and cocaine.9 We report a case of a female baby with a complex combination of two rare limb abnormalities: left-sided humero-radial synostosis and amelia of the other limbs.",
"title": ""
},
{
"docid": "ad637c2f2257d129fa41733c9a4ca6e5",
"text": "OBJECTIVE\nTo examine the multivariate nature of risk factors for youth violence including delinquent peer associations, exposure to domestic violence in the home, family conflict, neighborhood stress, antisocial personality traits, depression level, and exposure to television and video game violence.\n\n\nSTUDY DESIGN\nA population of 603 predominantly Hispanic children (ages 10-14 years) and their parents or guardians responded to multiple behavioral measures. Outcomes included aggression and rule-breaking behavior on the Child Behavior Checklist (CBCL), as well as violent and nonviolent criminal activity and bullying behavior.\n\n\nRESULTS\nDelinquent peer influences, antisocial personality traits, depression, and parents/guardians who use psychological abuse in intimate relationships were consistent risk factors for youth violence and aggression. Neighborhood quality, parental use of domestic violence in intimate relationships, and exposure to violent television or video games were not predictive of youth violence and aggression.\n\n\nCONCLUSION\nChildhood depression, delinquent peer association, and parental use of psychological abuse may be particularly fruitful avenues for future prevention or intervention efforts.",
"title": ""
},
{
"docid": "3df95e4b2b1bb3dc80785b25c289da92",
"text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.",
"title": ""
},
{
"docid": "101aac77c19043a3248cf98a4b44fcbe",
"text": "Segmentation of anatomical and pathological structures in ophthalmic images is crucial for the diagnosis and study of ocular diseases. However, manual segmentation is often a time-consuming and subjective process. This paper presents an automatic approach for segmenting retinal layers in Spectral Domain Optical Coherence Tomography images using graph theory and dynamic programming. Results show that this method accurately segments eight retinal layer boundaries in normal adult eyes more closely to an expert grader as compared to a second expert grader.",
"title": ""
},
{
"docid": "46db4cfa5ccb08da3ca884ad794dc419",
"text": "Mutation testing of Python programs raises a problem of incompetent mutants. Incompetent mutants cause execution errors due to inconsistency of types that cannot be resolved before run-time. We present a practical approach in which incompetent mutants can be generated, but the solution is transparent for a user and incompetent mutants are detected by a mutation system during test execution. Experiments with 20 traditional and object-oriented operators confirmed that the overhead can be accepted. The paper presents an experimental evaluation of the first- and higher-order mutation. Four algorithms to the 2nd and 3rd order mutant generation were applied. The impact of code coverage consideration on the process efficiency is discussed. The experiments were supported by the MutPy system for mutation testing of Python programs.",
"title": ""
},
{
"docid": "6f691fa0fb4c80f0d65f616e2db9093b",
"text": "The public demonstration of a Russian-English machine translation system in New York in January 1954 – a collaboration of IBM and Georgetown University – caused a great deal of public interest and much controversy. Although a small-scale experiment of just 250 words and six ‘grammar’ rules it raised expectations of automatic systems capable of high quality translation in the near future. This paper describes the background motivations, the linguistic methods, and the computational techniques of the system.",
"title": ""
},
{
"docid": "ed95c3c25fe1dd3097b5ca84e0569b03",
"text": "The state-of-the-art in semantic segmentation is currently represented by fully convolutional networks (FCNs). However, FCNs use large receptive fields and many pooling layers, both of which cause blurring and low spatial resolution in the deep layers. As a result FCNs tend to produce segmentations that are poorly localized around object boundaries. Prior work has attempted to address this issue in post-processing steps, for example using a color-based CRF on top of the FCN predictions. However, these approaches require additional parameters and low-level features that are difficult to tune and integrate into the original network architecture. Additionally, most CRFs use colorbased pixel affinities, which are not well suited for semantic segmentation and lead to spatially disjoint predictions. To overcome these problems, we introduce a Boundary Neural Field (BNF), which is a global energy model integrating FCN predictions with boundary cues. The boundary information is used to enhance semantic segment coherence and to improve object localization. Specifically, we first show that the convolutional filters of semantic FCNs provide good features for boundary detection. We then employ the predicted boundaries to define pairwise potentials in our energy. Finally, we show that our energy decomposes semantic segmentation into multiple binary problems, which can be relaxed for efficient global optimization. We report extensive experiments demonstrating that minimization of our global boundary-based energy yields results superior to prior globalization methods, both quantitatively as well as qualitatively.",
"title": ""
}
] |
scidocsrr
|
618d6f15c4294a7516991873efc44893
|
Field Mice: Extracting Hand Geometry from Electric Field Measurements
|
[
{
"docid": "24f141bd7a29bb8922fa010dd63181a6",
"text": "This paper reports on the development of a hand to machine interface device that provides real-time gesture, position and orientation information. The key element is a glove and the device as a whole incorporates a collection of technologies. Analog flex sensors on the glove measure finger bending. Hand position and orientation are measured either by ultrasonics, providing five degrees of freedom, or magnetic flux sensors, which provide six degrees of freedom. Piezoceramic benders provide the wearer of the glove with tactile feedback. These sensors are mounted on the light-weight glove and connected to the driving hardware via a small cable.\nApplications of the glove and its component technologies include its use in conjunction with a host computer which drives a real-time 3-dimensional model of the hand allowing the glove wearer to manipulate computer-generated objects as if they were real, interpretation of finger-spelling, evaluation of hand impairment in addition to providing an interface to a visual programming language.",
"title": ""
}
] |
[
{
"docid": "18faba65741b6871517c8050aa6f3a45",
"text": "Individuals differ in the manner they approach decision making, namely their decision-making styles. While some people typically make all decisions fast and without hesitation, others invest more effort into deciding even about small things and evaluate their decisions with much more scrutiny. The goal of the present study was to explore the relationship between decision-making styles, perfectionism and emotional processing in more detail. Specifically, 300 college students majoring in social studies and humanities completed instruments designed for assessing maximizing, decision commitment, perfectionism, as well as emotional regulation and control. The obtained results indicate that maximizing is primarily related to one dimension of perfectionism, namely the concern over mistakes and doubts, as well as emotional regulation and control. Furthermore, together with the concern over mistakes and doubts, maximizing was revealed as a significant predictor of individuals' decision commitment. The obtained findings extend previous reports regarding the association between maximizing and perfectionism and provide relevant insights into their relationship with emotional regulation and control. They also suggest a need to further explore these constructs that are, despite their complex interdependence, typically investigated in separate contexts and domains.",
"title": ""
},
{
"docid": "accad42ca98cd758fd1132e51942cba8",
"text": "The accuracy of face alignment affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions.",
"title": ""
},
{
"docid": "1f5c52945d83872a93749adc0e1a0909",
"text": "Turmeric, derived from the plant Curcuma longa, is a gold-colored spice commonly used in the Indian subcontinent, not only for health care but also for the preservation of food and as a yellow dye for textiles. Curcumin, which gives the yellow color to turmeric, was first isolated almost two centuries ago, and its structure as diferuloylmethane was determined in 1910. Since the time of Ayurveda (1900 B.C) numerous therapeutic activities have been assigned to turmeric for a wide variety of diseases and conditions, including those of the skin, pulmonary, and gastrointestinal systems, aches, pains, wounds, sprains, and liver disorders. Extensive research within the last half century has proven that most of these activities, once associated with turmeric, are due to curcumin. Curcumin has been shown to exhibit antioxidant, antiinflammatory, antiviral, antibacterial, antifungal, and anticancer activities and thus has a potential against various malignant diseases, diabetes, allergies, arthritis, Alzheimer’s disease, and other chronic illnesses. Curcumin can be considered an ideal “Spice for Life”. Curcumin is the most important fraction of turmeric which is responsible for its biological activity. In the present work we have investigated the qualitative and quantitative determination of curcumin in the ethanolic extract of C.longa. Qualitative estimation was carried out by thin layer chromatographic (TLC) method. The total phenolic content of the ethanolic extract of C.longa was found to be 11.24 as mg GAE/g. The simultaneous determination of the pharmacologically important active curcuminoids viz. curcumin, demethoxycurcumin and bisdemethoxycurcumin in Curcuma longa was carried out by spectrophotometric and HPLC techniques. HPLC separation was performed on a Cyber Lab C-18 column (250 x 4.0 mm, 5μ) using acetonitrile and 0.1 % orthophosphoric acid solution in water in the ratio 60 : 40 (v/v) at flow rate of 0.5 mL/min. Detection of curcuminoids were performed at 425 nm.",
"title": ""
},
{
"docid": "6d8a413767d9fab8ef3ca22daaa0e921",
"text": "Query-oriented summarization addresses the problem of information overload and help people get the main ideas within a short time. Summaries are composed by sentences. So, the basic idea of composing a salient summary is to construct quality sentences both for user specific queries and multiple documents. Sentence embedding has been shown effective in summarization tasks. However, these methods lack of the latent topic structure of contents. Hence, the summary lies only on vector space can hardly capture multi-topical content. In this paper, our proposed model incorporates the topical aspects and continuous vector representations, which jointly learns semantic rich representations encoded by vectors. Then, leveraged by topic filtering and embedding ranking model, the summarization can select desirable salient sentences. Experiments demonstrate outstanding performance of our proposed model from the perspectives of prominent topics and semantic coherence.",
"title": ""
},
{
"docid": "88d9c077f588e9e02453bd0ea40cfcae",
"text": "This study explored the prevalence of and motivations behind 'drunkorexia' – restricting food intake prior to drinking alcohol. For both male and female university students (N = 3409), intentionally changing eating behaviour prior to drinking alcohol was common practice (46%). Analyses performed on a targeted sample of women (n = 226) revealed that food restriction prior to alcohol use was associated with greater symptomology than eating more food. Those who restrict eating prior to drinking to avoid weight gain scored higher on measures of disordered eating, whereas those who restrict to get intoxicated faster scored higher on measures of alcohol abuse.",
"title": ""
},
{
"docid": "e5691e6bb32f06a34fab7b692539d933",
"text": "Öz Supplier evaluation and selection includes both qualitative and quantitative criteria and it is considered as a complex Multi Criteria Decision Making (MCDM) problem. Uncertainty and impreciseness of data is an integral part of decision making process for a real life application. The fuzzy set theory allows making decisions under uncertain environment. In this paper, a trapezoidal type 2 fuzzy multicriteria decision making methods based on TOPSIS is proposed to select convenient supplier under vague information. The proposed method is applied to the supplier selection process of a textile firm in Turkey. In addition, the same problem is solved with type 1 fuzzy TOPSIS to confirm the findings of type 2 fuzzy TOPSIS. A sensitivity analysis is conducted to observe how the decision changes under different scenarios. Results show that the presented type 2 fuzzy TOPSIS method is more appropriate and effective to handle the supplier selection in uncertain environment. Tedarikçi değerlendirme ve seçimi, nitel ve nicel çok sayıda faktörün değerlendirilmesini gerektiren karmaşık birçok kriterli karar verme problemi olarak görülmektedir. Gerçek hayatta, belirsizlikler ve muğlaklık bir karar verme sürecinin ayrılmaz bir parçası olarak karşımıza çıkmaktadır. Bulanık küme teorisi, belirsizlik durumunda karar vermemize imkân sağlayan metotlardan bir tanesidir. Bu çalışmada, ikizkenar yamuk tip 2 bulanık TOPSIS yöntemi kısaca tanıtılmıştır. Tanıtılan yöntem, Türkiye’de bir tekstil firmasının tedarikçi seçimi problemine uygulanmıştır. Ayrıca, tip 2 bulanık TOPSIS yönteminin sonuçlarını desteklemek için aynı problem tip 1 bulanık TOPSIS ile de çözülmüştür. Duyarlılık analizi yapılarak önerilen çözümler farklı senaryolar altında incelenmiştir. Duyarlılık analizi sonuçlarına göre tip 2 bulanık TOPSIS daha efektif ve uygun çözümler üretmektedir.",
"title": ""
},
{
"docid": "abb54a0c155805e7be2602265f78ae79",
"text": "In this paper we sketch out a computational theory of spatial cognition motivated by navigational behaviours, ecological requirements, and neural mechanisms as identified in animals and man. Spatial cognition is considered in the context of a cognitive agent built around the action-perception cycle. Besides sensors and effectors, the agent comprises multiple memory structures including a working memory and a longterm memory stage. Spatial longterm memory is modeled along the graph approach, treating recognizable places or poses as nodes and navigational actions as links. Models of working memory and its interaction with reference memory are discussed. The model provides an overall framework of spatial cognition which can be adapted to model different levels of behavioural complexity as well as interactions between working and longterm memory. A number of design questions for building cognitive robots are derived from comparison with biological systems and discussed in the paper.",
"title": ""
},
{
"docid": "85fe68b957a8daa69235ef65d92b1990",
"text": "Although Neural Machine Translation (NMT) models have advanced state-of-the-art performance in machine translation, they face problems like the inadequate translation. We attribute this to that the standard Maximum Likelihood Estimation (MLE) cannot judge the real translation quality due to its several limitations. In this work, we propose an adequacyoriented learning mechanism for NMT by casting translation as a stochastic policy in Reinforcement Learning (RL), where the reward is estimated by explicitly measuring translation adequacy. Benefiting from the sequence-level training of RL strategy and a more accurate reward designed specifically for translation, our model outperforms multiple strong baselines, including (1) standard and coverage-augmented attention models with MLE-based training, and (2) advanced reinforcement and adversarial training strategies with rewards based on both word-level BLEU and character-level CHRF3. Quantitative and qualitative analyses on different language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach.",
"title": ""
},
{
"docid": "bb0f1e1384d91412fe3f0f0a51e91b8a",
"text": "This paper reports on an integrated navigation algorithm for the visual simultaneous localization and mapping (SLAM) robotic area coverage problem. In the robotic area coverage problem, the goal is to explore and map a given target area within a reasonable amount of time. This goal necessitates the use of minimally redundant overlap trajectories for coverage efficiency; however, visual SLAM’s navigation estimate will inevitably drift over time in the absence of loop-closures. Therefore, efficient area coverage and good SLAM navigation performance represent competing objectives. To solve this decision-making problem, we introduce perception-driven navigation, an integrated navigation algorithm that automatically balances between exploration and revisitation using a reward framework. This framework accounts for SLAM localization uncertainty, area coverage performance, and the identification of good candidate regions in the environment for visual perception. Results are shown for both a hybrid simulation and real-world demonstration of a visual SLAM system for autonomous underwater ship hull inspection.",
"title": ""
},
{
"docid": "83bec63fb2932aec5840a9323cc290b4",
"text": "This paper extends fully-convolutional neural networks (FCN) for the clothing parsing problem. Clothing parsing requires higher-level knowledge on clothing semantics and contextual cues to disambiguate fine-grained categories. We extend FCN architecture with a side-branch network which we refer outfit encoder to predict a consistent set of clothing labels to encourage combinatorial preference, and with conditional random field (CRF) to explicitly consider coherent label assignment to the given image. The empirical results using Fashionista and CFPD datasets show that our model achieves state-of-the-art performance in clothing parsing, without additional supervision during training. We also study the qualitative influence of annotation on the current clothing parsing benchmarks, with our Web-based tool for multi-scale pixel-wise annotation and manual refinement effort to the Fashionista dataset. Finally, we show that the image representation of the outfit encoder is useful for dress-up image retrieval application.",
"title": ""
},
{
"docid": "8fac18c1285875aee8e7a366555a4ca3",
"text": "Automatic speech recognition (ASR) has been under the scrutiny of researchers for many years. Speech Recognition System is the ability to listen what we speak, interpreter and perform actions according to spoken information. After so many detailed study and optimization of ASR and various techniques of features extraction, accuracy of the system is still a big challenge. The selection of feature extraction techniques is completely based on the area of study. In this paper, a detailed theory about features extraction techniques like LPC and LPCC is examined. The goal of this paper is to study the comparative analysis of features extraction techniques like LPC and LPCC.",
"title": ""
},
{
"docid": "1cdcb24b61926f37037fbb43e6d379b7",
"text": "The Internet has undergone dramatic changes in the past 2 decades and now forms a global communication platform that billions of users rely on for their daily activities. While this transformation has brought tremendous benefits to society, it has also created new threats to online privacy, such as omnipotent governmental surveillance. As a result, public interest in systems for anonymous communication has drastically increased. In this work, we survey previous research on designing, developing, and deploying systems for anonymous communication. Our taxonomy and comparative assessment provide important insights about the differences between the existing classes of anonymous communication protocols.",
"title": ""
},
{
"docid": "681f36fde6ec060baa76a6722a62ccbc",
"text": "This study determined if any of six endodontic solutions would have a softening effect on resorcinol-formalin paste in extracted teeth, and if there were any differences in the solvent action between these solutions. Forty-nine single-rooted extracted teeth were decoronated 2 mm coronal to the CEJ, and the roots sectioned apically to a standard length of 15 mm. Canals were prepared to a 12 mm WL and a uniform size with a #7 Parapost drill. Teeth were then mounted in a cylinder ring with acrylic. The resorcinol-formalin mixture was placed into the canals and was allowed to set for 60 days in a humidor. The solutions tested were 0.9% sodium chloride, 5.25% sodium hypochlorite, chloroform, Endosolv R (Endosolv R), 3% hydrogen peroxide, and 70% isopropyl alcohol. Seven samples per solution were tested and seven samples using water served as controls. One drop of the solution was placed over the set mixture in the canal, and the depth of penetration of a 1.5-mm probe was measured at 2, 5, 10, and 20 min using a dial micrometer gauge. A repeated-measures ANOVA showed a difference in penetration between the solutions at 10 min (p = 0.04) and at 20 min (p = 0.0004). At 20 min, Endosolv R, had significantly greater penetration than 5.25% sodium hypochlorite (p = 0.0033) and chloroform (p = 0.0018); however, it was not significantly better than the control (p = 0.0812). Although Endosolv R, had statistically superior probe penetration at 20 min, the softening effect could not be detected clinically at this time.",
"title": ""
},
{
"docid": "d2b545b4f9c0e7323760632c65206480",
"text": "This brief presents a quantitative analysis of the operating characteristics of three-phase diode bridge rectifiers with ac-side reactance and constant-voltage loads. We focus on the case where the ac-side currents vary continuously (continuous ac-side conduction mode). This operating mode is of particular importance in alternators and generators, for example. Simple approximate expressions are derived for the line and output current characteristics as well as the input power factor. Expressions describing the necessary operating conditions for continuous ac-side conduction are also developed. The derived analytical expressions are applied to practical examples and both simulations and experimental results are utilized to validate the analytical results. It is shown that the derived expressions are far more accurate than calculations based on traditional constant-current models.",
"title": ""
},
{
"docid": "ec641ace6df07156891f2bf40ea5d072",
"text": "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.",
"title": ""
},
{
"docid": "91eecde9d0e3b67d7af0194782923ead",
"text": "The burden of entry into mobile crowdsensing (MCS) is prohibitively high for human-subject researchers who lack a technical orientation. As a result, the benefits of MCS remain beyond the reach of research communities (e.g., psychologists) whose expertise in the study of human behavior might advance applications and understanding of MCS systems. This paper presents Sensus, a new MCS system for human-subject studies that bridges the gap between human-subject researchers and MCS methods. Sensus alleviates technical burdens with on-device, GUI-based design of sensing plans, simple and efficient distribution of sensing plans to study participants, and uniform participant experience across iOS and Android devices. Sensing plans support many hardware and software sensors, automatic deployment of sensor-triggered surveys, and double-blind assignment of participants within randomized controlled trials. Sensus offers these features to study designers without requiring knowledge of markup and programming languages. We demonstrate the feasibility of using Sensus within two human-subject studies, one in psychology and one in engineering. Feedback from non-technical users indicates that Sensus is an effective and low-burden system for MCS-based data collection and analysis.",
"title": ""
},
{
"docid": "44d4114280e3ab9f6bfa0f0b347114b7",
"text": "Dozens of Electronic Control Units (ECUs) can be found on modern vehicles for safety and driving assistance. These ECUs also introduce new security vulnerabilities as recent attacks have been reported by plugging the in-vehicle system or through wireless access. In this paper, we focus on the security of the Controller Area Network (CAN), which is a standard for communication among ECUs. CAN bus by design does not have sufficient security features to protect it from insider or outsider attacks. Intrusion detection system (IDS) is one of the most effective ways to enhance vehicle security on the insecure CAN bus protocol. We propose a new IDS based on the entropy of the identifier bits in CAN messages. The key observation is that all the known CAN message injection attacks need to alter the CAN ID bits and analyzing the entropy of such bits can be an effective way to detect those attacks. We collected real CAN messages from a vehicle (2016 Ford Fusion) and performed simulated message injection attacks. The experimental results showed that our entropy based IDS can successfully detect all the injection attacks without disrupting the communication on CAN.",
"title": ""
},
{
"docid": "b8e90e97e8522ed45788025ca97ec720",
"text": "The use of Business Intelligence (BI) and Business Analytics for supporting decision-making is widespread in the world of praxis and their relevance for Management Accounting (MA) has been outlined in non-academic literature. Nonetheless, current research on Business Intelligence systems’ implications for the Management Accounting System is still limited. The purpose of this study is to contribute to understanding how BI system implementation and use affect MA techniques and Management Accountants’ role. An explorative field study, which involved BI consultants from Italian consulting companies, was carried out. We used the qualitative field study method since it permits dealing with complex “how” questions and, at the same time, taking into consideration multiple sites thus offering a comprehensive picture of the phenomenon. We found that BI implementation can affect Management Accountants’ expertise and can bring about not only incremental changes in existing Management Accounting techniques but also more relevant ones, by supporting the introduction of new and advanced MA techniques. By identifying changes in the Management Accounting System as well as factors which can prevent or favor a virtuous relationship between BI and Management Accounting Systems this research can be useful both for consultants and for client-companies in effectively managing BI projects.",
"title": ""
},
{
"docid": "9f46ec6dad4a1ebeeabb38f77ad4b1d7",
"text": "This paper proposes a fast and reliable method for anomaly detection and localization in video data showing crowded scenes. Time-efficient anomaly localization is an ongoing challenge and subject of this paper. We propose a cubic-patch-based method, characterised by a cascade of classifiers, which makes use of an advanced feature-learning approach. Our cascade of classifiers has two main stages. First, a light but deep 3D auto-encoder is used for early identification of “many” normal cubic patches. This deep network operates on small cubic patches as being the first stage, before carefully resizing the remaining candidates of interest, and evaluating those at the second stage using a more complex and deeper 3D convolutional neural network (CNN). We divide the deep auto-encoder and the CNN into multiple sub-stages, which operate as cascaded classifiers. Shallow layers of the cascaded deep networks (designed as Gaussian classifiers, acting as weak single-class classifiers) detect “simple” normal patches, such as background patches and more complex normal patches, are detected at deeper layers. It is shown that the proposed novel technique (a cascade of two cascaded classifiers) performs comparable to current top-performing detection and localization methods on standard benchmarks, but outperforms those in general with respect to required computation time.",
"title": ""
},
{
"docid": "9b4ffbbcd97e94524d2598cd862a400a",
"text": "Head pose monitoring is an important task for driver assistance systems, since it is a key indicator for human attention and behavior. However, current head pose datasets either lack complexity or do not adequately represent the conditions that occur while driving. Therefore, we introduce DriveAHead, a novel dataset designed to develop and evaluate head pose monitoring algorithms in real driving conditions. We provide frame-by-frame head pose labels obtained from a motion-capture system, as well as annotations about occlusions of the driver's face. To the best of our knowledge, DriveAHead is the largest publicly available driver head pose dataset, and also the only one that provides 2D and 3D data aligned at the pixel level using the Kinect v2. Existing performance metrics are based on the mean error without any consideration of the bias towards one position or another. Here, we suggest a new performance metric, named Balanced Mean Angular Error, that addresses the bias towards the forward looking position existing in driving datasets. Finally, we present the Head Pose Network, a deep learning model that achieves better performance than current state-of-the-art algorithms, and we analyze its performance when using our dataset.",
"title": ""
}
] |
scidocsrr
|
987e0266c73109191ccbacf73747a6b3
|
Performance optimization of Hadoop cluster using linux services
|
[
{
"docid": "b104337e30aa30db3dadc4e254ed2ad4",
"text": "We live in on-demand, on-command Digital universe with data prolifering by Institutions, Individuals and Machines at a very high rate. This data is categories as \"Big Data\" due to its sheer Volume, Variety and Velocity. Most of this data is unstructured, quasi structured or semi structured and it is heterogeneous in nature. The volume and the heterogeneity of data with the speed it is generated, makes it difficult for the present computing infrastructure to manage Big Data. Traditional data management, warehousing and analysis systems fall short of tools to analyze this data. Due to its specific nature of Big Data, it is stored in distributed file system architectures. Hadoop and HDFS by Apache is widely used for storing and managing Big Data. Analyzing Big Data is a challenging task as it involves large distributed file systems which should be fault tolerant, flexible and scalable. Map Reduce is widely been used for the efficient analysis of Big Data. Traditional DBMS techniques like Joins and Indexing and other techniques like graph search is used for classification and clustering of Big Data. These techniques are being adopted to be used in Map Reduce. In this paper we suggest various methods for catering to the problems in hand through Map Reduce framework over Hadoop Distributed File System (HDFS). Map Reduce is a Minimization technique which makes use of file indexing with mapping, sorting, shuffling and finally reducing. Map Reduce techniques have been studied in this paper which is implemented for Big Data analysis using HDFS.",
"title": ""
}
] |
[
{
"docid": "5ea560095b752ca8e7fb6672f4092980",
"text": "Access control is a security aspect whose requirements evolve with technology advances and, at the same time, contemporary social contexts. Multitudes of access control models grow out of their respective application domains such as healthcare and collaborative enterprises; and even then, further administering means, human factor considerations, and infringement management are required to effectively deploy the model in the particular usage environment. This paper presents a survey of access control mechanisms along with their deployment issues and solutions available today. We aim to give a comprehensive big picture as well as pragmatic deployment details to guide in understanding, setting up and enforcing access control in its real world application.",
"title": ""
},
{
"docid": "bf08bc98eb9ef7a18163fc310b10bcf6",
"text": "An ultra-low voltage, low power, low line sensitivity MOSFET-only sub-threshold voltage reference with no amplifiers is presented. The low sensitivity is realized by the difference between two complementary currents and second-order compensation improves the temperature stability. The bulk-driven technique is used and most of the transistors work in the sub-threshold region, which allow a remarkable reduction in the minimum supply voltage and power consumption. Moreover, a trimming circuit is adopted to compensate the process-related reference voltage variation while the line sensitivity is not affected. The proposed voltage reference has been fabricated in the 0.18 μm 1.8 V CMOS process. The measurement results show that the reference could operate on a 0.45 V supply voltage. For supply voltages ranging from 0.45 to 1.8 V the power consumption is 15.6 nW, and the average temperature coefficient is 59.4 ppm/°C across a temperature range of -40 to 85 °C and a mean line sensitivity of 0.033%. The power supply rejection ratio measured at 100 Hz is -50.3 dB. In addition, the chip area is 0.013 mm2.",
"title": ""
},
{
"docid": "443a4fe9e7484a18aa53a4b142d93956",
"text": "BACKGROUND AND PURPOSE\nFrequency and duration of static stretching have not been extensively examined. Additionally, the effect of multiple stretches per day has not been evaluated. The purpose of this study was to determine the optimal time and frequency of static stretching to increase flexibility of the hamstring muscles, as measured by knee extension range of motion (ROM).\n\n\nSUBJECTS\nNinety-three subjects (61 men, 32 women) ranging in age from 21 to 39 years and who had limited hamstring muscle flexibility were randomly assigned to one of five groups. The four stretching groups stretched 5 days per week for 6 weeks. The fifth group, which served as a control, did not stretch.\n\n\nMETHODS\nData were analyzed with a 5 x 2 (group x test) two-way analysis of variance for repeated measures on one variable (test).\n\n\nRESULTS\nThe change in flexibility appeared to be dependent on the duration and frequency of stretching. Further statistical analysis of the data indicated that the groups that stretched had more ROM than did the control group, but no differences were found among the stretching groups.\n\n\nCONCLUSION AND DISCUSSION\nThe results of this study suggest that a 30-second duration is an effective amount of time to sustain a hamstring muscle stretch in order to increase ROM. No increase in flexibility occurred when the duration of stretching was increased from 30 to 60 seconds or when the frequency of stretching was increased from one to three times per day.",
"title": ""
},
{
"docid": "8709706ffafdadfc2fb9210794dfa782",
"text": "The increasing availability and affordability of wireless building and home automation networks has increased interest in residential and commercial building energy management. This interest has been coupled with an increased awareness of the environmental impact of energy generation and usage. Residential appliances and equipment account for 30% of all energy consumption in OECD countries and indirectly contribute to 12% of energy generation related carbon dioxide (CO2) emissions (International Energy Agency, 2003). The International Energy Association also predicts that electricity usage for residential appliances would grow by 12% between 2000 and 2010, eventually reaching 25% by 2020. These figures highlight the importance of managing energy use in order to improve stewardship of the environment. They also hint at the potential gains that are available through smart consumption strategies targeted at residential and commercial buildings. The challenge is how to achieve this objective without negatively impacting people’s standard of living or their productivity. The three primary purposes of building energy management are the reduction/management of building energy use; the reduction of electricity bills while increasing occupant comfort and productivity; and the improvement of environmental stewardship without adversely affecting standards of living. Building energy management systems provide a centralized platform for managing building energy usage. They detect and eliminate waste, and enable the efficient use electricity resources. The use of widely dispersed sensors enables the monitoring of ambient temperature, lighting, room occupancy and other inputs required for efficient management of climate control (heating, ventilation and air conditioning), security and lighting systems. Lighting and HVAC account for 50% of commercial and 40% of residential building electricity expenditure respectively, indicating that efficiency improvements in these two areas can significantly reduce energy expenditure. These savings can be made through two avenues: the first is through the use of energy-efficient lighting and HVAC systems; and the second is through the deployment of energy management systems which utilize real time price information to schedule loads to minimize energy bills. The latter scheme requires an intelligent power grid or smart grid which can provide bidirectional data flows between customers and utility companies. The smart grid is characterized by the incorporation of intelligenceand bidirectional flows of information and electricity throughout the power grid. These enhancements promise to revolutionize the grid by enabling customers to not only consume but also supply power.",
"title": ""
},
{
"docid": "80fd067dd6cf2fe85ade3c632e82c04c",
"text": "0957-4174/$ see front matter 2009 Elsevier Ltd. A doi:10.1016/j.eswa.2009.03.046 * Corresponding author. Tel.: +98 09126121921. E-mail address: [email protected] (M. Sha Recommender systems are powerful tools that allow companies to present personalized offers to their customers and defined as a system which recommends an appropriate product or service after learning the customers’ preferences and desires. Extracting users’ preferences through their buying behavior and history of purchased products is the most important element of such systems. Due to users’ unlimited and unpredictable desires, identifying their preferences is very complicated process. In most researches, less attention has been paid to user’s preferences varieties in different product categories. This may decrease quality of recommended items. In this paper, we introduced a technique of recommendation in the context of online retail store which extracts user preferences in each product category separately and provides more personalized recommendations through employing product taxonomy, attributes of product categories, web usage mining and combination of two well-known filtering methods: collaborative and content-based filtering. Experimental results show that proposed technique improves quality, as compared to similar approaches. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c7a96129484bbedd063a0b322d9ae3d3",
"text": "BACKGROUND\nNon-invasive detection of aneuploidies in a fetal genome through analysis of cell-free DNA circulating in the maternal plasma is becoming a routine clinical test. Such tests, which rely on analyzing the read coverage or the allelic ratios at single-nucleotide polymorphism (SNP) loci, are not sensitive enough for smaller sub-chromosomal abnormalities due to sequencing biases and paucity of SNPs in a genome.\n\n\nRESULTS\nWe have developed an alternative framework for identifying sub-chromosomal copy number variations in a fetal genome. This framework relies on the size distribution of fragments in a sample, as fetal-origin fragments tend to be smaller than those of maternal origin. By analyzing the local distribution of the cell-free DNA fragment sizes in each region, our method allows for the identification of sub-megabase CNVs, even in the absence of SNP positions. To evaluate the accuracy of our method, we used a plasma sample with the fetal fraction of 13%, down-sampled it to samples with coverage of 10X-40X and simulated samples with CNVs based on it. Our method had a perfect accuracy (both specificity and sensitivity) for detecting 5 Mb CNVs, and after reducing the fetal fraction (to 11%, 9% and 7%), it could correctly identify 98.82-100% of the 5 Mb CNVs and had a true-negative rate of 95.29-99.76%.\n\n\nAVAILABILITY AND IMPLEMENTATION\nOur source code is available on GitHub at https://github.com/compbio-UofT/FSDA CONTACT: : [email protected].",
"title": ""
},
{
"docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db",
"text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.",
"title": ""
},
{
"docid": "88302ac0c35e991b9db407f268fdb064",
"text": "We propose a novel memory architecture for in-memory computation called McDRAM, where DRAM dies are equipped with a large number of multiply accumulate (MAC) units to perform matrix computation for neural networks. By exploiting high internal memory bandwidth and reducing off-chip memory accesses, McDRAM realizes both low latency and energy efficient computation. In our experiments, we obtained the chip layout based on the state-of-the-art memory, LPDDR4 where McDRAM is equipped with 2048 MACs in a single chip package with a small area overhead (4.7%). Compared with the state-of-the-art accelerator, TPU and the power-efficient GPU, Nvidia P4, McDRAM offers <inline-formula> <tex-math notation=\"LaTeX\">$9.5{\\times }$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$14.4{\\times }$ </tex-math></inline-formula> speedup, respectively, in the case that the large-scale MLPs and RNNs adopt the batch size of 1. McDRAM also gives <inline-formula> <tex-math notation=\"LaTeX\">$2.1{\\times }$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$3.7{\\times }$ </tex-math></inline-formula> better computational efficiency in TOPS/W than TPU and P4, respectively, for the large batches.",
"title": ""
},
{
"docid": "3a5dacb4b43f663539108ed1524f0c06",
"text": "This paper describes the design of CMOS receiver electronics for monolithic integration with capacitive micromachined ultrasonic transducer (CMUT) arrays for high-frequency intravascular ultrasound imaging. A custom 8-inch (20-cm) wafer is fabricated in a 0.35-μm two-poly, four-metal CMOS process and then CMUT arrays are built on top of the application specific integrated circuits (ASICs) on the wafer. We discuss advantages of the single-chip CMUT-on-CMOS approach in terms of receive sensitivity and SNR. Low-noise and high-gain design of a transimpedance amplifier (TIA) optimized for a forward-looking volumetric-imaging CMUT array element is discussed as a challenging design example. Amplifier gain, bandwidth, dynamic range, and power consumption trade-offs are discussed in detail. With minimized parasitics provided by the CMUT-on-CMOS approach, the optimized TIA design achieves a 90 fA/√Hz input-referred current noise, which is less than the thermal-mechanical noise of the CMUT element. We show successful system operation with a pulseecho measurement. Transducer-noise-dominated detection in immersion is also demonstrated through output noise spectrum measurement of the integrated system at different CMUT bias voltages. A noise figure of 1.8 dB is obtained in the designed CMUT bandwidth of 10 to 20 MHz.",
"title": ""
},
{
"docid": "59a69e5d33d650ef3e4afc053a98abe6",
"text": "Three-dimensional television (3D-TV) is the next major revolution in television. A successful rollout of 3D-TV will require a backward-compatible transmission/distribution system, inexpensive 3D displays, and an adequate supply of high-quality 3D program material. With respect to the last factor, the conversion of 2D images/videos to 3D will play an important role. This paper provides an overview of automatic 2D-to-3D video conversion with a specific look at a number of approaches for both the extraction of depth information from monoscopic images and the generation of stereoscopic images. Some challenging issues for the success of automatic 2D-to-3D video conversion are pointed out as possible research topics for the future.",
"title": ""
},
{
"docid": "8f360c907e197beb5e6fc82b081c908f",
"text": "This paper describes a 3D object-space paint program. This program allows the user to directly manipulate the parameters used to shade the surface of the 3D shape by applying pigment to its surface. The pigment has all the properties normally associated with material shading models. This includes, but is not limited to, the diffuse color, the specular color, and the surface roughness. The pigment also can have thickness, which is modeled by simultaneously creating a bump map attached to the shape. The output of the paint program is a 3D model with associated texture maps. This information can be used with any rendering program with texture mapping capabilities. Almost all traditional techniques of 2D computer image painting have analogues in 3D object painting, but there are also many new techniques unique to 3D. One example is the use of solid textures to pattern the surface.",
"title": ""
},
{
"docid": "b723616272d078bdbaaae1cf650ace20",
"text": "Most of industrial robots are still programmed using the typical teaching process, through the use of the robot teach pendant. In this paper is proposed an accelerometer-based system to control an industrial robot using two low-cost and small 3-axis wireless accelerometers. These accelerometers are attached to the human arms, capturing its behavior (gestures and postures). An Artificial Neural Network (ANN) trained with a back-propagation algorithm was used to recognize arm gestures and postures, which then will be used as input in the control of the robot. The aim is that the robot starts the movement almost at the same time as the user starts to perform a gesture or posture (low response time). The results show that the system allows the control of an industrial robot in an intuitive way. However, the achieved recognition rate of gestures and postures (92%) should be improved in future, keeping the compromise with the system response time (160 milliseconds). Finally, the results of some tests performed with an industrial robot are presented and discussed.",
"title": ""
},
{
"docid": "d469d31d26d8bc07b9d8dfa8ce277e47",
"text": "BACKGROUND/PURPOSE\nMorbidity in children treated with appendicitis results either from late diagnosis or negative appendectomy. A Prospective analysis of efficacy of Pediatric Appendicitis Score for early diagnosis of appendicitis in children was conducted.\n\n\nMETHODS\nIn the last 5 years, 1,170 children aged 4 to 15 years with abdominal pain suggestive of acute appendicitis were evaluated prospectively. Group 1 (734) were patients with appendicitis and group 2 (436) nonappendicitis. Multiple linear logistic regression analysis of all clinical and investigative parameters was performed for a model comprising 8 variables to form a diagnostic score.\n\n\nRESULTS\nLogistic regression analysis yielded a model comprising 8 variables, all statistically significant, P <.001. These variables in order of their diagnostic index were (1) cough/percussion/hopping tenderness in the right lower quadrant of the abdomen (0.96), (2) anorexia (0.88), (3) pyrexia (0.87), (4) nausea/emesis (0.86), (5) tenderness over the right iliac fossa (0.84), (6) leukocytosis (0.81), (7) polymorphonuclear neutrophilia (0.80) and (8) migration of pain (0.80). Each of these variables was assigned a score of 1, except for physical signs (1 and 5), which were scored 2 to obtain a total of 10. The Pediatric Appendicitis Score had a sensitivity of 1, specificity of 0.92, positive predictive value of 0.96, and negative predictive value of 0.99.\n\n\nCONCLUSION\nPediatric appendicitis score is a simple, relatively accurate diagnostic tool for accessing an acute abdomen and diagnosing appendicitis in children.",
"title": ""
},
{
"docid": "51c0cdb22056a3dc3f2f9b95811ca1ca",
"text": "Technology plays the major role in healthcare not only for sensory devices but also in communication, recording and display device. It is very important to monitor various medical parameters and post operational days. Hence the latest trend in Healthcare communication method using IOT is adapted. Internet of things serves as a catalyst for the healthcare and plays prominent role in wide range of healthcare applications. In this project the PIC18F46K22 microcontroller is used as a gateway to communicate to the various sensors such as temperature sensor and pulse oximeter sensor. The microcontroller picks up the sensor data and sends it to the network through Wi-Fi and hence provides real time monitoring of the health care parameters for doctors. The data can be accessed anytime by the doctor. The controller is also connected with buzzer to alert the caretaker about variation in sensor output. But the major issue in remote patient monitoring system is that the data as to be securely transmitted to the destination end and provision is made to allow only authorized user to access the data. The security issue is been addressed by transmitting the data through the password protected Wi-Fi module ESP8266 which will be encrypted by standard AES128 and the users/doctor can access the data by logging to the html webpage. At the time of extremity situation alert message is sent to the doctor through GSM module connected to the controller. Hence quick provisional medication can be easily done by this system. This system is efficient with low power consumption capability, easy setup, high performance and time to time response.",
"title": ""
},
{
"docid": "d07d6fe33b01fbfb21ba5adc76ec786f",
"text": "Dunaliella salina (Dunal) Teod, a unicellular eukaryotic green alga, is a highly salt-tolerant organism. To identify novel genes with potential roles in salinity tolerance, a salt stress-induced D. salina cDNA library was screened based on the expression in Haematococcus pluvialis, an alga also from Volvocales but one that is hypersensitive to salt. Five novel salt-tolerant clones were obtained from the library. Among them, Ds-26-16 and Ds-A3-3 contained the same open reading frame (ORF) and encoded a 6.1 kDa protein. Transgenic tobacco overexpressing Ds-26-16 and Ds-A3-3 exhibited increased leaf area, stem height, root length, total chlorophyll, and glucose content, but decreased proline content, peroxidase activity, and ascorbate content, and enhanced transcript level of Na+/H+ antiporter salt overly sensitive 1 gene (NtSOS1) expression, compared to those in the control plants under salt condition, indicating that Ds-26-16 enhanced the salt tolerance of tobacco plants. The transcript of Ds-26-16 in D. salina was upregulated in response to salt stress. The expression of Ds-26-16 in Escherichia coli showed that the ORF contained the functional region and changed the protein(s) expression profile. A mass spectrometry assay suggested that the most abundant and smallest protein that changed is possibly a DNA-binding protein or Cold shock-like protein. Subcellular localization analysis revealed that Ds-26-16 was located in the nuclei of onion epidermal cells or nucleoid of E. coli cells. In addition, the possible use of shoots regenerated from leaf discs to quantify the salt tolerance of the transgene at the initial stage of tobacco transformation was also discussed.",
"title": ""
},
{
"docid": "ca32fb4df9c03951e14ce9e06f7d90a0",
"text": "Future wireless local area networks (WLANs) are expected to serve thousands of users in diverse environments. To address the new challenges that WLANs will face, and to overcome the limitations that previous IEEE standards introduced, a new IEEE 802.11 amendment is under development. IEEE 802.11ax aims to enhance spectrum efficiency in a dense deployment; hence system throughput improves. Dynamic Sensitivity Control (DSC) and BSS Color are the main schemes under consideration in IEEE 802.11ax for improving spectrum efficiency In this paper, we evaluate DSC and BSS Color schemes when physical layer capture (PLC) is modelled. PLC refers to the case that a receiver successfully decodes the stronger frame when collision occurs. It is shown, that PLC could potentially lead to fairness issues and higher throughput in specific cases. We study PLC in a small and large scale scenario, and show that PLC could also improve fairness in specific scenarios.",
"title": ""
},
{
"docid": "0acf9ef6e025805a76279d1c6c6c55e7",
"text": "Android mobile devices are enjoying a lion's market share in smartphones and mobile devices. This also attracts malware writers to target the Android platform. Recently, we have discovered a new Android malware distribution channel: releasing malicious firmwares with pre-installed malware to the wild. This poses significant risk since users of mobile devices cannot change the content of the malicious firmwares. Furthermore, pre-installed applications have \" more permissions\" (i.e., silent installation) than other legitimate mobile apps, so they can download more malware or access users' confidential information. To understand and address this new form of malware distribution channel, we design and implement \"DroidRay\": a security evaluation system for customized Android firmwares. DroidRay uses both static and dynamic analyses to evaluate the firmware security on both the application and system levels. To understand the impact of this new malware distribution channel, we analyze 250 Android firmwares and 24,009 pre-installed applications. We reveal how the malicious firmware and pre-installed malware are injected, and discovered 1,947 (8.1%) pre-installed applications have signature vulnerability and 19 (7.6%) firmwares contain pre-installed malware. In addition, 142 (56.8%) firmwares have the default signature vulnerability, five (2.0%) firmwares contain malicious hosts file, at most 40 (16.0%) firmwares have the native level privilege escalation vulnerability and at least 249 (99.6%) firmwares have the Java level privilege escalation vulnerability. Lastly, we investigate a real-world case of a pre-installed zero-day malware known as CEPlugnew, which involves 348,018 infected Android smartphones, and we show its degree and geographical penetration. This shows the significance of this new malware distribution channel, and DroidRay is an effective tool to combat this new form of malware spreading.",
"title": ""
},
{
"docid": "a31287791b12f55adebacbb93a03c8bc",
"text": "Emotional adaptation increases pro-social behavior of humans towards robotic interaction partners. Social cues are an important factor in this context. This work investigates, if emotional adaptation still works under absence of human-like facial Action Units. A human-robot dialog scenario is chosen using NAO pretending to work for a supermarket and involving humans providing object names to the robot for training purposes. In a user study, two conditions are implemented with or without explicit emotional adaptation of NAO to the human user in a between-subjects design. Evaluations of user experience and acceptance are conducted based on evaluated measures of human-robot interaction (HRI). The results of the user study reveal a significant increase of helpfulness (number of named objects), anthropomorphism, and empathy in the explicit emotional adaptation condition even without social cues of facial Action Units, but only in case of prior robot contact of the test persons. Otherwise, an opposite effect is found. These findings suggest, that reduction of these social cues can be overcome by robot experience prior to the interaction task, e.g. realizable by an additional bonding phase, confirming the importance of such from previous work. Additionally, an interaction with academic background of the participants is found.",
"title": ""
},
{
"docid": "6e46fd2a8370bc42d245ca128c9f537b",
"text": "A literature review of the associations between involvement in bullying and depression is presented. Many studies have demonstrated a concurrent association between involvement in bullying and depression in adolescent population samples. Not only victims but also bullies display increased risk of depression, although not all studies have confirmed this for the bullies. Retrospective studies among adults support the notion that victimization is followed by depression. Prospective follow-up studies have suggested both that victimization from bullying may be a risk factor for depression and that depression may predispose adolescents to bullying. Research among clinically referred adolescents is scarce but suggests that correlations between victimization from bullying and depression are likely to be similar in clinical and population samples. Adolescents who bully present with elevated numbers of psychiatric symptoms and psychiatric and social welfare treatment contacts.",
"title": ""
},
{
"docid": "d00f7e5085d5aa9d8ac38f2abc7b5237",
"text": "Data-driven machine learning, in particular deep learning, is improving state-ofthe-art in many healthcare prediction tasks. A current standard protocol is to collect patient data to build, evaluate, and deploy machine learning algorithms for specific age groups (say source domain), which, if not properly trained, can perform poorly on data from other age groups (target domains). In this paper, we address the question of whether it is possible to adapt machine learning models built for one age group to also perform well on other age groups. Additionally, healthcare time series data is also challenging in that it is usually longitudinal and episodic with the potential of having complex temporal relationships. We address these problems with our proposed adversarially trained Variational Adversarial Deep Domain Adaptation (VADDA) model built atop a variational recurrent neural network, which has been shown to be capable of capturing complex temporal latent relationships. We assume and empirically justify that patient data from different age groups can be treated as being similar but different enough to be classified as coming from different domains, requiring the use of domain-adaptive approaches. Through experiments on the MIMIC-III dataset we demonstrate that our model outperforms current state-of-the-art domain adaptation approaches, being (as far as we know) the first to accomplish this for healthcare time-series data.",
"title": ""
}
] |
scidocsrr
|
f27bbb003e3c34e758aa37aaeea9f438
|
Word Embeddings via Tensor Factorization
|
[
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] |
[
{
"docid": "1c94a04fdeb39ba00357e4dcc87d3862",
"text": "Automatic segmentation of speech is an important problem that is useful in speech recognition, synthesis and coding. We explore in this paper, the robust parameter set, weighting function and distance measure for reliable segmentation of noisy speech. It is found that the MFCC parameters, successful in speech recognition. holds the best promise for robust segmentation also. We also explored a variety of symmetric and asymmetric weighting lifters. from which it is found that a symmetric lifter of the form 1 + A sin1/2(πn/L), 0 ≤ n ≤ L − 1, for MFCC dimension L, is most effective. With regard to distance measure, the direct L2 norm is found adequate.",
"title": ""
},
{
"docid": "262be71d64eef2534fab547ec3db6b9a",
"text": "In the past few decades, the rise in attacks on communication devices in networks has resulted in a reduction of network functionality, throughput, and performance. To detect and mitigate these network attacks, researchers, academicians, and practitioners developed Intrusion Detection Systems (IDSs) with automatic response systems. The response system is considered an important component of IDS, since without a timely response IDSs may not function properly in countering various attacks, especially on a real-time basis. To respond appropriately, IDSs should select the optimal response option according to the type of network attack. This research study provides a complete survey of IDSs and Intrusion Response Systems (IRSs) on the basis of our in-depth understanding of the response option for different types of network attacks. Knowledge of the path from IDS to IRS can assist network administrators and network staffs in understanding how to tackle different attacks with state-of-the-art technologies.",
"title": ""
},
{
"docid": "001a01ee23ae07d4efd842024388ecd5",
"text": "Indian Railway Catering and Tourism Corporation Ltd. (IRCTC) launched “IRCTC Connect” mobile application (app) for different mobile platforms for booking/cancellation tickets, but the app usage rate is very low in comparison to IRCTC website and Passenger Reservation System (PRS). This indicates a gap between implementation and adaption of IRCTC Connect. This paper explores the factors influencing the consumer’s behavioral intention to use IRCTC Connect by adapting Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model. Regression analysis is used to analyze total 159 valid responses, collected through survey at MNNIT campus Allahabad, India. The findings of the study illustrate that only three factors Social Influence, Price Value and Habit of UTAUT2 model are significantly influencing the adoption of IRCTC Connect with adjusted R-Square value 0.699. This study will facilitate IRCTC Connect developers to encompass better understanding on consumers’ desires and intention and encourages researchers in this area for longitudinal observation in different backgrounds.",
"title": ""
},
{
"docid": "488110f56eee525ae4f06f21da795f78",
"text": "Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions. In the present work, we extend the usage of LRP to recurrent neural networks. We propose a specific propagation rule applicable to multiplicative connections as they arise in recurrent network architectures such as LSTMs and GRUs. We apply our technique to a word-based bi-directional LSTM model on a five-class sentiment prediction task, and evaluate the resulting LRP relevances both qualitatively and quantitatively, obtaining better results than a gradient-based related method which was used in previous work.",
"title": ""
},
{
"docid": "2fdbe007690a844da8dc3cb306d077f8",
"text": "In this paper, we propose a structured image inpainting method employing an energy based model. In order to learn structural relationship between patterns observed in images and missing regions of the images, we employ an energy-based structured prediction method. The structural relationship is learned by minimizing an energy function which is defined by a simple convolutional neural network. The experimental results on various benchmark datasets show that our proposed method significantly outperforms the state-of-the-art methods which use Generative Adversarial Networks (GANs). We obtained 497.35 mean squared error (MSE) on the Olivetti face dataset compared to 833.0 MSE provided by the state-of-the-art method. Moreover, we obtained 28.4 dB peak signal to noise ratio (PSNR) on the SVHN dataset and 23.53 dB on the CelebA dataset, compared to 22.3 dB and 21.3 dB, provided by the state-of-the-art methods, respectively. The code is publicly available.11https:llgithub.com/cvlab-tohoku/DSEBImageInpainting.",
"title": ""
},
{
"docid": "13f43cf82f6322c2659f08b009c75076",
"text": "The revolution of Internet-of-Things (IoT) is reshaping the modern food supply chains with promising business prospects. To be successful in practice, the IoT solutions should create “income-centric” values beyond the conventional “traceability-centric” values. To accomplish what we promised to users, sensor portfolios and information fusion must correspond to the new requirements introduced by this income-centric value creation. In this paper, we propose a value-centric business-technology joint design framework. Based on it the income-centric added-values including shelf life prediction, sales premium, precision agriculture, and reduction of assurance cost are identified and assessed. Then corresponding sensor portfolios are developed and implemented. Three-tier information fusion architecture is proposed as well as examples about acceleration data processing, self-learning shelf life prediction and real-time supply chain re-planning. The feasibilities of the proposed design framework and solution have been confirmed by the field trials and an implemented prototype system.",
"title": ""
},
{
"docid": "18824d544bae9a0199a974bfac9ff4b8",
"text": "0167-8655/$ see front matter 2012 Elsevier B.V. A http://dx.doi.org/10.1016/j.patrec.2012.07.005 ⇑ Tel.: +852 39438283; fax: +852 26035558. E-mail address: [email protected] Intelligent multi-camera video surveillance is a multidisciplinary field related to computer vision, pattern recognition, signal processing, communication, embedded computing and image sensors. This paper reviews the recent development of relevant technologies from the perspectives of computer vision and pattern recognition. The covered topics include multi-camera calibration, computing the topology of camera networks, multi-camera tracking, object re-identification, multi-camera activity analysis and cooperative video surveillance both with active and static cameras. Detailed descriptions of their technical challenges and comparison of different solutions are provided. It emphasizes the connection and integration of different modules in various environments and application scenarios. According to the most recent works, some problems can be jointly solved in order to improve the efficiency and accuracy. With the fast development of surveillance systems, the scales and complexities of camera networks are increasing and the monitored environments are becoming more and more complicated and crowded. This paper discusses how to face these emerging challenges. 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d9791131cefcf0aa18befb25c12b65b2",
"text": "Medical record linkage is becoming increasingly important as clinical data is distributed across independent sources. To improve linkage accuracy we studied different name comparison methods that establish agreement or disagreement between corresponding names. In addition to exact raw name matching and exact phonetic name matching, we tested three approximate string comparators. The approximate comparators included the modified Jaro-Winkler method, the longest common substring, and the Levenshtein edit distance. We also calculated the combined root-mean square of all three. We tested each name comparison method using a deterministic record linkage algorithm. Results were consistent across both hospitals. At a threshold comparator score of 0.8, the Jaro-Winkler comparator achieved the highest linkage sensitivities of 97.4% and 97.7%. The combined root-mean square method achieved sensitivities higher than the Levenshtein edit distance or long-est common substring while sustaining high linkage specificity. Approximate string comparators increase deterministic linkage sensitivity by up to 10% compared to exact match comparisons and represent an accurate method of linking to vital statistics data.",
"title": ""
},
{
"docid": "0e5eb8191cea7d3a59f192aa32a214c4",
"text": "Recent neural models have shown significant progress on the problem of generating short descriptive texts conditioned on a small number of database records. In this work, we suggest a slightly more difficult data-to-text generation task, and investigate how effective current approaches are on this task. In particular, we introduce a new, large-scale corpus of data records paired with descriptive documents, propose a series of extractive evaluation methods for analyzing performance, and obtain baseline results using current neural generation methods. Experiments show that these models produce fluent text, but fail to convincingly approximate humangenerated documents. Moreover, even templated baselines exceed the performance of these neural models on some metrics, though copyand reconstructionbased extensions lead to noticeable improvements.",
"title": ""
},
{
"docid": "bcd0ec2406753bcb81a345bfbb628691",
"text": "Epigenetic information can be used to identify clinically relevant genomic variants single nucleotide polymorphisms (SNPs) of functional importance in cancer development. Super-enhancers are cell-specific DNA elements, acting to determine tissue or cell identity and driving tumor progression. Although previous approaches have been tried to explain risk associated with SNPs in regulatory DNA elements, so far epigenetic readers such as bromodomain containing protein 4 (BRD4) and super-enhancers have not been used to annotate SNPs. In prostate cancer (PC), androgen receptor (AR) binding sites to chromatin have been used to inform functional annotations of SNPs. Here we establish criteria for enhancer mapping which are applicable to other diseases and traits to achieve the optimal tissue-specific enrichment of PC risk SNPs. We used stratified Q-Q plots and Fisher test to assess the differential enrichment of SNPs mapping to specific categories of enhancers. We find that BRD4 is the key discriminant of tissue-specific enhancers, showing that it is more powerful than AR binding information to capture PC specific risk loci, and can be used with similar effect in breast cancer (BC) and applied to other diseases such as schizophrenia. This is the first study to evaluate the enrichment of epigenetic readers in genome-wide associations studies for SNPs within enhancers, and provides a powerful tool for enriching and prioritizing PC and BC genetic risk loci. Our study represents a proof of principle applicable to other diseases and traits that can be used to redefine molecular mechanisms of human phenotypic variation.",
"title": ""
},
{
"docid": "d79f92819d5485f2631897befd686416",
"text": "Information visualization is meant to support the analysis and comprehension of (often large) datasets through techniques intended to show/enhance features, patterns, clusters and trends, not always visible even when using a graphical representation. During the development of information visualization techniques the designer has to take into account the users' tasks to choose the graphical metaphor as well as the interactive methods to be provided. Testing and evaluating the usability of information visualization techniques are still a research question, and methodologies based on real or experimental users often yield significant results. To be comprehensive, however, experiments with users must rely on a set of tasks that covers the situations a real user will face when using the visualization tool. The present work reports and discusses the results of three case studies conducted as Multi-dimensional In-depth Long-term Case studies. The case studies were carried out to investigate MILCs-based usability evaluation methods for visualization tools.",
"title": ""
},
{
"docid": "c1b34059a896564df02ef984085b93a0",
"text": "Robotics has become a standard tool in outreaching to grades K-12 and attracting students to the STEM disciplines. Performing these activities in the class room usually requires substantial time commitment by the teacher and integration into the curriculum requires major effort, which makes spontaneous and short-term engagements difficult. This paper studies using “Cubelets”, a modular robotic construction kit, which requires virtually no setup time and allows substantial engagement and change of perception of STEM in as little as a 1-hour session. This paper describes the constructivist curriculum and provides qualitative and quantitative results on perception changes with respect to STEM and computer science in particular as a field of study.",
"title": ""
},
{
"docid": "6379e89db7d9063569a342ef2056307a",
"text": "Grounded Theory is a research method that generates theory from data and is useful for understanding how people resolve problems that are of concern to them. Although the method looks deceptively simple in concept, implementing Grounded Theory research can often be confusing in practice. Furthermore, despite many papers in the social science disciplines and nursing describing the use of Grounded Theory, there are very few examples and relevant guides for the software engineering researcher. This paper describes our experience using classical (i.e., Glaserian) Grounded Theory in a software engineering context and attempts to interpret the canons of classical Grounded Theory in a manner that is relevant to software engineers. We provide model to help the software engineering researchers interpret the often fuzzy definitions found in Grounded Theory texts and share our experience and lessons learned during our research. We summarize these lessons learned in a set of fifteen guidelines.",
"title": ""
},
{
"docid": "821cefef9933d6a02ec4b9098f157062",
"text": "Scientists debate whether people grow closer to their friends through social networking sites like Facebook, whether those sites displace more meaningful interaction, or whether they simply reflect existing ties. Combining server log analysis and longitudinal surveys of 3,649 Facebook users reporting on relationships with 26,134 friends, we find that communication on the site is associated with changes in reported relationship closeness, over and above effects attributable to their face-to-face, phone, and email contact. Tie strength increases with both one-on-one communication, such as posts, comments, and messages, and through reading friends' broadcasted content, such as status updates and photos. The effect is greater for composed pieces, such as comments, posts, and messages than for 'one-click' actions such as 'likes.' Facebook has a greater impact on non-family relationships and ties who do not frequently communicate via other channels.",
"title": ""
},
{
"docid": "c043e7a5d5120f5a06ef6decc06c184a",
"text": "Entities are further categorized into those that are the object of the measurement (‘assayed components’) and those, if any, that are subjected to targeted and controlled experimental interventions (‘perturbations/interventions’). These two core categories are related to the concepts ‘perturbagen’ and ‘target’ in the Bioassay Ontology (BAO2) and capture an important aspect of the design of experiments where multiple conditions are compared with each other in order to test whether a given perturbation (e.g., the presence or absence of a drug), causes a given response (e.g., a change in gene expression). Additional categories include ‘experimental variables’, ‘reporters’, ‘normalizing components’ and generic ‘biological components’ (Supplementary Data). We developed a web-based tool with a graphical user interface that allows computer-assisted manual extraction of the metadata model described above at the level of individual figure panels based on the information provided in figure legends and in the images. Files that contain raw or minimally processed data, when available, can furthermore be linked or uploaded and attached to the figure. As proof of principle, we have curated a compendium of over 18,000 experiments published across 23 journals. From the 721 papers processed, 381 papers were related to the field of autophagy, and the rest were annotated during the publication process of accepted manuscripts at four partner molecular biology journals. Both sets of papers were processed identically. Out of the 18,157 experimental panels annotated, 77% included at least one ‘intervention/assayed component’ pair, and this supported the broad applicability of the perturbation-centric SourceData model. We provide a breakdown of entities by categories in Supplementary Figure 1. We note that the presence of a perturbation is not a requirement for the model. As such, the SourceData model is also applicable in cases such as correlative observations. The SourceData model is independent of data type (i.e., image-based or numerical values) and is well suited for cell and molecular biology experiments. 77% of the processed entities were explicitly mentioned in the text of the legend. For the remaining entities, curators added the terms based on the labels directly displayed on the image of the figure. SourceData: a semantic platform for curating and searching figures",
"title": ""
},
{
"docid": "07db8f037ff720c8b8b242879c14531f",
"text": "PURPOSE\nMatriptase-2 (also known as TMPRSS6) is a critical regulator of the iron-regulatory hormone hepcidin in the liver; matriptase-2 cleaves membrane-bound hemojuvelin and consequently alters bone morphogenetic protein (BMP) signaling. Hemojuvelin and hepcidin are expressed in the retina and play a critical role in retinal iron homeostasis. However, no information on the expression and function of matriptase-2 in the retina is available. The purpose of the present study was to examine the retinal expression of matriptase-2 and its role in retinal iron homeostasis.\n\n\nMETHODS\nRT-PCR, quantitative PCR (qPCR), and immunofluorescence were used to analyze the expression of matriptase-2 and other iron-regulatory proteins in the mouse retina. Polarized localization of matriptase-2 in the RPE was evaluated using markers for the apical and basolateral membranes. Morphometric analysis of retinas from wild-type and matriptase-2 knockout (Tmprss6(msk/msk) ) mice was also performed. Retinal iron status in Tmprss6(msk/msk) mice was evaluated by comparing the expression levels of ferritin and transferrin receptor 1 between wild-type and knockout mice. BMP signaling was monitored by the phosphorylation status of Smads1/5/8 and expression levels of Id1 while interleukin-6 signaling was monitored by the phosphorylation status of STAT3.\n\n\nRESULTS\nMatriptase-2 is expressed in the mouse retina with expression detectable in all retinal cell types. Expression of matriptase-2 is restricted to the apical membrane in the RPE where hemojuvelin, the substrate for matriptase-2, is also present. There is no marked difference in retinal morphology between wild-type mice and Tmprss6(msk/msk) mice, except minor differences in specific retinal layers. The knockout mouse retina is iron-deficient, demonstrable by downregulation of the iron-storage protein ferritin and upregulation of transferrin receptor 1 involved in iron uptake. Hepcidin is upregulated in Tmprss6(msk/msk) mouse retinas, particularly in the neural retina. BMP signaling is downregulated while interleukin-6 signaling is upregulated in Tmprss6(msk/msk) mouse retinas, suggesting that the upregulaton of hepcidin in knockout mouse retinas occurs through interleukin-6 signaling and not through BMP signaling.\n\n\nCONCLUSIONS\nThe iron-regulatory serine protease matriptase-2 is expressed in the retina, and absence of this enzyme leads to iron deficiency and increased expression of hemojuvelin and hepcidin in the retina. The upregulation of hepcidin expression in Tmprss6(msk/msk) mouse retinas does not occur via BMP signaling but likely via the proinflammatory cytokine interleukin-6. We conclude that matriptase-2 is a critical participant in retinal iron homeostasis.",
"title": ""
},
{
"docid": "cf5e440f064656488506d90285c7885d",
"text": "A key issue in delay tolerant networks (DTN) is to find the right node to store and relay messages. We consider messages annotated with the unique keywords describing themessage subject, and nodes also adds keywords to describe their mission interests, priority and their transient social relationship (TSR). To offset resource costs, an incentive mechanism is developed over transient social relationships which enrich enroute message content and motivate better semantically related nodes to carry and forward messages. The incentive mechanism ensures avoidance of congestion due to uncooperative or selfish behavior of nodes.",
"title": ""
},
{
"docid": "fd5b9187c6720c3408b5c2324b03905d",
"text": "Recent anchor-based deep face detectors have achieved promising performance, but they are still struggling to detect hard faces, such as small, blurred and partially occluded faces. A reason is that they treat all images and faces equally, without putting more effort on hard ones; however, many training images only contain easy faces, which are less helpful to achieve better performance on hard images. In this paper, we propose that the robustness of a face detector against hard faces can be improved by learning small faces on hard images. Our intuitions are (1) hard images are the images which contain at least one hard face, thus they facilitate training robust face detectors; (2) most hard faces are small faces and other types of hard faces can be easily converted to small faces by shrinking. We build an anchor-based deep face detector, which only output a single feature map with small anchors, to specifically learn small faces and train it by a novel hard image mining strategy. Extensive experiments have been conducted on WIDER FACE, FDDB, Pascal Faces, and AFW datasets to show the effectiveness of our method. Our method achieves APs of 95.7, 94.9 and 89.7 on easy, medium and hard WIDER FACE val dataset respectively, which surpass the previous state-of-the-arts, especially on the hard subset. Code and model are available at https://github.com/bairdzhang/smallhardface.",
"title": ""
},
{
"docid": "9c7afcb568fab9551886174c3f4a329b",
"text": "Automatic semantic annotation of data from databases or the web is an important pre-process for data cleansing and record linkage. It can be used to resolve the problem of imperfect field alignment in a database or identify comparable fields for matching records from multiple sources. The annotation process is not trivial because data values may be noisy, such as abbreviations, variations or misspellings. In particular, overlapping features usually exist in a lexicon-based approach. In this work, we present a probabilistic address parser based on linear-chain conditional random fields (CRFs), which allow more expressive token-level features compared to hidden Markov models (HMMs). In additions, we also proposed two general enhancement techniques to improve the performance. One is taking original semi-structure of the data into account. Another is post-processing of the output sequences of the parser by combining its conditional probability and a score function, which is based on a learned stochastic regular grammar (SRG) that captures segment-level dependencies. Experiments were conducted by comparing the CRF parser to a HMM parser and a semi-Markov CRF parser in two real-world datasets. The CRF parser out-performed the HMM parser and the semi-Markov CRF in both datasets in terms of classification accuracy. Leveraging the structure of the data and combining the linear-chain CRF with the SRG further improved the parser to achieve an accuracy of 97% on a postal dataset and 96% on a company dataset.",
"title": ""
},
{
"docid": "7f1f2de5efadcd46d423257e9c21f3bb",
"text": "Physical layer security is an emerging technique to improve the wireless communication security, which is wide ly regarded as a complement to cryptographic technologies. To design physical layer security techniques under practical scenarios, the uncertainty and imperfections in the channel knowl edge need to be taken into consideration. This paper provides a survey of recent research and development in physical layer security considering the imperfect channel state informat ion (CSI) at communication nodes. We first present an overview of the main information-theoretic measures of the secrecy p erformance with imperfect CSI. Then, we describe several sign al processing enhancements in secure transmission designs, s uch as secure on-off transmission, beamforming with artificialnoise, and secure communication assisted by relay nodes or in cogni tive radio systems. The recent studies of physical layer securit y in large-scale decentralized wireless networks are also summ arized. Finally, the open problems for the on-going and future resea rch are discussed.",
"title": ""
}
] |
scidocsrr
|
756bc74c27f3456113e11efedce4e1f6
|
Word Embeddings based on Fixed-Size Ordinally Forgetting Encoding
|
[
{
"docid": "964af3f588eb025db7cedbe605d0268b",
"text": "In this paper, we propose the new fixedsize ordinally-forgetting encoding (FOFE) method, which can almost uniquely encode any variable-length sequence of words into a fixed-size representation. FOFE can model the word order in a sequence using a simple ordinally-forgetting mechanism according to the positions of words. In this work, we have applied FOFE to feedforward neural network language models (FNN-LMs). Experimental results have shown that without using any recurrent feedbacks, FOFE based FNNLMs can significantly outperform not only the standard fixed-input FNN-LMs but also the popular recurrent neural network (RNN) LMs.",
"title": ""
}
] |
[
{
"docid": "e134a35340fbf5f825d0d64108a171c3",
"text": "The present study investigated relations of anxiety sensitivity and other theoretically relevant personality factors to Copper's [Psychological Assessment 6 (1994) 117.] four categories of substance use motivations as applied to teens' use of alcohol, cigarettes, and marijuana. A sample of 508 adolescents (238 females, 270 males; mean age = 15.1 years) completed the Trait subscale of the State-Trait Anxiety Inventory for Children, the Childhood Anxiety Sensitivity Index (CASI), and the Intensity and Novelty subscales of the Arnett Inventory of Sensation Seeking. Users of each substance also completed the Drinking Motives Questionnaire-Revised (DMQ-R) and/or author-compiled measures for assessing motives for cigarette smoking and marijuana use, respectively. Multiple regression analyses revealed that, in the case of each drug, the block of personality variables predicted \"risky\" substance use motives (i.e., coping, enhancement, and/or conformity motives) over-and-above demographics. High intensity seeking and low anxiety sensitivity predicted enhancement motives for alcohol use, high anxiety sensitivity predicted conformity motives for alcohol and marijuana use, and high trait anxiety predicted coping motives for alcohol and cigarette use. Moreover, anxiety sensitivity moderated the relation between trait anxiety and coping motives for alcohol and cigarette use: the trait anxiety-coping motives relation was stronger for high, than for low, anxiety sensitive individuals. Implications of the findings for improving substance abuse prevention efforts for youth will be discussed.",
"title": ""
},
{
"docid": "268a86c25f1974630fada777790b162b",
"text": "The paper presents a novel method and system for personalised (individualised) modelling of spatio/spectro-temporal data (SSTD) and prediction of events. A novel evolving spiking neural network reservoir system (eSNNr) is proposed for the purpose. The system consists of: spike-time encoding module of continuous value input information into spike trains; a recurrent 3D SNNr; eSNN as an evolving output classifier. Such system is generated for every new individual, using existing data of similar individuals. Subject to proper training and parameter optimisation, the system is capable of accurate spatiotemporal pattern recognition (STPR) and of early prediction of individual events. The method and the system are generic, applicable to various SSTD and classification and prediction problems. As a case study, the method is applied for early prediction of occurrence of stroke on an individual basis. Preliminary experiments demonstrated a significant improvement in accuracy and time of event prediction when using the proposed method when compared with standard machine learning methods, such as MLR, SVM, MLP. Future development and applications are discussed.",
"title": ""
},
{
"docid": "6a6238bb56eacc7d8ecc8f15f753b745",
"text": "Privacy-preservation has emerged to be a major concern in devising a data mining system. But, protecting the privacy of data mining input does not guarantee a privacy-preserved output. This paper focuses on preserving the privacy of data mining output and particularly the output of classification task. Further, instead of static datasets, we consider the classification of continuously arriving data streams: a rapidly growing research area. Due to the challenges of data stream classification such as vast volume, a mixture of labeled and unlabeled instances throughout the stream and timely classifier publication, enforcing privacy-preservation techniques becomes even more challenging. In order to achieve this goal, we propose a systematic method for preserving output-privacy in data stream classification that addresses several applications like loan approval, credit card fraud detection, disease outbreak or biological attack detection. Specifically, we propose an algorithm named Diverse and k-Anonymized HOeffding Tree (DAHOT) that is an amalgamation of popular data stream classification algorithm Hoeffding tree and a variant of k-anonymity and l-diversity principles. The empirical results on real and synthetic data streams verify the effectiveness of DAHOT as compared to its bedrock Hoeffding tree and two other techniques, one that learns sanitized decision trees from sampled data stream and other technique that uses ensemble-based classification. DAHOT guarantees to preserve the private patterns while classifying the data streams accurately.",
"title": ""
},
{
"docid": "379bc1336026fab6225e39b6c69d55a0",
"text": "We show that a recurrent neural network is able to learn a model to represent sequences of communications between computers on a network and can be used to identify outlier network traffic. Defending computer networks is a challenging problem and is typically addressed by manually identifying known malicious actor behavior and then specifying rules to recognize such behavior in network communications. However, these rule-based approaches often generalize poorly and identify only those patterns that are already known to researchers. An alternative approach that does not rely on known malicious behavior patterns can potentially also detect previously unseen patterns. We tokenize and compress netflow into sequences of “words” that form “sentences” representative of a conversation between computers. These sentences are then used to generate a model that learns the semantic and syntactic grammar of the newly generated language. We use Long-Short-Term Memory (LSTM) cell Recurrent Neural Networks (RNN) to capture the complex relationships and nuances of this language. The language model is then used predict the communications between two IPs and the prediction error is used as a measurement of how typical or atyptical the observed communication are. By learning a model that is specific to each network, yet generalized to typical computer-to-computer traffic within and outside the network, a language model is able to identify sequences of network activity that are outliers with respect to the model. We demonstrate positive unsupervised attack identification performance (AUC 0.84) on the ISCX IDS dataset which contains seven days of network activity with normal traffic and four distinct attack patterns.",
"title": ""
},
{
"docid": "d51408ad40bdc9a3a846aaf7da907cef",
"text": "Accessing online information from various data sources has become a necessary part of our everyday life. Unfortunately such information is not always trustworthy, as different sources are of very different qualities and often provide inaccurate and conflicting information. Existing approaches attack this problem using unsupervised learning methods, and try to infer the confidence of the data value and trustworthiness of each source from each other by assuming values provided by more sources are more accurate. However, because false values can be widespread through copying among different sources and out-of-date data often overwhelm up-to-date data, such bootstrapping methods are often ineffective.\n In this paper we propose a semi-supervised approach that finds true values with the help of ground truth data. Such ground truth data, even in very small amount, can greatly help us identify trustworthy data sources. Unlike existing studies that only provide iterative algorithms, we derive the optimal solution to our problem and provide an iterative algorithm that converges to it. Experiments show our method achieves higher accuracy than existing approaches, and it can be applied on very huge data sets when implemented with MapReduce.",
"title": ""
},
{
"docid": "e1ba35e1558540c1b99abf1e05e927fc",
"text": "Device-to-device (D2D) communication underlaying cellular networks brings significant benefits to resource utilization, improving user's throughput and extending battery life of user equipments. However, the allocation of radio resources and power to D2D communication needs elaborate coordination, as D2D communication causes interference to cellular networks. In this paper, we propose a novel joint radio resource and power allocation scheme to improve the performance of the system in the uplink period. Energy efficiency is considered as our optimization objective since devices are handheld equipments with limited battery life. We formulate the the allocation problem as a reverse iterative combinatorial auction game. In the auction, radio resources occupied by cellular users are considered as bidders competing for D2D packages and their corresponding transmit power. We propose an algorithm to solve the allocation problem as an auction game. We also perform numerical simulations to prove the efficacy of the proposed algorithm.",
"title": ""
},
{
"docid": "45233b0580decd90135922ee8991791c",
"text": "In this paper, we present an object recognition and pose estimation framework consisting of a novel global object descriptor, so called Viewpoint oriented Color-Shape Histogram (VCSH), which combines object's color and shape information. During the phase of object modeling and feature extraction, the whole object's color point cloud model is built by registration from multi-view color point clouds. VCSH is trained using partial-view object color point clouds generated from different synthetic viewpoints. During the recognition phase, the object is identified and the closest viewpoint is extracted using the built feature database and object's features from real scene. The estimated closest viewpoint provides a good initialization for object pose estimation optimization using the iterative closest point strategy. Finally, objects in real scene are recognized and their accurate poses are retrieved. A set of experiments is realized where our proposed approach is proven to outperform other existing methods by guaranteeing highly accurate object recognition, fast and accurate pose estimation as well as exhibiting the capability of dealing with environmental illumination changes.",
"title": ""
},
{
"docid": "1fb748012ff900e14861e2b536fbd44c",
"text": "This paper describes the use of data mining techniques to solve three important issues in network intrusion detection problems. The first goal is finding the best dimensionality reduction algorithm which reduces the computational cost while still maintains the accuracy. We implement both feature extraction (Principal Component Analysis and Independent Component Analysis) and feature selection (Genetic Algorithm and Particle Swarm Optimization) techniques for dimensionality reduction. The second goal is finding the best algorithm for misuse detection system to detect known intrusion. We implement four basic machine learning algorithms (Naïve Bayes, Decision Tree, Nearest Neighbour and Rule Induction) and then apply ensemble algorithms such as bagging, boosting and stacking to improve the performance of these four basic algorithms. The third goal is finding the best clustering algorithms to detect network anomalies which contains unknown intrusion. We analyze and compare the performance of four unsupervised clustering algorithms (k-Means, k-Medoids, EM clustering and distance-based outlier detection) in terms of accuracy and false positives. Our experiment shows that the Nearest Neighbour (NN) classifier when implemented with Particle Swarm Optimization (PSO) as an attribute selection algorithm achieved the best performance, which is 99.71% accuracy and 0.27% false positive. The misuse detection technique achieves a very good performance with more than 99% accuracy when detecting known intrusion but it fails to accurately detect data set with a large number of unknown intrusions where the highest accuracy is only 63.97%. In contrast, the anomaly detection approach shows promising results where the distance-based outlier detection method outperforms the other three clustering algorithms with the accuracy of 80.15%, followed by EM clustering (78.06%), k-Medoids (76.71%), improved k-Means (65.40%) and k-Means (57.81%).",
"title": ""
},
{
"docid": "faac043b0c32bad5a44d52b93e468b78",
"text": "Comparative genomic analyses of primates offer considerable potential to define and understand the processes that mold, shape, and transform the human genome. However, primate taxonomy is both complex and controversial, with marginal unifying consensus of the evolutionary hierarchy of extant primate species. Here we provide new genomic sequence (~8 Mb) from 186 primates representing 61 (~90%) of the described genera, and we include outgroup species from Dermoptera, Scandentia, and Lagomorpha. The resultant phylogeny is exceptionally robust and illuminates events in primate evolution from ancient to recent, clarifying numerous taxonomic controversies and providing new data on human evolution. Ongoing speciation, reticulate evolution, ancient relic lineages, unequal rates of evolution, and disparate distributions of insertions/deletions among the reconstructed primate lineages are uncovered. Our resolution of the primate phylogeny provides an essential evolutionary framework with far-reaching applications including: human selection and adaptation, global emergence of zoonotic diseases, mammalian comparative genomics, primate taxonomy, and conservation of endangered species.",
"title": ""
},
{
"docid": "40b129a9960e3d9dc51fa5fbe48eecbc",
"text": "We report the first case of tinea corporis bullosa due to Trichophyton schoenleinii in a 41-year-old Romanian woman, without any involvement of the scalp and hair. The species identification was performed using macroscopic and microscopic features of the dermatophyte and its physiological abilities. Epidemiological aspects of the case are also discussed. The general treatment with terbinafine and topical applications of ciclopiroxolamine cream have led to complete healing, with the lesions disappearing in 2 weeks.",
"title": ""
},
{
"docid": "4628128d1c5cf97fa538a8b750905632",
"text": "A large body of recent work on object detection has focused on exploiting 3D CAD model databases to improve detection performance. Many of these approaches work by aligning exact 3D models to images using templates generated from renderings of the 3D models at a set of discrete viewpoints. However, the training procedures for these approaches are computationally expensive and require gigabytes of memory and storage, while the viewpoint discretization hampers pose estimation performance. We propose an efficient method for synthesizing templates from 3D models that runs on the fly - that is, it quickly produces detectors for an arbitrary viewpoint of a 3D model without expensive dataset-dependent training or template storage. Given a 3D model and an arbitrary continuous detection viewpoint, our method synthesizes a discriminative template by extracting features from a rendered view of the object and decorrelating spatial dependences among the features. Our decorrelation procedure relies on a gradient-based algorithm that is more numerically stable than standard decomposition-based procedures, and we efficiently search for candidate detections by computing FFT-based template convolutions. Due to the speed of our template synthesis procedure, we are able to perform joint optimization of scale, translation, continuous rotation, and focal length using Metropolis-Hastings algorithm. We provide an efficient GPU implementation of our algorithm, and we validate its performance on 3D Object Classes and PASCAL3D+ datasets.",
"title": ""
},
{
"docid": "d1ac9b5ba4b9bcc5be901e0b93664088",
"text": "Autonomous robot is a robot that can perform certain work independently without the human help. Autonomous of navigation is one of the capabilities of autonomous robot to move from one point to another. Implementation of Autonomous robot navigation to explore an unknown environment, requires the robot to explore and map the environment and seek the path to reach a certain point. Path Finding Robot is a mobile robot which moves using wheels with differential steering type. This path finding robot is designed to solve a maze environment that has a size of 5 x 5 cells and it is based on the flood-fill algorithm. Detection of walls and opening in the maze were done using ultrasonic range-finders. The robot was able to learn the maze, find all possible routes and solve it using the shortest one. This robot also use wall follower algorithms to correct the position of the robot against the side wall maze, so the robot can move straight. After several experiments, the robot can explore and map of maze and find the shortest path to destination point with a success rate of 70%.",
"title": ""
},
{
"docid": "c39e29509cbfacc776aeb8733f55d90e",
"text": "Recently, innovative technology like Trackman has made it possible to generate data describing golf swings. In this application paper, we analyze Trackman data from 275 golfers using descriptive statistics and machine learning techniques. The overall goal is to find non-trivial and general patterns in the data that can be used to identify and explain what separates skilled golfers from poor. Experimental results show that random forest models, generated from Trackman data, were able to predict the handicap of a golfer, with a performance comparable to human experts. Based on interpretable predictive models, descriptive statistics and correlation analysis, the most distinguishing property of better golfers is their consistency. In addition, the analysis shows that better players have superior control of the club head at impact and generally hit the ball straighter. A very interesting finding is that better players also tend to swing flatter. Finally, an outright comparison between data describing the club head movement and ball flight data, indicates that a majority of golfers do not hit the ball solid enough for the basic golf theory to apply.",
"title": ""
},
{
"docid": "23ac5c4adf61fad813869882c4d2e7b6",
"text": "Most network simulators do not support security features. In this paper, we introduce a new security module for OMNET++ that implements the IEEE 802.15.4 security suite. This module, developed using the C++ language, can simulate all devices and sensors that implement the IEEE 802.15.4 standard. The OMNET++ security module is also evaluated in terms of quality of services in the presence of physical hop attacks. Results show that our module is reliable and can safely be used by researchers.",
"title": ""
},
{
"docid": "9128e3786ba8d0ab36aa2445d84de91c",
"text": "A technique for the correction of flat or inverted nipples is presented. The procedure is a combination of the square flap method, which better shapes the corrected nipple, and the dermal sling, which provides good support for the repaired nipple.",
"title": ""
},
{
"docid": "004a9fcd8a447f8601b901cff338f133",
"text": "Hybrid precoding has been recently proposed as a cost-effective transceiver solution for millimeter wave systems. While the number of radio frequency chains has been effectively reduced in existing works, a large number of high-precision phase shifters are still needed. Practical phase shifters are with coarsely quantized phases, and their number should be reduced to a minimum due to cost and power consideration. In this paper, we propose a novel hardware-efficient implementation for hybrid precoding, called the fixed phase shifter (FPS) implementation. It only requires a small number of phase shifters with quantized and fixed phases. To enhance the spectral efficiency, a switch network is put forward to provide dynamic connections from phase shifters to antennas, which is adaptive to the channel states. An effective alternating minimization algorithm is developed with closed-form solutions in each iteration to determine the hybrid precoder and the states of switches. Moreover, to further reduce the hardware complexity, a group-connected mapping strategy is proposed to reduce the number of switches. Simulation results show that the FPS fully-connected hybrid precoder achieves higher hardware efficiency with much fewer phase shifters than existing proposals. Furthermore, the group-connected mapping achieves a good balance between spectral efficiency and hardware complexity.",
"title": ""
},
{
"docid": "823b77f034b0f3047760b3ed6a0e2489",
"text": "Current social media products such as Facebook and Twitter have not sufficiently addressed how to help users organize people and content streams across different areas of their lives. We conducted a qualitative design research study to explore how we might best leverage natural models of social organization to improve experiences of social media. We found that participants organize their social worlds based on life 'modes', i.e., family, work and social. They strategically use communication technologies to manage intimacy levels within these modes, and levels of permeability through the boundaries between these modes. Mobile communication in particular enabled participants to aggregate and share content dynamically across life modes. While exploring problems with managing their social media streams, people showed a strong need for focused sharing - the ability to share content only with appropriate audiences within certain areas of life.",
"title": ""
},
{
"docid": "23bc28928a00ba437660efcb1d93c1a8",
"text": "Mental disorders occur in people in all countries, societies and in all ethnic groups, regardless socio-economic order with more frequent anxiety disorders. Through the process of time many treatment have been applied in order to address this complex mental issue. People with anxiety disorders can benefit from a variety of treatments and services. Following an accurate diagnosis, possible treatments include psychological treatments and mediation. Complementary and alternative medicine (CAM) plays a significant role in health care systems. Patients with chronic pain conditions, including arthritis, chronic neck and backache, headache, digestive problems and mental health conditions (including insomnia, depression, and anxiety) were high users of CAM therapies. Aromatherapy is a holistic method of treatment, using essential oils. There are several essential oils that can help in reducing anxiety disorders and as a result the embodied events that they may cause.",
"title": ""
},
{
"docid": "09cffaca68a254f591187776e911d36e",
"text": "Signaling across cellular membranes, the 826 human G protein-coupled receptors (GPCRs) govern a wide range of vital physiological processes, making GPCRs prominent drug targets. X-ray crystallography provided GPCR molecular architectures, which also revealed the need for additional structural dynamics data to support drug development. Here, nuclear magnetic resonance (NMR) spectroscopy with the wild-type-like A2A adenosine receptor (A2AAR) in solution provides a comprehensive characterization of signaling-related structural dynamics. All six tryptophan indole and eight glycine backbone 15N-1H NMR signals in A2AAR were individually assigned. These NMR probes provided insight into the role of Asp522.50 as an allosteric link between the orthosteric drug binding site and the intracellular signaling surface, revealing strong interactions with the toggle switch Trp 2466.48, and delineated the structural response to variable efficacy of bound drugs across A2AAR. The present data support GPCR signaling based on dynamic interactions between two semi-independent subdomains connected by an allosteric switch at Asp522.50.",
"title": ""
},
{
"docid": "741e0f73b414b5eef1ce44bbfdb33646",
"text": "Organizing Web services into functionally similar clusters, is an efficient approach to discovering Web services efficiently. An important aspect of the clustering process is calculating the semantic similarity of Web services. Most current clustering approaches are based on similarity-distance measurement, including keyword, ontology and information-retrieval-based methods. Problems with these approaches include a shortage of high quality ontologies and a loss of semantic information. In addition, there has been little fine-grained improvement in existing approaches to service clustering. In this paper, we present a new approach to grouping Web services into functionally similar clusters by mining Web service documents and generating an ontology via hidden semantic patterns present within the complex terms used in service features to measure similarity. If calculating the similarity using the generated ontology fails, the similarity is calculated by using an information-retrieval-based term-similarity method that adopts term-similarity measuring techniques used by thesaurus and search engines. Another important aspect of high performance in clustering is identifying the most suitable cluster center. To improve the utility of clusters, we propose an approach to identifying the cluster center that combines service similarity with the term frequency-inverse document frequency values of service names. Experimental results show that our clustering approach performs better than existing approaches.",
"title": ""
}
] |
scidocsrr
|
0e00383e9e9c94f96a7df024dd09e5c1
|
Blepharophimosis, ptosis, epicanthus inversus syndrome with translocation and deletion at chromosome 3q23 in a black African female.
|
[
{
"docid": "3a29bbe76a53c8284123019eba7e0342",
"text": "Although von Ammon' first used the term blepharphimosis in 1841, it was Vignes2 in 1889 who first associated blepharophimosis with ptosis and epicanthus inversus. In 1921, Dimitry3 reported a family in which there were 21 affected subjects in five generations. He described them as having ptosis alone and did not specify any other features, although photographs in the report show that they probably had the full syndrome. Dimitry's pedigree was updated by Owens et a/ in 1960. The syndrome appeared in both sexes and was transmitted as a Mendelian dominant. In 1935, Usher5 reviewed the reported cases. By then, 26 pedigrees had been published with a total of 175 affected persons with transmission mainly through affected males. There was no consanguinity in any pedigree. In three pedigrees, parents who obviously carried the gene were unaffected. Well over 150 families have now been reported and there is no doubt about the autosomal dominant pattern of inheritance. However, like Usher,5 several authors have noted that transmission is mainly through affected males and less commonly through affected females.4 6 Reports by Moraine et al7 and Townes and Muechler8 have described families where all affected females were either infertile with primary or secondary amenorrhoea or had menstrual irregularity. Zlotogora et a/9 described one family and analysed 38 families reported previously. They proposed the existence of two types: type I, the more common type, in which the syndrome is transmitted by males only and affected females are infertile, and type II, which is transmitted by both affected females and males. There is male to male transmission in both types and both are inherited as an autosomal dominant trait. They found complete penetrance in type I and slightly reduced penetrance in type II.",
"title": ""
}
] |
[
{
"docid": "c8634e3256cfafeec5232a37f141edf0",
"text": "This paper proposes a novel memory-based online video representation that is efficient, accurate and predictive. This is in contrast to prior works that often rely on computationally heavy 3D convolutions, ignore actual motion when aligning features over time, or operate in an off-line mode to utilize future frames. In particular, our memory (i) holds the feature representation, (ii) is spatially warped over time to compensate for observer and scene motions, (iii) can carry long-term information, and (iv) enables predicting feature representations in future frames. By exploring a variant that operates at multiple temporal scales, we efficiently learn across even longer time horizons. We apply our online framework to object detection in videos, obtaining a large 2.3 times speed-up and losing only 0.9% mAP on ImageNet-VID dataset, compared to prior works that even use future frames. Finally, we demonstrate the predictive property of our representation in two novel detection setups, where features are propagated over time to (i) significantly enhance a real-time detector by more than 10% mAP in a multi-threaded online setup and to (ii) anticipate objects in future frames.",
"title": ""
},
{
"docid": "4aa4f059e626239bb54c2e9d2a3c3005",
"text": "INTRODUCTION\nSequential stages in the development of the hand, wrist, and cervical vertebrae commonly are used to assess maturation and predict the timing of the adolescent growth spurt. This approach is predicated on the idea that forecasts based on skeletal age must, of necessity, be superior to those based on chronologic age. This study was undertaken to test this reasonable, albeit largely unproved, assumption in a large, longitudinal sample.\n\n\nMETHODS\nSerial records of 100 children (50 girls, 50 boys) were chosen from the files of the Bolton-Brush Growth Study Center in Cleveland, Ohio. The 100 series were 6 to 11 years in length, a span that was designed to encompass the onset and the peak of the adolescent facial growth spurt in each subject. Five linear cephalometric measurements (S-Na, Na-Me, PNS-A, S-Go, Go-Pog) were summed to characterize general facial size; a sixth (Co-Gn) was used to assess mandibular length. In all, 864 cephalograms were traced and analyzed. For most years, chronologic age, height, and hand-wrist films were available, thereby permitting various alternative methods of maturational assessment and prediction to be tested. The hand-wrist and the cervical vertebrae films for each time point were staged. Yearly increments of growth for stature, face, and mandible were calculated and plotted against chronologic age. For each subject, the actual age at onset and peak for stature and facial and mandibular size served as the gold standards against which key ages inferred from other methods could be compared.\n\n\nRESULTS\nOn average, the onset of the pubertal growth spurts in height, facial size, and mandibular length occurred in girls at 9.3, 9.8, and 9.5 years, respectively. The difference in timing between height and facial size growth spurts was statistically significant. In boys, the onset for height, facial size, and mandibular length occurred more or less simultaneously at 11.9, 12.0, and 11.9 years, respectively. In girls, the peak of the growth spurt in height, facial size, and mandibular length occurred at 10.9, 11.5, and 11.5 years. Height peaked significantly earlier than both facial size and mandibular length. In boys, the peak in height occurred slightly (but statistically significantly) earlier than did the peaks in the face and mandible: 14.0, 14.4, and 14.3 years. Based on rankings, the hand-wrist stages provided the best indication (lowest root mean squared error) that maturation had advanced to the peak velocity stage. Chronologic age, however, was nearly as good, whereas the vertebral stages were consistently the worst. Errors from the use of statural onset to predict the peak of the pubertal growth spurt in height, facial size, and mandibular length were uniformly lower than for predictions based on the cervical vertebrae. Chronologic age, especially in boys, was a close second.\n\n\nCONCLUSIONS\nThe common assumption that onset and peak occur at ages 12 and 14 years in boys and 10 and 12 years in girls seems correct for boys, but it is 6 months to 1 year late for girls. As an index of maturation, hand-wrist skeletal ages appear to offer the best indication that peak growth velocity has been reached. Of the methods tested here for the prediction of the timing of peak velocity, statural onset had the lowest errors. Although mean chronologic ages were nearly as good, stature can be measured repeatedly and thus might lead to improved prediction of the timing of the adolescent growth spurt.",
"title": ""
},
{
"docid": "cc6111093376f0bae267fe686ecd22cd",
"text": "This paper overviews the diverse information technologies that are used to provide athletes with relevant feedback. Examples taken from various sports are used to illustrate selected applications of technology-based feedback. Several feedback systems are discussed, including vision, audition and proprioception. Each technology described here is based on the assumption that feedback would eventually enhance skill acquisition and sport performance and, as such, its usefulness to athletes and coaches in training is critically evaluated.",
"title": ""
},
{
"docid": "9b44952749ebfdb356ab98843299e788",
"text": "The null space of the within-class scatter matrix is found to express most discriminative information for the small sample size problem (SSSP). The null space-based LDA takes full advantage of the null space while the other methods remove the null space. It proves to be optimal in performance. From the theoretical analysis, we present the NLDA algorithm and the most suitable situation for NLDA. Our method is simpler than all other null space approaches, it saves the computational cost and maintains the performance simultaneously. Furthermore, kernel technique is incorporated into discriminant analysis in the null space. Firstly, all samples are mapped to the kernel space through a better kernel function, called Cosine kernel, which is proposed to increase the discriminating capability of the original polynomial kernel function. Secondly, a truncated NLDA is employed. The novel approach only requires one eigenvalue analysis and is also applicable to the large sample size problem. Experiments are carried out on different face data sets to demonstrate the effectiveness of the proposed methods.",
"title": ""
},
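The null-space LDA recipe summarized in the record above (keep only the null space of the within-class scatter, then maximize the between-class scatter inside it) can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal NumPy illustration under simplifying assumptions (dense data, no preliminary projection step, and the Cosine-kernel extension omitted), with the function name and tolerance chosen here purely for illustration.

```python
import numpy as np

def null_space_lda(X, y, rel_tol=1e-8):
    """Illustrative null-space LDA.
    X: (n_samples, n_features) data matrix, y: class labels.
    Returns a projection matrix W of shape (n_features, n_classes - 1)."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        Sb += Xc.shape[0] * (diff @ diff.T)

    # Basis Z of the null space of Sw: eigenvectors with (near-)zero eigenvalues.
    # In the small sample size setting (d > n_samples) this space is non-empty.
    w_vals, w_vecs = np.linalg.eigh(Sw)
    Z = w_vecs[:, w_vals < rel_tol * w_vals.max()]

    # Maximize the between-class scatter restricted to that null space.
    b_vals, b_vecs = np.linalg.eigh(Z.T @ Sb @ Z)
    top = np.argsort(b_vals)[::-1][: len(classes) - 1]
    return Z @ b_vecs[:, top]
```

Projecting samples with `X @ W` then allows, for example, nearest-centroid classification in the reduced space, which is the usual way such a discriminant projection is used.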
{
"docid": "8326f993dbb83e631d2e6892e03520e7",
"text": "Within NASA, there is an increasing awareness that software is of growing importance to the success of missions. Much data has been collected, and many theories have been advanced on how to reduce or eliminate errors in code. However, learning requires experience. This article documents a new NASA initiative to build a centralized repository of software defect data; in particular, it documents one specific case study on software metrics. Software metrics are used as a basis for prediction of errors in code modules, but there are many different metrics available. McCabe is one of the more popular tools used to produce metrics, but, as will be shown in this paper, other metrics can be more significant.",
"title": ""
},
{
"docid": "55b4e5cfd3d162065d15f8f814c20e1e",
"text": "BACKGROUND\nResearchers have demonstrated moderate evidence for the use of exercise in the treatment of subacromial impingement syndrome (SAIS). Recent evidence also supports eccentric exercise for patients with lower extremity and wrist tendinopathies. However, only a few investigators have examined the effects of eccentric exercise on patients with rotator cuff tendinopathy.\n\n\nPURPOSE\nTo compare the effectiveness of an eccentric progressive resistance exercise (PRE) intervention to a concentric PRE intervention in adults with SAIS.\n\n\nSTUDY DESIGN\nRandomized Clinical Trial.\n\n\nMETHODS\nThirty-four participants with SAIS were randomized into concentric (n = 16, mean age: 48.6 ± 14.6 years) and eccentric (n = 18, mean age: 50.1 ± 16.9 years) exercise groups. Supervised rotator cuff and scapular PRE's were performed twice a week for eight weeks. A daily home program of shoulder stretching and active range of motion (AROM) exercises was performed by both groups. The outcome measures of the Disabilities of the Arm, Shoulder, and Hand (DASH) score, pain-free arm scapular plane elevation AROM, pain-free shoulder abduction and external rotation (ER) strength were assessed at baseline, week five, and week eight of the study.\n\n\nRESULTS\nFour separate 2x3 ANOVAs with repeated measures showed no significant difference in any outcome measure between the two groups over time. However, all participants made significant improvements in all outcome measures from baseline to week five (p < 0.0125). Significant improvements also were found from week five to week eight (p < 0.0125) for all outcome measures except scapular plane elevation AROM.\n\n\nCONCLUSION\nBoth eccentric and concentric PRE programs resulted in improved function, AROM, and strength in patients with SAIS. However, no difference was found between the two exercise modes, suggesting that therapists may use exercises that utilize either exercise mode in their treatment of SAIS.\n\n\nLEVEL OF EVIDENCE\nTherapy, level 1b.",
"title": ""
},
{
"docid": "d62bded822aff38333a212ed1853b53c",
"text": "The design of an activity recognition and monitoring system based on the eWatch, multi-sensor platform worn on different body positions, is presented in this paper. The system identifies the user's activity in realtime using multiple sensors and records the classification results during a day. We compare multiple time domain feature sets and sampling rates, and analyze the tradeoff between recognition accuracy and computational complexity. The classification accuracy on different body positions used for wearing electronic devices was evaluated",
"title": ""
},
{
"docid": "8c24f4e178ebe403da3f90f05b97ac17",
"text": "The success of the Human Genome Project and the powerful tools of molecular biology have ushered in a new era of medicine and nutrition. The pharmaceutical industry expects to leverage data from the Human Genome Project to develop new drugs based on the genetic constitution of the patient; likewise, the food industry has an opportunity to position food and nutritional bioactives to promote health and prevent disease based on the genetic constitution of the consumer. This new era of molecular nutrition--that is, nutrient-gene interaction--can unfold in dichotomous directions. One could focus on the effects of nutrients or food bioactives on the regulation of gene expression (ie, nutrigenomics) or on the impact of variations in gene structure on one's response to nutrients or food bioactives (ie, nutrigenetics). The challenge of the public health nutritionist will be to balance the needs of the community with those of the individual. In this regard, the excitement and promise of molecular nutrition should be tempered by the need to validate the scientific data emerging from the disciplines of nutrigenomics and nutrigenetics and the need to educate practitioners and communicate the value to consumers-and to do it all within a socially responsible bioethical framework.",
"title": ""
},
{
"docid": "f1a36f7fd6b3cf42415c483f6ade768e",
"text": "The current paradigm of genomic studies of complex diseases is association and correlation analysis. Despite significant progress in dissecting the genetic architecture of complex diseases by genome-wide association studies (GWAS), the identified genetic variants by GWAS can only explain a small proportion of the heritability of complex diseases. A large fraction of genetic variants is still hidden. Association analysis has limited power to unravel mechanisms of complex diseases. It is time to shift the paradigm of genomic analysis from association analysis to causal inference. Causal inference is an essential component for the discovery of mechanism of diseases. This paper will review the major platforms of the genomic analysis in the past and discuss the perspectives of causal inference as a general framework of genomic analysis. In genomic data analysis, we usually consider four types of associations: association of discrete variables (DNA variation) with continuous variables (phenotypes and gene expressions), association of continuous variables (expressions, methylations, and imaging signals) with continuous variables (gene expressions, imaging signals, phenotypes, and physiological traits), association of discrete variables (DNA variation) with binary trait (disease status) and association of continuous variables (gene expressions, methylations, phenotypes, and imaging signals) with binary trait (disease status). In this paper, we will review algorithmic information theory as a general framework for causal discovery and the recent development of statistical methods for causal inference on discrete data, and discuss the possibility of extending the association analysis of discrete variable with disease to the causal analysis for discrete variable and disease.",
"title": ""
},
{
"docid": "09b77e632fb0e5dfd7702905e51fc706",
"text": "Most natural videos contain numerous events. For example, in a video of a “man playing a piano”, the video might also contain “another man dancing” or “a crowd clapping”. We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with its unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization.",
"title": ""
},
{
"docid": "011f6529db0dc1dfed11033ed3786759",
"text": "Most modern face super-resolution methods resort to convolutional neural networks (CNN) to infer highresolution (HR) face images. When dealing with very low resolution (LR) images, the performance of these CNN based methods greatly degrades. Meanwhile, these methods tend to produce over-smoothed outputs and miss some textural details. To address these challenges, this paper presents a wavelet-based CNN approach that can ultra-resolve a very low resolution face image of 16 × 16 or smaller pixelsize to its larger version of multiple scaling factors (2×, 4×, 8× and even 16×) in a unified framework. Different from conventional CNN methods directly inferring HR images, our approach firstly learns to predict the LR’s corresponding series of HR’s wavelet coefficients before reconstructing HR images from them. To capture both global topology information and local texture details of human faces, we present a flexible and extensible convolutional neural network with three types of loss: wavelet prediction loss, texture loss and full-image loss. Extensive experiments demonstrate that the proposed approach achieves more appealing results both quantitatively and qualitatively than state-ofthe- art super-resolution methods.",
"title": ""
},
{
"docid": "35b668eeecb71fc1931e139a90f2fd1f",
"text": "In this article we present novel learning methods for estimating the quality of results returned by a search engine in response to a query. Estimation is based on the agreement between the top results of the full query and the top results of its sub-queries. We demonstrate the usefulness of quality estimation for several applications, among them improvement of retrieval, detecting queries for which no relevant content exists in the document collection, and distributed information retrieval. Experiments on TREC data demonstrate the robustness and the effectiveness of our learning algorithms.",
"title": ""
},
{
"docid": "47dc7c546c4f0eb2beb1b251ef9e4a81",
"text": "In this paper we describe AMT, a tool for monitoring temporal properties of continuous signals. We first introduce S TL /PSL, a specification formalism based on the industrial standard language P SL and the real-time temporal logic MITL , extended with constructs that allow describing behaviors of real-valued variables. The tool automatically builds property observers from an STL /PSL specification and checks, in an offlineor incrementalfashion, whether simulation traces satisfy the property. The AMT tool is validated through a Fla sh memory case-study.",
"title": ""
},
{
"docid": "e4e58d00ffdfcc881c0ea934ca6152f2",
"text": "Translating linear temporal logic formulas to automata has proven to be an effective approach for implementing linear-time model-checking, and for obtaining many extensions and improvements to this verification method. On the other hand, for branching temporal logic, automata-theoretic techniques have long been thought to introduce an exponential penalty, making them essentially useless for model-checking. Recently, Bernholtz and Grumberg [1993] have shown that this exponential penalty can be avoided, though they did not match the linear complexity of non-automata-theoretic algorithms. In this paper, we show that alternating tree automata are the key to a comprehensive automata-theoretic framework for branching temporal logics. Not only can they be used to obtain optimal decision procedures, as was shown by Muller et al., but, as we show here, they also make it possible to derive optimal model-checking algorithms. Moreover, the simple combinatorial structure that emerges from the automata-theoretic approach opens up new possibilities for the implementation of branching-time model checking and has enabled us to derive improved space complexity bounds for this long-standing problem.",
"title": ""
},
{
"docid": "e7bbef4600048504c8019ff7fdb4758c",
"text": "Convenient assays for superoxide dismutase have necessarily been of the indirect type. It was observed that among the different methods used for the assay of superoxide dismutase in rat liver homogenate, namely the xanthine-xanthine oxidase ferricytochromec, xanthine-xanthine oxidase nitroblue tetrazolium, and pyrogallol autoxidation methods, a modified pyrogallol autoxidation method appeared to be simple, rapid and reproducible. The xanthine-xanthine oxidase ferricytochromec method was applicable only to dialysed crude tissue homogenates. The xanthine-xanthine oxidase nitroblue tetrazolium method, either with sodium carbonate solution, pH 10.2, or potassium phosphate buffer, pH 7·8, was not applicable to rat liver homogenate even after extensive dialysis. Using the modified pyrogallol autoxidation method, data have been obtained for superoxide dismutase activity in different tissues of rat. The effect of age, including neonatal and postnatal development on the activity, as well as activity in normal and cancerous human tissues were also studied. The pyrogallol method has also been used for the assay of iron-containing superoxide dismutase inEscherichia coli and for the identification of superoxide dismutase on polyacrylamide gels after electrophoresis.",
"title": ""
},
{
"docid": "f44718a0831c9eaa5c73256c6ce31231",
"text": "Plasma concentrations of adiponectin, a novel adipose-specific protein with putative antiatherogenic and antiinflammatory effects, were found to be decreased in Japanese individuals with obesity, type 2 diabetes, and cardiovascular disease, conditions commonly associated with insulin resistance and hyperinsulinemia. To further characterize the relationship between adiponectinemia and adiposity, insulin sensitivity, insulinemia, and glucose tolerance, we measured plasma adiponectin concentrations, body composition (dual-energy x-ray absorptiometry), insulin sensitivity (M, hyperinsulinemic clamp), and glucose tolerance (75-g oral glucose tolerance test) in 23 Caucasians and 121 Pima Indians, a population with a high propensity for obesity and type 2 diabetes. Plasma adiponectin concentration was negatively correlated with percent body fat (r = -0.43), waist-to-thigh ratio (r = -0.46), fasting plasma insulin concentration (r = -0.63), and 2-h glucose concentration (r = -0.38), and positively correlated with M (r = 0.59) (all P < 0.001); all relations were evident in both ethnic groups. In a multivariate analysis, fasting plasma insulin concentration, M, and waist-to-thigh ratio, but not percent body fat or 2-h glucose concentration, were significant independent determinates of adiponectinemia, explaining 47% of the variance (r(2) = 0.47). Differences in adiponectinemia between Pima Indians and Caucasians (7.2 +/- 2.6 vs. 10.2 +/- 4.3 microg/ml, P < 0.0001) and between Pima Indians with normal, impaired, and diabetic glucose tolerance (7.5 +/- 2.7, 6.1 +/- 2.0, 5.5 +/- 1.6 microg/ml, P < 0.0001) remained significant after adjustment for adiposity, but not after additional adjustment for M or fasting insulin concentration. These results confirm that obesity and type 2 diabetes are associated with low plasma adiponectin concentrations in different ethnic groups and indicate that the degree of hypoadiponectinemia is more closely related to the degree of insulin resistance and hyperinsulinemia than to the degree of adiposity and glucose intolerance.",
"title": ""
},
{
"docid": "ae961e9267b1571ec606347f56b0d4ca",
"text": "A benchmark turbulent Backward Facing Step (BFS) airflow was studied in detail through a program of tightly coupled experimental and CFD analysis. The theoretical and experimental approaches were developed simultaneously in a “building block” approach and the results used to verify each “block”. Information from both CFD and experiment was used to develop confidence in the accuracy of each technique and to increase our understanding of the BFS flow.",
"title": ""
},
{
"docid": "dcb79661bc3c89541555be00c7d3d33a",
"text": "With the advent of different kinds of wireless networks and smart phones, Cellular network users are provided with various data connectivity options by Network Service Providers (ISPs) abiding to Service Level Agreement, i.e. regarding to Quality of Service (QoS) of network deployed. Network Performance Metrics (NPMs) are needed to measure the network performance and guarantee the QoS Parameters like Availability, delivery, latency, bandwidth, etc. Two way active measurement protocol (TWAMP) is widely prevalent active measurement approach to measure two-way metrics of networks. In this work, software tool is developed, that enables network user to assess the network performance. There is dearth of tools, which can measure the network performance of wireless networks like Wi-Fi, 3G, etc., Therefore proprietary TWAMP implementation for IPv6 wireless networks on Android platform and indigenous driver development to obtain send/receive timestamps of packets, is proposed, to obtain metrics namely Round-trip delay, Two-way packet Loss, Jitter, Packet Reordering, Packet Duplication and Loss-patterns etc. Analysis of aforementioned metrics indicate QoS of the wireless network under concern and give hints to applications of varying QoS profiles like VOIP, video streaming, etc. to be run at that instant of time or not.",
"title": ""
},
{
"docid": "42f7b11d84110d124a23cdd34545bb93",
"text": "Joint extraction of entities and relations is an important task in information extraction. To tackle this problem, we firstly propose a novel tagging scheme that can convert the joint extraction task to a tagging problem. Then, based on our tagging scheme, we study different end-toend models to extract entities and their relations directly, without identifying entities and relations separately. We conduct experiments on a public dataset produced by distant supervision method and the experimental results show that the tagging based methods are better than most of the existing pipelined and joint learning methods. What’s more, the end-to-end model proposed in this paper, achieves the best results on the public dataset.",
"title": ""
},
{
"docid": "065c12155991b38d36ec1e71cff60ce4",
"text": "The purpose of this chapter is to introduce, analyze, and compare the models of wheeled mobile robots (WMR) and to present several realizations and commonly encountered designs. The mobility of WMR is discussed on the basis of the kinematic constraints resulting from the pure rolling conditions at the contact points between the wheels and the ground. According to this discussion it is shown that, whatever the number and the types of the wheels, all WMR belong to only five generic classes. Different types of models are derived and compared: the posture model versus the configuration model, the kinematic model versus the dynamic model. The structural properties of these models are discussed and compared. These models as well as their properties constitute the background necessary for model-based control design. Practical robot structures are classified according to the number of wheels, and features are introduced focusing on commonly adopted designs. Omnimobile robots and articulated robots realizations are described in more detail.",
"title": ""
}
] |
scidocsrr
|
81f68ffb6fe836778fdf8c09540067e8
|
Personality Measurement and Faking: An Integrative Framework. Aslı Göncü, Çankaya Üniversitesi
|
[
{
"docid": "ada320bb2747d539ff6322bbd46bd9f0",
"text": "Real job applicants completed a 5-factor model personality measure as part of the job application process. They were rejected; 6 months later they (n = 5,266) reapplied for the same job and completed the same personality measure. Results indicated that 5.2% or fewer improved their scores on any scale on the 2nd occasion; moreover, scale scores were as likely to change in the negative direction as the positive. Only 3 applicants changed scores on all 5 scales beyond a 95% confidence threshold. Construct validity of the personality scales remained intact across the 2 administrations, and the same structural model provided an acceptable fit to the scale score matrix on both occasions. For the small number of applicants whose scores changed beyond the standard error of measurement, the authors found the changes were systematic and predictable using measures of social skill, social desirability, and integrity. Results suggest that faking on personality measures is not a significant problem in real-world selection settings.",
"title": ""
}
] |
[
{
"docid": "2b4b639973f54bdd7b987d5bc9bb3978",
"text": "Computational stereo is one of the classical problems in computer vision. Numerous algorithms and solutions have been reported in recent years focusing on developing methods for computing similarity, aggregating it to obtain spatial support and finally optimizing an energy function to find the final disparity. In this paper, we focus on the feature extraction component of stereo matching architecture and we show standard CNNs operation can be used to improve the quality of the features used to find point correspondences. Furthermore, we propose a simple space aggregation that hugely simplifies the correlation learning problem. Our results on benchmark data are compelling and show promising potential even without refining the solution.",
"title": ""
},
{
"docid": "76d4ed8e7692ca88c6b5a70c9954c0bd",
"text": "Custom-tailored products are meant by the products having various sizes and shapes to meet the customer’s different tastes or needs. Thus fabrication of custom-tailored products inherently involves inefficiency. Custom-tailoring shoes are not an exception because corresponding shoe-lasts must be custom-ordered. It would be nice if many template shoe-lasts had been cast in advance, the most similar template was identified automatically from the custom-ordered shoe-last, and only the different portions in the template shoe-last could be machined. To enable this idea, the first step is to derive the geometric models of template shoe-lasts to be cast. Template shoe-lasts can be derived by grouping all the various existing shoe-lasts into manageable number of groups and by uniting all the shoe-lasts in each group such that each template shoe-last for each group barely encloses all the shoe-lasts in the group. For grouping similar shoe-lasts into respective groups, similarity between shoe-lasts should be quantized. Similarity comparison starts with the determination of the closest pose between two shapes in consideration. The closest pose is derived by comparing the ray distances while one shape is virtually rotated with respect to the other. Shape similarity value and overall similarity value calculated from ray distances are also used for grouping. A prototype system based on the proposed methodology has been implemented and applied to grouping of the shoe-lasts of various shapes and sizes and deriving template shoe-lasts. q 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6afcc3c2e0c67823348cf89a0dfec9db",
"text": "BACKGROUND\nThe consumption of dietary protein is important for resistance-trained individuals. It has been posited that intakes of 1.4 to 2.0 g/kg/day are needed for physically active individuals. Thus, the purpose of this investigation was to determine the effects of a very high protein diet (4.4 g/kg/d) on body composition in resistance-trained men and women.\n\n\nMETHODS\nThirty healthy resistance-trained individuals participated in this study (mean ± SD; age: 24.1 ± 5.6 yr; height: 171.4 ± 8.8 cm; weight: 73.3 ± 11.5 kg). Subjects were randomly assigned to one of the following groups: Control (CON) or high protein (HP). The CON group was instructed to maintain the same training and dietary habits over the course of the 8 week study. The HP group was instructed to consume 4.4 grams of protein per kg body weight daily. They were also instructed to maintain the same training and dietary habits (e.g. maintain the same fat and carbohydrate intake). Body composition (Bod Pod®), training volume (i.e. volume load), and food intake were determined at baseline and over the 8 week treatment period.\n\n\nRESULTS\nThe HP group consumed significantly more protein and calories pre vs post (p < 0.05). Furthermore, the HP group consumed significantly more protein and calories than the CON (p < 0.05). The HP group consumed on average 307 ± 69 grams of protein compared to 138 ± 42 in the CON. When expressed per unit body weight, the HP group consumed 4.4 ± 0.8 g/kg/d of protein versus 1.8 ± 0.4 g/kg/d in the CON. There were no changes in training volume for either group. Moreover, there were no significant changes over time or between groups for body weight, fat mass, fat free mass, or percent body fat.\n\n\nCONCLUSIONS\nConsuming 5.5 times the recommended daily allowance of protein has no effect on body composition in resistance-trained individuals who otherwise maintain the same training regimen. This is the first interventional study to demonstrate that consuming a hypercaloric high protein diet does not result in an increase in body fat.",
"title": ""
},
{
"docid": "bcae6eb2ad3a379f889ec9fea12d203b",
"text": "Within the last few decades inkjet printing has grown into a mature noncontact patterning method, since it can produce large-area patterns with high resolution at relatively high speeds while using only small amounts of functional materials. The main fields of interest where inkjet printing can be applied include the manufacturing of radiofrequency identification (RFID) tags, organic thin-film transistors (OTFTs), and electrochromic devices (ECDs), and are focused on the future of plastic electronics. In view of these applications on polymer foils, micrometersized conductive features on flexible substrates are essential. To fabricate conductive features onto polymer substrates, solutionprocessable materials are often used. The most frequently used are dispersions of silver nanoparticles in an organic solvent. Inks of silver nanoparticle dispersions are relatively easy to prepare and, moreover, silver has the lowest resistivity of all metals (1.59mV cm). After printing and evaporation of the solvent, the particles require a thermal-processing step to render the features conductive by removing the organic binder that is present around the nanoparticles. In nonpolar solvents, long alkyl chains with a polar head, like thiols or carboxylic acids, are usually used to stabilize the nanoparticles. Steric stabilization of these particles in nonpolar solvents substantially screens van der Waals attractions and introduces steep steric repulsion between the particles at contact, which avoids agglomeration. In addition, organic binders are often added to the ink to assure not only mechanical integrity and adhesion to the substrate, but also to promote the printability of the ink. Nanoparticles with a diameter below 50 nmhave a significantly reduced sintering temperature, typically between 160 and 300 8C, which is well below the melting temperature of the bulk material (Tm1⁄4 963 8C). Despite these low sintering temperatures conventional heating methods are still not compatible with common polymer foils, such as polycarbonate (PC) and polyethylene terephthalate (PET), due to their low glass-transition temperatures (Tg). In fact, only the expensive high-performance polymers, like polytetrafluoroethylene (PTFE), poly(ether ether ketone) (PEEK), and polyimide (PI) can be used at these temperatures. This represents, however, a significant drawback for the implementation in a large-area production of plastic electronics, being unfavorable in terms of costs. Furthermore, the long sintering time of 60min or more that is generally required to create conductive features also obstructs industrial implementation. Therefore, other techniques have to be used in order to facilitate fast and selective heating of materials. One selective technique for nanoparticle sintering that has been described in literature is based on an argon-ion laser beam that follows the as-printed feature and selectively sinters the central region. Features with a line width smaller than 10mm have been created with this technique. However, the large overall thermal energy impact together with the low writing speed of 0.2mm s 1 of the translational stage are limiting factors. A faster alternative to selectively heat silver nanoparticles is to use microwave radiation. Ceramics and other dielectric materials can be heated by microwaves due to dielectric losses that are caused by dipole polarization. 
Under ambient conditions, however, metals behave as reflectors for microwave radiation, because of their small skin depth, which is defined as the distance at which the incident power is reduced to half of its initial value. The small skin depth results from the high conductance s and the high dielectric loss factor e00 together with a small capacitance. When instead of bulk material, the metal consists of particles and/or is heated to at least 400 8C, the materials absorbs microwave radiation to a greater extent. It is believed that the conductive particle interaction with microwave radiation, i.e., inductive coupling, is mainly based on Maxwell–Wagner polarization, which results from the accumulation of charge at the materials interfaces, electric conduction, and eddy currents. However, the main reasons for successful heating of metallic particles through microwave radiation are not yet fully understood. In contrast to the relatively strongmicrowave absorption by the conductive particles, the polarization of dipoles in thermoplastic polymers below the Tg is limited, which makes the polymer foil’s skin depth almost infinite, hence transparent, to microwave radiation. Therefore, only the conductive particles absorb the microwaves and can be sintered selectively. Recently, it has been shown that it is possible to create conductive printed features with microwave radiation within 3–4min. The resulting conductivity, however, is only approximately 5% of the bulk silver value. In this contribution, we present a study on antenna-supported microwave sintering of conducted features on polymer foils. We",
"title": ""
},
{
"docid": "e66fb8ed9e26b058a419d34d9c015a4c",
"text": "Children and adolescents now communicate online to form and/or maintain relationships with friends, family, and strangers. Relationships in \"real life\" are important for children's and adolescents' psychosocial development; however, they can be difficult for those who experience feelings of loneliness and/or social anxiety. The aim of this study was to investigate differences in usage of online communication patterns between children and adolescents with and without self-reported loneliness and social anxiety. Six hundred twenty-six students ages 10 to 16 years completed a survey on the amount of time they spent communicating online, the topics they discussed, the partners they engaged with, and their purposes for communicating over the Internet. Participants were administered a shortened version of the UCLA Loneliness Scale and an abbreviated subscale of the Social Anxiety Scale for Adolescents (SAS-A). Additionally, age and gender differences in usage of the online communication patterns were examined across the entire sample. Findings revealed that children and adolescents who self-reported being lonely communicated online significantly more frequently about personal and intimate topics than did those who did not self-report being lonely. The former were motivated to use online communication significantly more frequently to compensate for their weaker social skills to meet new people. Results suggest that Internet usage allows them to fulfill critical needs of social interactions, self-disclosure, and identity exploration. Future research, however, should explore whether or not the benefits derived from online communication may also facilitate lonely children's and adolescents' offline social relationships.",
"title": ""
},
{
"docid": "fe48a551dfbe397b7bcf52e534dfcf00",
"text": "This meta-analysis of 12 dependent variables from 9 quantitative studies comparing music to no-music conditions during treatment of children and adolescents with autism resulted in an overall effect size of d =.77 and a mean weighted correlation of r =.36 (p =.00). Since the confidence interval did not include 0, results were considered to be significant. All effects were in a positive direction, indicating benefits of the use of music in intervention. The homogeneity Q value was not significant (p =.83); therefore, results of included studies are considered to be homogeneous and explained by the overall effect size. The significant effect size, combined with the homogeneity of the studies, leads to the conclusion that all music intervention, regardless of purpose or implementation, has been effective for children and adolescents with autism. Included studies are described in terms of type of dependent variables measured; theoretical approach; number of subjects in treatment sessions; participation in and use, selection, and presentation of music; researcher discipline; published or unpublished source; and subject age. Clinical implications as well as recommendations for future research are discussed.",
"title": ""
},
{
"docid": "9798859ddb2d29fa461dab938c5183bb",
"text": "The emergence of the extended manufacturing enterprise, a globally dispersed collection of strategically aligned organizations, has brought new attention to how organizations coordinate the flow of information and materials across their w supply chains. This paper explores and develops the concept of enterprise logistics Greis, N.P., Kasarda, J.D., 1997. Ž . x Enterprise logistics in the information age. California Management Review 39 3 , 55–78 as a tool for integrating the logistics activities both within and between the strategically aligned organizations of the extended enterprise. Specifically, this paper examines the fit between an organization’s enterprise logistics integration capabilities and its supply chain structure. Using a configurations approach, we test whether globally dispersed network organizations that adopt enterprise logistics practices are able to achieve higher levels of organizational performance. Results indicate that enterprise logistics is a necessary tool for the coordination of supply chain operations that are geographically dispersed around the world. However, for a pure network structure, a high level of enterprise logistics integration alone does not guarantee improved organizational performance. The paper ends with a discussion of managerial implications and directions for future research. q 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "111b5bfb34a76b0ea78a0fd58311d31f",
"text": "Wireless micro sensor networks have been identified as one of the most important technologies for the 21st century. This paper traces the history of research in sensor networks over the past three decades, including two important programs of the Defense Advanced Research Projects Agency (DARPA) spanning this period: the Distributed Sensor Networks (DSN) and the Sensor Information Technology (SensIT) programs. Technology trends that impact the development of sensor networks are reviewed and new applications such as infrastructure security, habitat monitoring, and traffic control are presented. Technical challenges in sensor network development include network discovery, control and routing, collaborative signal and information processing, tasking and querying, and security. The paper concludes by presenting some recent research results in sensor network algorithms, including localized algorithms and directed diffusion, distributed tracking in wireless ad hoc networks, and distributed classification using local agents. Keywords— Collaborative signal processing, micro sensors, net-work routing and control, querying and tasking, sensor networks, tracking and classification, wireless networks.",
"title": ""
},
{
"docid": "565f815ef0c1dd5107f053ad39dade20",
"text": "Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.",
"title": ""
},
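To make the "local intensity clustering" idea in the record above more concrete, one way such an energy can be written is sketched below. This is a hedged, generic form and the exact functional in the cited work may differ in its details; K_sigma denotes a localizing kernel, b the bias field, c_i the cluster centroids, and M_i the region membership functions derived from the level set function phi.

```latex
\mathcal{E}(\phi, b, c_1,\dots,c_N) =
\int_{\Omega} \sum_{i=1}^{N}
\left( \int_{\Omega} K_{\sigma}(\mathbf{y}-\mathbf{x})\,
\big| I(\mathbf{x}) - b(\mathbf{y})\, c_i \big|^{2}\, d\mathbf{y} \right)
M_i\big(\phi(\mathbf{x})\big)\, d\mathbf{x}
```

Alternating minimization over phi, b and the c_i then yields the segmentation together with an estimate of the bias field, which is what makes the joint segmentation and bias correction described in the abstract possible.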
{
"docid": "1b777ff8e7c30c23e7cc827ec3aee0bc",
"text": "The task of 2-D articulated human pose estimation in natural images is extremely challenging due to the high level of variation in human appearance. These variations arise from different clothing, anatomy, imaging conditions and the large number of poses it is possible for a human body to take. Recent work has shown state-of-the-art results by partitioning the pose space and using strong nonlinear classifiers such that the pose dependence and multi-modal nature of body part appearance can be captured. We propose to extend these methods to handle much larger quantities of training data, an order of magnitude larger than current datasets, and show how to utilize Amazon Mechanical Turk and a latent annotation update scheme to achieve high quality annotations at low cost. We demonstrate a significant increase in pose estimation accuracy, while simultaneously reducing computational expense by a factor of 10, and contribute a dataset of 10,000 highly articulated poses.",
"title": ""
},
{
"docid": "8f0276f7a902fa02b6236dfc76b882d2",
"text": "Support Vector Machines (SVMs) have successfully shown efficiencies in many areas such as text categorization. Although recommendation systems share many similarities with text categorization, the performance of SVMs in recommendation systems is not acceptable due to the sparsity of the user-item matrix. In this paper, we propose a heuristic method to improve the predictive accuracy of SVMs by repeatedly correcting the missing values in the user-item matrix. The performance comparison to other algorithms has been conducted. The experimental studies show that the accurate rates of our heuristic method are the highest.",
"title": ""
},
{
"docid": "c71a8c9163d6bf294a5224db1ff5c6f5",
"text": "BACKGROUND\nOsteosarcoma is the second most common primary tumor of the skeletal system and the most common primary bone tumor. Usually occurring at the metaphysis of long bones, osteosarcomas are highly aggressive lesions that comprise osteoid-producing spindle cells. Craniofacial osteosarcomas comprise <8% and are believed to be less aggressive and lower grade. Primary osteosarcomas of the skull and skull base comprise <2% of all skull tumors. Osteosarcomas originating from the clivus are rare. We present a case of a primar, high-grade clival osteosarcoma.\n\n\nCASE DESCRIPTION\nA 29-year-old man presented to our institution with a progressively worsening right frontal headache for 3 weeks. There were no sensory or cranial nerve deficits. Computed tomography revealed a destructive mass involving the clivus with extension into the left sphenoid sinus. Magnetic resonance imaging revealed a homogenously enhancing lesion measuring 2.7 × 2.5 × 3.2 cm. The patient underwent endonasal transphenoidal surgery for gross total resection. The histopathologic analysis revealed proliferation of malignant-appearing spindled and epithelioid cells with associated osteoclast-like giant cells and a small area of osteoid production. The analysis was consistent with high-grade osteosarcoma. The patient did well and was discharged on postoperative day 2. He was referred for adjuvant radiation therapy and chemotherapy. Two-year follow-up showed postoperative changes and clival expansion caused by packing material.\n\n\nCONCLUSIONS\nOsteosarcoma is a highly malignant neoplasm. These lesions are usually found in the extremities; however, they may rarely present in the craniofacial region. Clival osteosarcomas are relatively infrequent. We present a case of a primary clival osteosarcoma with high-grade pathology.",
"title": ""
},
{
"docid": "9c534d53d6c52a1559a401b8d2fc9bac",
"text": "The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in justifying the relevance between a query and clicked images, are adopted in image ranking model. However, the existing ranking model cannot integrate visual features, which are efficient in refining the click-based search results. In this paper, we propose a novel ranking model based on the learning to rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large margin structured output learning and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning to rank models based on visual features and user clicks outperforms state-of-the-art algorithms.",
"title": ""
},
{
"docid": "25eea5205d1f8beaa8c4a857da5714bc",
"text": "To backpropagate the gradients through discrete stochastic layers, we encode the true gradients into a multiplication between random noises and the difference of the same function of two different sets of discrete latent variables, which are correlated with these random noises. The expectations of that multiplication over iterations are zeros combined with spikes from time to time. To modulate the frequencies, amplitudes, and signs of the spikes to capture the temporal evolution of the true gradients, we propose the augment-REINFORCE-merge (ARM) estimator that combines data augmentation, the score-function estimator, permutation of the indices of latent variables, and variance reduction for Monte Carlo integration using common random numbers. The ARM estimator provides low-variance and unbiased gradient estimates for the parameters of discrete distributions, leading to state-of-the-art performance in both auto-encoding variational Bayes and maximum likelihood inference, for discrete latent variable models with one or multiple discrete stochastic layers.",
"title": ""
},
{
"docid": "0a17722ba7fbeda51784cdd699f54b3f",
"text": "One of the greatest challenges food research is facing in this century lies in maintaining sustainable food production and at the same time delivering high quality food products with an added functionality to prevent life-style related diseases such as, cancer, obesity, diabetes, heart disease, stroke. Functional foods that contain bioactive components may provide desirable health benefits beyond basic nutrition and play important roles in the prevention of life-style related diseases. Polyphenols and carotenoids are plant secondary metabolites which are well recognized as natural antioxidants linked to the reduction of the development and progression of life-style related diseases. This chapter focuses on healthpromoting food ingredients (polyphenols and carotenoids), food structure and functionality, and bioavailability of these bioactive ingredients, with examples on their commercial applications, namely on functional foods. Thereafter, in order to support successful development of health-promoting food ingredients, this chapter contributes to an understanding of the relationship between food structures, ingredient functionality, in relation to the breakdown of food structures in the gastrointestinal tract and its impact on the bioavailability of bioactive ingredients. The overview on food processing techniques and the processing of functional foods given here will elaborate novel delivery systems for functional food ingredients and their applications in food. Finally, this chapter concludes with microencapsulation techniques and examples of encapsulation of polyphenols and carotenoids; the physical structure of microencapsulated food ingredients and their impacts on food sensorial properties; yielding an outline on the controlled release of encapsulated bioactive compounds in food products.",
"title": ""
},
{
"docid": "8d20b2a4d205684f6353fe710f989fde",
"text": "Financial institutions manage numerous portfolios whose risk must be managed continuously, and the large amounts of data that has to be processed renders this a considerable effort. As such, a system that autonomously detects anomalies in the risk measures of financial portfolios, would be of great value. To this end, the two econometric models ARMA-GARCH and EWMA, and the two machine learning based algorithms LSTM and HTM, were evaluated for the task of performing unsupervised anomaly detection on the streaming time series of portfolio risk measures. Three datasets of returns and Value-at-Risk series were synthesized and one dataset of real-world Value-at-Risk series had labels handcrafted for the experiments in this thesis. The results revealed that the LSTM has great potential in this domain, due to an ability to adapt to different types of time series and for being effective at finding a wide range of anomalies. However, the EWMA had the benefit of being faster and more interpretable, but lacked the ability to capture anomalous trends. The ARMA-GARCH was found to have difficulties in finding a good fit to the time series of risk measures, resulting in poor performance, and the HTM was outperformed by the other algorithms in every regard, due to an inability to learn the autoregressive behaviour of the time series.",
"title": ""
},
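Of the four detectors compared in the record above, the EWMA baseline is simple enough to sketch. The snippet below is an illustrative, generic EWMA-based streaming anomaly flagger, not the thesis' implementation; the smoothing factor, the variance update and the 3-sigma threshold are assumptions made here for the example.

```python
import numpy as np

def ewma_anomaly_flags(series, alpha=0.1, z_thresh=3.0):
    """Flag points whose residual against an EWMA forecast exceeds
    z_thresh EWMA-estimated standard deviations (illustrative sketch)."""
    mean = float(series[0])   # initialize the EWMA level with the first observation
    var = 0.0                 # exponentially weighted estimate of the residual variance
    flags = [False]
    for x in series[1:]:
        resid = x - mean
        std = np.sqrt(var)
        flags.append(bool(std > 0 and abs(resid) > z_thresh * std))
        # update the exponentially weighted mean and variance with the new point
        var = alpha * resid ** 2 + (1.0 - alpha) * var
        mean = alpha * x + (1.0 - alpha) * mean
    return flags
```

Run over a streaming Value-at-Risk series, this flags points that deviate sharply from the recent level; consistent with the thesis' observation about the EWMA, slowly developing anomalous trends are absorbed into the moving estimates and tend to go unflagged.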
{
"docid": "ed08e93061f2d248f6b70fde6e17b431",
"text": "With the rapid growth of e-commerce, the B2C of e-commerce has been a significant issue. The purpose of this study aims to predict consumers’ purchase intentions by integrating trust and perceived risk into the model to empirically examine the impact of key variables. 705 samples were obtained from online users purchasing from e-vendor of Yahoo! Kimo. This study applied the Structural Equation Model to examine consumers’ online shopping based on the Technology Acceptance Model (TAM). The results indicate that perceived ease of use (PEOU), perceived usefulness (PU), trust, and perceived risk significantly impact purchase intentions both directly and indirectly. Moreover, trust significantly reduced online consumer perceived risk during online shopping. This study provides evidence of the relationship between consumers’ purchase intention, perceived trust and perceived risk to websites of specific e-vendors. Such knowledge may help to inform promotion, designing, and advertising website strategies employed by practitioners.",
"title": ""
},
{
"docid": "7f420ef711e271be98c5acd427a2be57",
"text": "The Purchasing Power Parity Debate* Originally propounded by the 16th-century scholars of the University of Salamanca, the concept of purchasing power parity (PPP) was revived in the interwar period in the context of the debate concerning the appropriate level at which to re-establish international exchange rate parities. Broadly accepted as a long-run equilibrium condition in the post-war period, it first was advocated as a short-run equilibrium by many international economists in the first few years following the breakdown of the Bretton Woods system in the early 1970s and then increasingly came under attack on both theoretical and empirical grounds from the late 1970s to the mid 1990s. Accordingly, over the last three decades, a large literature has built up that examines how much the data deviated from theory, and the fruits of this research have provided a deeper understanding of how well PPP applies in both the short run and the long run. Since the mid 1990s, larger datasets and nonlinear econometric methods, in particular, have improved estimation. As deviations narrowed between real exchange rates and PPP, so did the gap narrow between theory and data, and some degree of confidence in long-run PPP began to emerge again. In this respect, the idea of long-run PPP now enjoys perhaps its strongest support in more than 30 years, a distinct reversion in economic thought. JEL Classification: F31 and F41",
"title": ""
},
{
"docid": "92716e900851c637fb60da359caf09a0",
"text": "Litz wire uses complex twisting to balance currents between strands. Most models assume that the twisting works perfectly to accomplish this balancing, and thus are not helpful in choosing the details of the twisting configuration. A complete model that explicitly models the effect of twisting on loss is introduced. Skin effect and proximity effect are each modeled at the level of the individual strands and at each level of the twisting construction. Comparisons with numerical simulations are used to verify the model. The results are useful for making design choices for the twisting configuration and the pitches of each twisting step. Windings with small numbers of turns are the most likely to have significant bundle-level effects that are not captured by conventional models, and are the most important to model and optimize with this approach.",
"title": ""
},
{
"docid": "e055fe2b1f2be90f58828da4cff78c78",
"text": "Probabilistic topic models, which aim to discover latent topics in text corpora define each document as a multinomial distributions over topics and each topic as a multinomial distributions over words. Although, humans can infer a proper label for each topic by looking at top representative words of the topic but, it is not applicable for machines. Automatic Topic Labeling techniques try to address the problem. The ultimate goal of topic labeling techniques are to assign interpretable labels for the learned topics. In this paper, we are taking concepts of ontology into consideration instead of words alone to improve the quality of generated labels for each topic. Our work is different in comparison with the previous efforts in this area, where topics are usually represented with a batch of selected words from topics. We have highlighted some aspects of our approach including: 1) we have incorporated ontology concepts with statistical topic modeling in a unified framework, where each topic is a multinomial probability distribution over the concepts and each concept is represented as a distribution over words; and 2) a topic labeling model according to the meaning of the concepts of the ontology included in the learned topics. The best topic labels are selected with respect to the semantic similarity of the concepts and their ontological categorizations. We demonstrate the effectiveness of considering ontological concepts as richer aspects between topics and words by comprehensive experiments on two different data sets. In another word, representing topics via ontological concepts shows an effective way for generating descriptive and representative labels for the discovered topics. Keywords—Topic modeling; topic labeling; statistical learning; ontologies; linked open data",
"title": ""
}
] |
scidocsrr
|
c0114e72609cd1e0f502e9bcc33c614e
|
Survey on Classification Algorithms for Data Mining:(Comparison and Evaluation)
|
[
{
"docid": "27fd27cf86b68822b3cfb73cff2e2cb6",
"text": "Patients with Liver disease have been continuously increasing because of excessive consumption of alcohol, inhale of harmful gases, intake of contaminated food, pickles and drugs. Automatic classification tools may reduce burden on doctors. This paper evaluates the selected classification algorithms for the classification of some liver patient datasets. The classification algorithms considered here are Naïve Bayes classifier, C4.5, Back propagation Neural Network algorithm, and Support Vector Machines. These algorithms are evaluated based on four criteria: Accuracy, Precision, Sensitivity and Specificity.",
"title": ""
},
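The four evaluation criteria listed in the passage above (accuracy, precision, sensitivity, specificity) all derive from a 2x2 confusion matrix. The sketch below is generic and not tied to the liver-patient datasets; the toy labels are made up.

```python
def binary_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, sensitivity (recall) and specificity from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

# Hypothetical predictions for six patients (1 = liver disease, 0 = healthy).
print(binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
```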
{
"docid": "8d9a02974ad85aa508dc0f7a85a669f1",
"text": "The successful application of data mining in highly visible fields like e-business, marketing and retail has led to its application in other industries and sectors. Among these sectors just discovering is healthcare. The healthcare environment is still „information rich‟ but „knowledge poor‟. There is a wealth of data available within the healthcare systems. However, there is a lack of effective analysis tools to discover hidden relationships and trends in data. This research paper intends to provide a survey of current techniques of knowledge discovery in databases using data mining techniques that are in use in today‟s medical research particularly in Heart Disease Prediction. Number of experiment has been conducted to compare the performance of predictive data mining technique on the same dataset and the outcome reveals that Decision Tree outperforms and some time Bayesian classification is having similar accuracy as of decision tree but other predictive methods like KNN, Neural Networks, Classification based on clustering are not performing well. The second conclusion is that the accuracy of the Decision Tree and Bayesian Classification further improves after applying genetic algorithm to reduce the actual data size to get the optimal subset of attribute sufficient for heart disease prediction.",
"title": ""
}
] |
[
{
"docid": "c1a4da111d6e3496845b4726dfabcb5b",
"text": "A growing number of information technology systems and services are being developed to change users’ attitudes or behavior or both. Despite the fact that attitudinal theories from social psychology have been quite extensively applied to the study of user intentions and behavior, these theories have been developed for predicting user acceptance of the information technology rather than for providing systematic analysis and design methods for developing persuasive software solutions. This article is conceptual and theory-creating by its nature, suggesting a framework for Persuasive Systems Design (PSD). It discusses the process of designing and evaluating persuasive systems and describes what kind of content and software functionality may be found in the final product. It also highlights seven underlying postulates behind persuasive systems and ways to analyze the persuasion context (the intent, the event, and the strategy). The article further lists 28 design principles for persuasive system content and functionality, describing example software requirements and implementations. Some of the design principles are novel. Moreover, a new categorization of these principles is proposed, consisting of the primary task, dialogue, system credibility, and social support categories.",
"title": ""
},
{
"docid": "4d389e4f6e33d9f5498e3071bf116a49",
"text": "This paper reviews the origins and definitions of social capital in the writings of Bourdieu, Loury, and Coleman, among other authors. It distinguishes four sources of social capital and examines their dynamics. Applications of the concept in the sociological literature emphasize its role in social control, in family support, and in benefits mediated by extrafamilial networks. I provide examples of each of these positive functions. Negative consequences of the same processes also deserve attention for a balanced picture of the forces at play. I review four such consequences and illustrate them with relevant examples. Recent writings on social capital have extended the concept from an individual asset to a feature of communities and even nations. The final sections describe this conceptual stretch and examine its limitations. I argue that, as shorthand for the positive consequences of sociability, social capital has a definite place in sociological theory. However, excessive extensions of the concept may jeopardize its heuristic value. Alejandro Portes: Biographical Sketch Alejandro Portes is professor of sociology at Princeton University and faculty associate of the Woodrow Wilson School of Public Affairs. He formerly taught at Johns Hopkins where he held the John Dewey Chair in Arts and Sciences, Duke University, and the University of Texas-Austin. In 1997 he held the Emilio Bacardi distinguished professorship at the University of Miami. In the same year he was elected president of the American Sociological Association. Born in Havana, Cuba, he came to the United States in 1960. He was educated at the University of Havana, Catholic University of Argentina, and Creighton University. He received his MA and PhD from the University of Wisconsin-Madison. 0360-0572/98/0815-0001$08.00 1 A nn u. R ev . S oc io l. 19 98 .2 4: 124 . D ow nl oa de d fr om w w w .a nn ua lr ev ie w s. or g A cc es s pr ov id ed b y St an fo rd U ni ve rs ity M ai n C am pu s R ob er t C ro w n L aw L ib ra ry o n 03 /1 0/ 17 . F or p er so na l u se o nl y. Portes is the author of some 200 articles and chapters on national development, international migration, Latin American and Caribbean urbanization, and economic sociology. His most recent books include City on the Edge, the Transformation of Miami (winner of the Robert Park award for best book in urban sociology and of the Anthony Leeds award for best book in urban anthropology in 1995); The New Second Generation (Russell Sage Foundation 1996); Caribbean Cities (Johns Hopkins University Press); and Immigrant America, a Portrait. The latter book was designated as a centennial publication by the University of California Press. It was originally published in 1990; the second edition, updated and containing new chapters on American immigration policy and the new second generation, was published in 1996.",
"title": ""
},
{
"docid": "18faba65741b6871517c8050aa6f3a45",
"text": "Individuals differ in the manner they approach decision making, namely their decision-making styles. While some people typically make all decisions fast and without hesitation, others invest more effort into deciding even about small things and evaluate their decisions with much more scrutiny. The goal of the present study was to explore the relationship between decision-making styles, perfectionism and emotional processing in more detail. Specifically, 300 college students majoring in social studies and humanities completed instruments designed for assessing maximizing, decision commitment, perfectionism, as well as emotional regulation and control. The obtained results indicate that maximizing is primarily related to one dimension of perfectionism, namely the concern over mistakes and doubts, as well as emotional regulation and control. Furthermore, together with the concern over mistakes and doubts, maximizing was revealed as a significant predictor of individuals' decision commitment. The obtained findings extend previous reports regarding the association between maximizing and perfectionism and provide relevant insights into their relationship with emotional regulation and control. They also suggest a need to further explore these constructs that are, despite their complex interdependence, typically investigated in separate contexts and domains.",
"title": ""
},
{
"docid": "fb6068d738c7865d07999052750ff6a8",
"text": "Malware detection and prevention methods are increasingly becoming necessary for computer systems connected to the Internet. The traditional signature based detection of malware fails for metamorphic malware which changes its code structurally while maintaining functionality at time of propagation. This category of malware is called metamorphic malware. In this paper we dynamically analyze the executables produced from various metamorphic generators through an emulator by tracing API calls. A signature is generated for an entire malware class (each class representing a family of viruses generated from one metamorphic generator) instead of for individual malware sample. We show that most of the metamorphic viruses of same family are detected by the same base signature. Once a base signature for a particular metamorphic generator is generated, all the metamorphic viruses created from that tool are easily detected by the proposed method. A Proximity Index between the various Metamorphic generators has been proposed to determine how similar two or more generators are.",
"title": ""
},
{
"docid": "66e7979aff5860f713dffd10e98eed3d",
"text": "The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation.1",
"title": ""
},
{
"docid": "215aa5e4d0837fe56179de182b1613e0",
"text": "Today Security of data is of foremost importance in today's world. Security has become one of the most important factor in communication and information technology. For this purpose steganography is used. Steganography is the art of hiding secret or sensitive information into digital media like images so as to have secure communication. In this paper we present and discuss LSB (Least Significant Bit) based image steganography and AES",
"title": ""
},
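Least-significant-bit embedding, the core of the technique named in the passage above, can be illustrated in a few lines on a flat list of pixel values. This is a generic sketch, not the paper's system; the cover pixels and message bits are made up, and the AES step is omitted.

```python
def lsb_embed(pixels, message_bits):
    """Hide one message bit in the least significant bit of each pixel."""
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it to the bit
    return stego

def lsb_extract(pixels, n_bits):
    """Read back the first n_bits least significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [52, 55, 61, 66, 70, 61, 64, 73]      # hypothetical grayscale values
secret = [1, 0, 1, 1, 0, 1, 0, 0]
stego = lsb_embed(cover, secret)
assert lsb_extract(stego, len(secret)) == secret
```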
{
"docid": "3af344724ba7a3966968d035727ad705",
"text": "We prove a simple relationship between extended binomial coefficients — natural extensions of the well-known binomial coefficients — and weighted restricted integer compositions. Moreover, we give a very useful interpretation of extended binomial coefficients as representing distributions of sums of independent discrete random variables. We apply our results, e.g., to determine the distribution of the sum of k logarithmically distributed random variables, and to determining the distribution, specifying all moments, of the random variable whose values are part-products of random restricted integer compositions. Based on our findings and using the central limit theorem, we also give generalized Stirling formulae for central extended binomial coefficients. We enlarge the list of known properties of extended binomial coefficients.",
"title": ""
},
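As a worked illustration of the connection stated in the passage above, an extended binomial coefficient can be computed by repeated polynomial multiplication: the coefficients of (1 + x + ... + x^m)^k count ordered sums of k parts from {0, ..., m}, which is also the unnormalised distribution of a sum of k i.i.d. uniform variables on that set. The restriction to parts in {0, ..., m} with unit weights is a simplifying assumption, not the paper's general weighted setting.

```python
def extended_binomial_row(k, m):
    """Coefficients of (1 + x + ... + x^m)^k: the number of ways to write each
    total as an ordered sum of k parts from {0, ..., m}; equivalently, the
    unnormalised distribution of a sum of k i.i.d. uniform variables on {0, ..., m}."""
    row = [1]
    part = [1] * (m + 1)                   # weights of a single part / variable
    for _ in range(k):
        new = [0] * (len(row) + m)
        for i, a in enumerate(row):        # plain polynomial multiplication
            for j, b in enumerate(part):
                new[i + j] += a * b
        row = new
    return row

# (1 + x + x^2)^3 -> 1, 3, 6, 7, 6, 3, 1
print(extended_binomial_row(3, 2))
```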
{
"docid": "d7c7eaae670910f78038e439b1553032",
"text": "Wireless powered communication networks (WPCNs), where multiple energy-limited devices first harvest energy in the downlink and then transmit information in the uplink, have been envisioned as a promising solution for the future Internet-of-Things (IoT). Meanwhile, nonorthogonal multiple access (NOMA) has been proposed to improve the system spectral efficiency (SE) of the fifth-generation (5G) networks by allowing concurrent transmissions of multiple users in the same spectrum. As such, NOMA has been recently considered for the uplink of WPCNs based IoT networks with a massive number of devices. However, simultaneous transmissions in NOMA may also incur more transmit energy consumption as well as circuit energy consumption in practice which is critical for energy constrained IoT devices. As a result, compared to orthogonal multiple access schemes such as time-division multiple access (TDMA), whether the SE can be improved and/or the total energy consumption can be reduced with NOMA in such a scenario still remains unknown. To answer this question, we first derive the optimal time allocations for maximizing the SE of a TDMA-based WPCN (T-WPCN) and a NOMA-based WPCN (N-WPCN), respectively. Subsequently, we analyze the total energy consumption as well as the maximum SE achieved by these two networks. Surprisingly, it is found that N-WPCN not only consumes more energy, but also is less spectral efficient than T-WPCN. Simulation results verify our theoretical findings and unveil the fundamental performance bottleneck, i.e., “worst user bottleneck problem”, in multiuser NOMA systems.",
"title": ""
},
{
"docid": "2dc2b9d60244e819a85b33581800ae56",
"text": "In this study, a simple and effective silver ink formulation was developed to generate silver tracks with high electrical conductivity on flexible substrates at low sintering temperatures. Diethanolamine (DEA), a self-oxidizing compound at moderate temperatures, was mixed with a silver ammonia solution to form a clear and stable solution. After inkjet-printed or pen-written on plastic sheets, DEA in the silver ink decomposes at temperatures higher than 50 °C and generates formaldehyde, which reacts spontaneously with silver ammonia ions to form silver thin films. The electrical conductivity of the inkjet-printed silver films can be 26% of the bulk silver after heating at 75 °C for 20 min and show great adhesion on plastic sheets.",
"title": ""
},
{
"docid": "b4a5ebf335cc97db3790c9e2208e319d",
"text": "We examine whether conservative white males are more likely than are other adults in the U.S. general public to endorse climate change denial. We draw theoretical and analytical guidance from the identityprotective cognition thesis explaining the white male effect and from recent political psychology scholarship documenting the heightened system-justification tendencies of political conservatives. We utilize public opinion data from ten Gallup surveys from 2001 to 2010, focusing specifically on five indicators of climate change denial. We find that conservative white males are significantly more likely than are other Americans to endorse denialist views on all five items, and that these differences are even greater for those conservative white males who self-report understanding global warming very well. Furthermore, the results of our multivariate logistic regression models reveal that the conservative white male effect remains significant when controlling for the direct effects of political ideology, race, and gender as well as the effects of nine control variables. We thus conclude that the unique views of conservative white males contribute significantly to the high level of climate change denial in the United States. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ae5a1d9874b9fd1358d7768936c85491",
"text": "Photoplethysmography (PPG) is a technique that uses light to noninvasively obtain a volumetric measurement of an organ with each cardiac cycle. A PPG-based system emits monochromatic light through the skin and measures the fraction of the light power which is transmitted through a vascular tissue and detected by a photodetector. Part of thereby transmitted light power is modulated by the vascular tissue volume changes due to the blood circulation induced by the heart beating. This modulated light power plotted against time is called the PPG signal. Pulse Oximetry is an empirical technique which allows the arterial blood oxygen saturation (SpO2 – molar fraction) evaluation from the PPG signals. There have been many reports in the literature suggesting that other arterial blood chemical components molar fractions and concentrations can be evaluated from the PPG signals. Most attempts to perform such evaluation on empirical bases have failed, especially for components concentrations. This paper introduces a non-empirical physical model which can be used to analytically investigate the phenomena of PPG signal. Such investigation would result in simplified engineering models, which can be used to design validating experiments and new types of spectroscopic devices with the potential to assess venous and arterial blood chemical composition in both molar fractions and concentrations non-invasively.",
"title": ""
},
{
"docid": "648a1ff0ad5b2742ff54460555287c84",
"text": "In the European academic and institutional debate, interoperability is predominantly seen as a means to enable public administrations to collaborate within Members State and across borders. The article presents a conceptual framework for ICT-enabled governance and analyses the role of interoperability in this regard. The article makes a specific reference to the exploratory research project carried out by the Information Society Unit of the Institute for Prospective Technological Studies (IPTS) of the European Commission’s Joint Research Centre on emerging ICT-enabled governance models in EU cities (EXPGOV). The aim of this project is to study the interplay between ICTs and governance processes at city level and formulate an interdisciplinary framework to assess the various dynamics emerging from the application of ICT-enabled service innovations in European cities. In this regard, the conceptual framework proposed in this article results from an action research perspective and investigation of e-governance experiences carried out in Europe. It aims to elicit the main value drivers that should orient how interoperable systems are implemented, considering the reciprocal influences that occur between these systems and different governance models in their specific context.",
"title": ""
},
{
"docid": "7716fcbb39961666483835e4db1da5b4",
"text": "Software development is a knowledge intensive and collaborative activity. The success of the project totally depends on knowledge and experience of the developers. Increasing knowledge creation and sharing among software engineers are uphill tasks in software development environments. The field of knowledge management has emerged into this field to improve the productivity of the software by effective and efficient knowledge creation, sharing and transferring. In other words, knowledge management for software engineering aims at facilitating knowledge flow and utilization across every phases of a software engineering process. Therefore, adaptation of various knowledge management practices by software engineering organizations is essential. This survey identified the knowledge management involvement in software engineering in different perspectives in the recent literature and guide future research in this area.",
"title": ""
},
{
"docid": "543b79408c3b66476efc66f3a29d1fb0",
"text": "Because of polysemy, distant labeling for information extraction leads to noisy training data. We describe a procedure for reducing this noise by using label propagation on a graph in which the nodes are entity mentions, and mentions are coupled when they occur in coordinate list structures. We show that this labeling approach leads to good performance even when off-the-shelf classifiers are used on the distantly-labeled data.",
"title": ""
},
{
"docid": "b78f935622b143bbbcaff580ba42e35d",
"text": "A churn is defined as the loss of a user in an online social network (OSN). Detecting and analyzing user churn at an early stage helps to provide timely delivery of retention solutions (e.g., interventions, customized services, and better user interfaces) that are useful for preventing users from churning. In this paper we develop a prediction model based on a clustering scheme to analyze the potential churn of users. In the experiment, we test our approach on a real-name OSN which contains data from 77,448 users. A set of 24 attributes is extracted from the data. A decision tree classifier is used to predict churn and non-churn users of the future month. In addition, k-means algorithm is employed to cluster the actual churn users into different groups with different online social networking behaviors. Results show that the churn and nonchurn prediction accuracies of ∼65% and ∼77% are achieved respectively. Furthermore, the actual churn users are grouped into five clusters with distinguished OSN activities and some suggestions of retaining these users are provided.",
"title": ""
},
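The two-stage pipeline described in the passage above (a decision tree for next-month churn prediction, then k-means to group the actual churners) could be sketched with scikit-learn roughly as follows. The feature matrix, labels, tree depth and cluster count here are placeholders, not the paper's 24 attributes or its tuning.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((500, 24))                                    # stand-in for 24 user attributes
y = (X[:, 0] + rng.normal(0, 0.2, 500) < 0.4).astype(int)    # 1 = churned the following month

# Stage 1: predict churn vs. non-churn for the following month.
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

# Stage 2: cluster the actual churners into behaviour groups (five, as in the paper).
churners = X[y == 1]
groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(churners)
print(clf.score(X, y), np.bincount(groups))
```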
{
"docid": "b6c81766443ec1518b7d4d044a86e23d",
"text": "Infusion is part of treatment to get drugs or vitamins into body. This is efficient way to accelerate treatment because it faster while absorbed in body and can avoid impact on digestion. If the dosage given does not match or fluids into the body getting too much, it causing disruption to the patient's health. The main objective of this paper is to provide information on the speed and volume of the infusion that being used by each patient using a photodiode sensor and node.js server can distinguish each incoming data by utilizing topic features on MQTT. Topic feature used to exchange data using ESP8266 identity and the data being sent is the volume and velocity of the infusion. Topics, one of features on MQTT, can be used to manage the data from multiple infusion into the server. Additionally, the system provides warning information of the residual volume and velocity limit when the infusion rate exceeds the normal limit that has been specified by the user.",
"title": ""
},
{
"docid": "80ce0f83ea565a1fb2b80156a3515288",
"text": "Given an image of a street scene in a city, this paper develops a new method that can quickly and precisely pinpoint at which location (as well as viewing direction) the image was taken, against a pre-stored large-scale 3D point-cloud map of the city. We adopt the recently developed 2D-3D direct feature matching framework for this task [23,31,32,42–44]. This is a challenging task especially for large-scale problems. As the map size grows bigger, many 3D points in the wider geographical area can be visually very similar–or even identical–causing severe ambiguities in 2D-3D feature matching. The key is to quickly and unambiguously find the correct matches between a query image and the large 3D map. Existing methods solve this problem mainly via comparing individual features’ visual similarities in a local and per feature manner, thus only local solutions can be found, inadequate for large-scale applications. In this paper, we introduce a global method which harnesses global contextual information exhibited both within the query image and among all the 3D points in the map. This is achieved by a novel global ranking algorithm, applied to a Markov network built upon the 3D map, which takes account of not only visual similarities between individual 2D-3D matches, but also their global compatibilities (as measured by co-visibility) among all matching pairs found in the scene. Tests on standard benchmark datasets show that our method achieved both higher precision and comparable recall, compared with the state-of-the-art.",
"title": ""
},
{
"docid": "30719d273f3966d80335db625792c3b7",
"text": "Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pretrained convnet with minimal setup. Published in the Deep Learning Workshop, 31 st International Conference on Machine Learning, Lille, France, 2015. Copyright 2015 by the author(s).",
"title": ""
},
{
"docid": "cf32d3c7f0562b3bfa2c549ba914f468",
"text": "A novel inverter-output filter, which cannot only filter the differential-mode voltage dv/dt but also suppress the common-mode voltage dv/dt and their rms values, is proposed in this paper. The filter is in combination with a conventional RLC filter and a common-mode transformer. The main advantage is that the functions of filtering a differential-mode voltage and suppressing a common-mode voltage can be integrated into a single system. Furthermore, the structure and design of the proposed filter are rather simple because only passive components are used. Simulations and experiments are conducted to validate the performance of the proposed filter. Both of their results indicate that about 80% of the rms value of the common-mode voltage are suppressed, while the demand of differential-mode voltage filtering is still met",
"title": ""
},
{
"docid": "018b25742275dd628c58208e5bd5a532",
"text": "Multivariate time series (MTS) datasets broadly exist in numerous fields, including health care, multimedia, finance, and biometrics. How to classify MTS accurately has become a hot research topic since it is an important element in many computer vision and pattern recognition applications. In this paper, we propose a Mahalanobis distance-based dynamic time warping (DTW) measure for MTS classification. The Mahalanobis distance builds an accurate relationship between each variable and its corresponding category. It is utilized to calculate the local distance between vectors in MTS. Then we use DTW to align those MTS which are out of synchronization or with different lengths. After that, how to learn an accurate Mahalanobis distance function becomes another key problem. This paper establishes a LogDet divergence-based metric learning with triplet constraint model which can learn Mahalanobis matrix with high precision and robustness. Furthermore, the proposed method is applied on nine MTS datasets selected from the University of California, Irvine machine learning repository and Robert T. Olszewski's homepage, and the results demonstrate the improved performance of the proposed approach.",
"title": ""
}
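The core of the measure described in the passage above, dynamic time warping whose local cost between multivariate frames is a Mahalanobis distance, can be sketched as below. The matrix M is simply passed in here; in the paper it would come from the LogDet metric-learning step, so using the identity matrix in the example is an illustrative stand-in.

```python
import numpy as np

def mahalanobis_dtw(A, B, M):
    """DTW distance between two multivariate series A (n x d) and B (m x d),
    with local cost d(a, b) = sqrt((a - b)^T M (a - b)), M being the Mahalanobis matrix."""
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = A[i - 1] - B[j - 1]
            cost = np.sqrt(diff @ M @ diff)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

A = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]])
B = np.array([[0.0, 1.0], [2.0, 3.0]])
print(mahalanobis_dtw(A, B, np.eye(2)))   # identity M reduces to Euclidean DTW
```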
] |
scidocsrr
|
a2fa08fac825e5f1f2e3e4966f4a504a
|
A randomized, wait-list controlled clinical trial: the effect of a mindfulness meditation-based stress reduction program on mood and symptoms of stress in cancer outpatients.
|
[
{
"docid": "b5360df245a0056de81c89945f581f14",
"text": "The inability to cope successfully with the enormous stress of medical education may lead to a cascade of consequences at both a personal and professional level. The present study examined the short-term effects of an 8-week meditation-based stress reduction intervention on premedical and medical students using a well-controlled statistical design. Findings indicate that participation in the intervention can effectively (1) reduce self-reported state and trait anxiety, (2) reduce reports of overall psychological distress including depression, (3) increase scores on overall empathy levels, and (4) increase scores on a measure of spiritual experiences assessed at termination of intervention. These results (5) replicated in the wait-list control group, (6) held across different experiments, and (7) were observed during the exam period. Future research should address potential long-term effects of mindfulness training for medical and premedical students.",
"title": ""
}
] |
[
{
"docid": "7317713e6725f6541e4197cb02525cd4",
"text": "This survey describes the current state-of-the-art in the development of automated visual surveillance systems so as to provide researchers in the field with a summary of progress achieved to date and to identify areas where further research is needed. The ability to recognise objects and humans, to describe their actions and interactions from information acquired by sensors is essential for automated visual surveillance. The increasing need for intelligent visual surveillance in commercial, law enforcement and military applications makes automated visual surveillance systems one of the main current application domains in computer vision. The emphasis of this review is on discussion of the creation of intelligent distributed automated surveillance systems. The survey concludes with a discussion of possible future directions.",
"title": ""
},
{
"docid": "1e4292950f907d26b27fa79e1e8fa41f",
"text": "All over the world every business and profit earning firm want to make their consumer loyal. There are many factors responsible for this customer loyalty but two of them are prominent. This research study is focused on that how customer satisfaction and customer retention contribute towards customer loyalty. For analysis part of this study, Universities students of Peshawar Region were targeted. A sample of 120 were selected from three universities of Peshawar. These universities were Preston University, Sarhad University and City University of Science and Information technology. Analysis was conducted with the help of SPSS 19. Results of the study shows that customer loyalty is more dependent upon Customer satisfaction in comparison of customer retention. Customer perceived value and customer perceived quality are the major factors which contribute for the customer loyalty of Universities students for mobile handsets.",
"title": ""
},
{
"docid": "5a97d79641f7006d7b5d0decd3a7ad3e",
"text": "We present a cognitive model of inducing verb selectional preferences from individual verb usages. The selectional preferences for each verb argument are represented as a probability distribution over the set of semantic properties that the argument can possess—asemantic profile . The semantic profiles yield verb-specific conceptualizations of the arguments associated with a syntactic position. The proposed model can learn appropriate verb profiles from a small set of noisy training data, and can use them in simulating human plausibility judgments and analyzing implicit object alternation.",
"title": ""
},
{
"docid": "9b010450862f5b3b73273028242db8ad",
"text": "A number of mechanisms ensure that the intestine is protected from pathogens and also against our own intestinal microbiota. The outermost of these is the secreted mucus, which entraps bacteria and prevents their translocation into the tissue. Mucus contains many immunomodulatory molecules and is largely produced by the goblet cells. These cells are highly responsive to the signals they receive from the immune system and are also able to deliver antigens from the lumen to dendritic cells in the lamina propria. In this Review, we will give a basic overview of mucus, mucins and goblet cells, and explain how each of these contributes to immune regulation in the intestine.",
"title": ""
},
{
"docid": "87a319361ad48711eff002942735258f",
"text": "This paper describes an innovative principle for climbing obstacles with a two-axle and four-wheel robot with articulated frame. It is based on axle reconfiguration while ensuring permanent static stability. A simple example is demonstrated based on the OpenWHEEL platform with a serial mechanism connecting front and rear axles of the robot. A generic tridimensional multibody simulation is provided with Adams software. It permits to validate the concept and to get an approach of control laws for every type of inter-axle mechanism. This climbing principle permits to climb obstacles as high as the wheel while keeping energetic efficiency of wheel propulsion and using only one supplemental actuator. Applications to electric wheelchairs, quads and all terrain vehicles (ATV) are envisioned",
"title": ""
},
{
"docid": "f9ffe3af3a2f604efb6bde83f519f55c",
"text": "BIA is easy, non-invasive, relatively inexpensive and can be performed in almost any subject because it is portable. Part II of these ESPEN guidelines reports results for fat-free mass (FFM), body fat (BF), body cell mass (BCM), total body water (TBW), extracellular water (ECW) and intracellular water (ICW) from various studies in healthy and ill subjects. The data suggests that BIA works well in healthy subjects and in patients with stable water and electrolytes balance with a validated BIA equation that is appropriate with regard to age, sex and race. Clinical use of BIA in subjects at extremes of BMI ranges or with abnormal hydration cannot be recommended for routine assessment of patients until further validation has proven for BIA algorithm to be accurate in such conditions. Multi-frequency- and segmental-BIA may have advantages over single-frequency BIA in these conditions, but further validation is necessary. Longitudinal follow-up of body composition by BIA is possible in subjects with BMI 16-34 kg/m(2) without abnormal hydration, but must be interpreted with caution. Further validation of BIA is necessary to understand the mechanisms for the changes observed in acute illness, altered fat/lean mass ratios, extreme heights and body shape abnormalities.",
"title": ""
},
{
"docid": "10dc52289ed1ea2f9ae6a6afd7299492",
"text": "This work proposes a potentiostat circuit for multiple implantable sensor applications. Implantable sensors play a vital role in continuous in situ monitoring of biological phenomena in a real-time health care monitoring system. In the proposed work a three-electrode based electrochemical sensing system has been employed. In this system a fixed potential difference between the working and the reference electrodes is maintained using a potentiostat to generate a current signal in the counter electrode which is proportional to the concentration of the analyte. This potential difference between the working and the reference electrodes can be changed to detect different analytes. The designed low power potentiostat consumes only 66 µW with 2.5 volt power supply which is highly suitable for low-power implantable sensor applications. All the circuits are designed and fabricated in a 0.35-micron standard CMOS process.",
"title": ""
},
{
"docid": "a667360d5214a47efee3326536a95527",
"text": "In this paper we propose a method for automatic color extraction and indexing to support color queries of image and video databases. This approach identifies the regions within images that contain colors from predetermined color sets. By searching over a large number of color sets, a color index for the database is created in a fashion similar to that for file inversion. This allows very fast indexing of the image collection by color contents of the images. Furthermore, information about the identified regions, such as the color set, size, and location, enables a rich variety of queries that specify both color content and spatial relationships of regions. We present the single color extraction and indexing method and contrast it to other color approaches. We examine single and multiple color extraction and image query on a database of 3000 color images.",
"title": ""
},
{
"docid": "d5284538412222101f084fee2dc1acc4",
"text": "The hand is an integral component of the human body, with an incredible spectrum of functionality. In addition to possessing gross and fine motor capabilities essential for physical survival, the hand is fundamental to social conventions, enabling greeting, grooming, artistic expression and syntactical communication. The loss of one or both hands is, thus, a devastating experience, requiring significant psychological support and physical rehabilitation. The majority of hand amputations occur in working-age males, most commonly as a result of work-related trauma or as casualties sustained during combat. For millennia, humans have used state-of-the-art technology to design clever devices to facilitate the reintegration of hand amputees into society. The present article provides a historical overview of the progress in replacing a missing hand, from early iron hands intended primarily for use in battle, to today's standard body-powered and myoelectric prostheses, to revolutionary advancements in the restoration of sensorimotor control with targeted reinnervation and hand transplantation.",
"title": ""
},
{
"docid": "86f82b7fc89fa5132f9784296a322e8c",
"text": "The Developmental Eye Movement Test (DEM) is a standardized test for evaluating saccadic eye movements in children. An adult version, the Adult Developmental Eye Movement Test (A-DEM), was recently developed for Spanish-speaking adults ages 14 to 68. No version yet exists for adults over the age of 68 and normative studies for English-speaking adults are absent. However, it is not clear if the single-digit format of the DEM or the double-digit A-DEM format should be used for further test develop-",
"title": ""
},
{
"docid": "c4f6edd01cee1e44a00eca11a086a284",
"text": "In this paper we investigate the effectiveness of Recurrent Neural Networks (RNNs) in a top-N content-based recommendation scenario. Specifically, we propose a deep architecture which adopts Long Short Term Memory (LSTM) networks to jointly learn two embeddings representing the items to be recommended as well as the preferences of the user. Next, given such a representation, a logistic regression layer calculates the relevance score of each item for a specific user and we returns the top-N items as recommendations.\n In the experimental session we evaluated the effectiveness of our approach against several baselines: first, we compared it to other shallow models based on neural networks (as Word2Vec and Doc2Vec), next we evaluated it against state-of-the-art algorithms for collaborative filtering. In both cases, our methodology obtains a significant improvement over all the baselines, thus giving evidence of the effectiveness of deep learning techniques in content-based recommendation scenarios and paving the way for several future research directions.",
"title": ""
},
{
"docid": "30e89edb65cbf54b27115c037ee9c322",
"text": "AbstructIGBT’s are available with short-circuit withstand times approaching those of bipolar transistors. These IGBT’s can therefore be protected by the same relatively slow-acting circuitry. The more efficient IGBT’s, however, have lower shortcircuit withstand times. While protection of these types of IGBT’s is not difficult, it does require a reassessment of the traditional protection methods used for the bipolar transistors. An in-depth discussion on the behavior of IGBT’s under different short-circuit conditions is carried out and the effects of various parameters on permissible short-circuit time are analyzed. The paper also rethinks the problem of providing short-circuit protection in relation to the special characteristics of the most efficient IGBT’s. The pros and cons of some of the existing protection circuits are discussed and, based on the recommendations, a protection scheme is implemented to demonstrate that reliable short-circuit protection of these types of IGBT’s can be achieved without difficulty in a PWM motor-drive application. volts",
"title": ""
},
{
"docid": "229cdcef4b7a28b73d4bde192ad0cb53",
"text": "The problem of anomaly detection is a critical topic across application domains and is the subject of extensive research. Applications include finding frauds and intrusions, warning on robot safety, and many others. Standard approaches in this field exploit simple or complex system models, created by experts using detailed domain knowledge. In this paper, we put forth a statistics-based anomaly detector motivated by the fact that anomalies are sparse by their very nature. Powerful sparsity directed algorithms—namely Robust Principal Component Analysis and the Group Fused LASSO—form the basis of the methodology. Our novel unsupervised single-step solution imposes a convex optimisation task on the vector time series data of the monitored system by employing group-structured, switching and robust regularisation techniques. We evaluated our method on data generated by using a Baxter robot arm that was disturbed randomly by a human operator. Our procedure was able to outperform two baseline schemes in terms of F1 score. Generalisations to more complex dynamical scenarios are desired.",
"title": ""
},
{
"docid": "8925f16c563e3f7ab666efe58076ee59",
"text": "An incomplete method for solving the propositional satisfiability problem (or a general constraint satisfaction problem) is one that does not provide the guarantee that it will eventually either report a satisfying assignment or declare that the given formula is unsatisfiable. In practice, most such methods are biased towards the satisfiable side: they are typically run with a pre-set resource limit, after which they either produce a valid solution or report failure; they never declare the formula to be unsatisfiable. These are the kind of algorithms we will discuss in this chapter. In complexity theory terms, such algorithms are referred to as having one-sided error. In principle, an incomplete algorithm could instead be biased towards the unsatisfiable side, always providing proofs of unsatisfiability but failing to find solutions to some satisfiable instances, or be incomplete with respect to both satisfiable and unsatisfiable instances (and thus have two-sided error). Unlike systematic solvers often based on an exhaustive branching and backtracking search, incomplete methods are generally based on stochastic local search, sometimes referred to as SLS. On problems from a variety of domains, such incomplete methods for SAT can significantly outperform DPLL-based methods. Since the early 1990’s, there has been a tremendous amount of research on designing, understanding, and improving local search methods for SAT. There have also been attempts at hybrid approaches that explore combining ideas from DPLL methods and local search techniques [e.g. 39, 68, 84, 88]. We cannot do justice to all recent research in local search solvers for SAT, and will instead try to provide a brief overview and touch upon some interesting details. The interested reader is encouraged to further explore the area through some of the nearly a hundred publications we cite along the way. We begin the chapter by discussing two methods that played a key role in the success of local search for satisfiability, namely GSAT [98] and Walksat [95]. We will then discuss some extensions of these ideas, in particular clause weighting",
"title": ""
},
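A compact Walksat-style local search, of the kind surveyed in the passage above, is sketched below. Clauses are lists of signed integers in DIMACS style; the noise probability, flip budget and example formula are illustrative choices, and this variant scores greedy moves by the total number of unsatisfied clauses rather than exact break counts.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """Stochastic local search for SAT: repeatedly pick an unsatisfied clause and
    flip either a random variable in it (with probability p) or the variable
    whose flip leaves the fewest clauses unsatisfied."""
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                       # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))       # noise step: random literal in the clause
        else:
            def score(v):                       # clauses left unsatisfied if v is flipped
                assign[v] = not assign[v]
                s = sum(1 for c in clauses if not any(sat(l) for l in c))
                assign[v] = not assign[v]
                return s
            var = min((abs(l) for l in clause), key=score)
        assign[var] = not assign[var]
    return None                                 # incomplete: no proof of unsatisfiability

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(walksat([[1, -2], [2, 3], [-1, -3]], 3))
```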
{
"docid": "f6ea3edc8116110d7591562f3c1d97ca",
"text": "Feature selection is an important task for data analysis and information retrieval processing, pattern classification systems, and data mining applications. It reduces the number of features by removing noisy, irrelevant and redundant data. In this paper, a novel feature selection algorithm based on Ant Colony Optimization (ACO), called Advanced Binary ACO (ABACO), is presented. Features are treated as graph nodes to construct a graph model and are fully connected to each other. In this graph, each node has two sub-nodes, one for selecting and the other for deselecting the feature. Ant colony algorithm is used to select nodes while ants should visit all features. The use of several statistical measures is examined as the heuristic function for visibility of the edges in the graph. At the end of a tour, each ant has a binary vector with the same length as the number of features, where 1 implies selecting and 0 implies deselecting the corresponding feature. The performance of proposed algorithm is compared to the performance of Binary Genetic Algorithm (BGA), Binary Particle Swarm Optimization (BPSO), CatfishBPSO, Improved Binary Gravitational Search Algorithm (IBGSA), and some prominent ACO-based algorithms on the task of feature selection on 12 well-known UCI datasets. Simulation results verify that the algorithm provides a suitable feature subset with good classification accuracy using a smaller feature set than competing feature selection methods. KeywordsFeature selection; Wrraper; Ant colony optimization (ACO); Binary ACO; Classification.",
"title": ""
},
{
"docid": "08d1a9f3edc449ff08b45caaaf56f6ad",
"text": "Despite the theoretical and demonstrated empirical significance of parental coping strategies for the wellbeing of families of children with disabilities, relatively little research has focused explicitly on coping in mothers and fathers of children with autism. In the present study, 89 parents of preschool children and 46 parents of school-age children completed a measure of the strategies they used to cope with the stresses of raising their child with autism. Factor analysis revealed four reliable coping dimensions: active avoidance coping, problem-focused coping, positive coping, and religious/denial coping. Further data analysis suggested gender differences on the first two of these dimensions but no reliable evidence that parental coping varied with the age of the child with autism. Associations were also found between coping strategies and parental stress and mental health. Practical implications are considered including reducing reliance on avoidance coping and increasing the use of positive coping strategies.",
"title": ""
},
{
"docid": "34641057a037740ec28581a798c96f05",
"text": "Vehicles are becoming complex software systems with many components and services that need to be coordinated. Service oriented architectures can be used in this domain to support intra-vehicle, inter-vehicles, and vehicle-environment services. Such architectures can be deployed on different platforms, using different communication and coordination paradigms. We argue that practical solutions should be hybrid: they should integrate and support interoperability of different paradigms. We demonstrate the concept by integrating Jini, the service-oriented technology we used within the vehicle, and JXTA, the peer to peer infrastructure we used to support interaction with the environment through a gateway service, called J2J. Initial experience with J2J is illustrated.",
"title": ""
},
{
"docid": "3a011bdec6531de3f0f9718f35591e52",
"text": "Since Markowitz (1952) formulated the portfolio selection problem, many researchers have developed models aggregating simultaneously several conflicting attributes such as: the return on investment, risk and liquidity. The portfolio manager generally seeks the best combination of stocks/assets that meets his/ her investment objectives. The Goal Programming (GP) model is widely applied to finance and portfolio management. The aim of this paper is to present the different variants of the GP model that have been applied to the financial portfolio selection problem from the 1970s to nowadays. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ee63ca73151e24ee6f0543b0914a3bb6",
"text": "The aim of this study was to investigate whether different aspects of morality predict traditional bullying and cyberbullying behaviour in a similar way. Students between 12 and 19 years participated in an online study. They reported on the frequency of different traditional and cyberbullying behaviours and completed self-report measures on moral emotions and moral values. A scenario approach with open questions was used to assess morally disengaged justifications. Tobit regressions indicated that a lack of moral values and a lack of remorse predicted both traditional and cyberbullying behaviour. Traditional bullying was strongly predictive for cyberbullying. A lack of moral emotions and moral values predicted cyberbullying behaviour even when controlling for traditional bUllying. Morally disengaged justifications were only predictive for traditional, but not for cyberbullying behaviour. The findings show that moral standards and moral affect are important to understand individual differences in engagement in both traditional and cyberforms of bUllying.",
"title": ""
},
{
"docid": "215b02216c68ba6eb2d040e8e01c1ac1",
"text": "Numerous companies are expecting their knowledge management (KM) to be performed effectively in order to leverage and transform the knowledge into competitive advantages. However, here raises a critical issue of how companies can better evaluate and select a favorable KM strategy prior to a successful KM implementation. The KM strategy selection is a kind of multiple criteria decision-making (MCDM) problem, which requires considering a large number of complex factors as multiple evaluation criteria. A robust MCDM method should consider the interactions among criteria. The analytic network process (ANP) is a relatively new MCDM method which can deal with all kinds of interactions systematically. Moreover, the Decision Making Trial and Evaluation Laboratory (DEMATEL) not only can convert the relations between cause and effect of criteria into a visual structural model, but also can be used as a way to handle the inner dependences within a set of criteria. Hence, this paper proposes an effective solution based on a combined ANP and DEMATEL approach to help companies that need to evaluate and select KM strategies. Additionally, an empirical study is presented to illustrate the application of the proposed method. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
8c76ce484cc5893192ff4bb375ba662e
|
Analysis of Docker Security
|
[
{
"docid": "7f06370a81e7749970cd0359c5b5f993",
"text": "The use of virtualization technologies in high performance computing (HPC) environments has traditionally been avoided due to their inherent performance overhead. However, with the rise of container-based virtualization implementations, such as Linux VServer, OpenVZ and Linux Containers (LXC), it is possible to obtain a very low overhead leading to near-native performance. In this work, we conducted a number of experiments in order to perform an in-depth performance evaluation of container-based virtualization for HPC. We also evaluated the trade-off between performance and isolation in container-based virtualization systems and compared them with Xen, which is a representative of the traditional hypervisor-based virtualization systems used today.",
"title": ""
}
] |
[
{
"docid": "19c6f2b03624f41acc5fb060bff04c64",
"text": "Estimation of binocular disparity in vision systems is typically based on a matching pipeline and rectification. Estimation of disparity in the brain, in contrast, is widely assumed to be based on the comparison of local phase information from binocular receptive fields. The classic binocular energy model shows that this requires the presence of local quadrature pairs within the eye which show phaseor position-shifts across the eyes. While numerous theoretical accounts of stereopsis have been based on these observations, there has been little work on how energy models and depth inference may emerge through learning from the statistics of image pairs. Here, we describe a probabilistic, deep learning approach to modeling disparity and a methodology for generating binocular training data to estimate model parameters. We show that within-eye quadrature filters occur as a result of fitting the model to data, and we demonstrate how a three-layer network can learn to infer depth entirely from training data. We also show how training energy models can provide depth cues that are useful for recognition. We also show that pooling over more than two filters leads to richer dependencies between the learned filters.",
"title": ""
},
{
"docid": "866fd6d60fc835080dff69f6143348fd",
"text": "In this paper we consider the problem of classifying shapes within a given category (e.g., chairs) into finer-grained classes (e.g., chairs with arms, rocking chairs, swivel chairs). We introduce a multi-label (i.e., shapes can belong to multiple classes) semi-supervised approach that takes as input a large shape collection of a given category with associated sparse and noisy labels, and outputs cleaned and complete labels for each shape. The key idea of the proposed approach is to jointly learn a distance metric for each class which captures the underlying geometric similarity within that class, e.g., the distance metric for swivel chairs evaluates the global geometric resemblance of chair bases. We show how to achieve this objective by first geometrically aligning the input shapes, and then learning the class-specific distance metrics by exploiting the feature consistency provided by this alignment. The learning objectives consider both labeled data and the mutual relations between the distance metrics. Given the learned metrics, we apply a graph-based semi-supervised classification technique to generate the final classification results.\n In order to evaluate the performance of our approach, we have created a benchmark data set where each shape is provided with a set of ground truth labels generated by Amazon's Mechanical Turk users. The benchmark contains a rich variety of shapes in a number of categories. Experimental results show that despite this variety, given very sparse and noisy initial labels, the new method yields results that are superior to state-of-the-art semi-supervised learning techniques.",
"title": ""
},
{
"docid": "2a600bc7d6e35335e1514597aa4c7a79",
"text": "Since the 2000s, Business Process Management (BPM) has evolved into a comprehensively studied discipline that goes beyond the boundaries of particular business processes. By also affecting enterprise-wide capabilities (such as an organisational culture and structure that support a processoriented way of working), BPM can now correctly be called Business Process Orientation (BPO). Meanwhile, various maturity models have been developed to help organisations adopt a processoriented way of working based on step-by-step best practices. The present article reports on a case study in which the process portfolio of an organisation is assessed by different maturity models that each cover a different set of process-oriented capabilities. The purpose is to reflect on how business process maturity is currently measured, and to explore relevant considerations for practitioners, scholars and maturity model designers. Therefore, we investigate a possible difference in maturity scores that are obtained based on model-related characteristics (e.g. capabilities, scale and calculation technique) and respondent-related characteristics (e.g. organisational function). For instance, based on an experimental design, the original maturity scores are recalculated for different maturity scales and different calculation techniques. Follow-up research can broaden our experiment from multiple maturity models in a single case to multiple maturity models in multiple cases.",
"title": ""
},
{
"docid": "ce94ff17f677b6c2c6c81295fa53b8df",
"text": "The Information Artifact Ontology (IAO) was created to serve as a domain‐neutral resource for the representation of types of information content entities (ICEs) such as documents, data‐bases, and digital im‐ ages. We identify a series of problems with the current version of the IAO and suggest solutions designed to advance our understanding of the relations between ICEs and associated cognitive representations in the minds of human subjects. This requires embedding IAO in a larger framework of ontologies, including most importantly the Mental Func‐ tioning Ontology (MFO). It also requires a careful treatment of the aboutness relations between ICEs and associated cognitive representa‐ tions and their targets in reality.",
"title": ""
},
{
"docid": "66363a46aa21f982d5934ff7a88efa6f",
"text": "Ensuring that organizational IT is in alignment with and provides support for an organization’s business strategy is critical to business success. Despite this, business strategy and strategic alignment issues are all but ignored in the requirements engineering research literature. We present B-SCP, a requirements engineering framework for organizational IT that directly addresses an organization’s business strategy and the alignment of IT requirements with that strategy. B-SCP integrates the three themes of strategy, context, and process using a requirements engineering notation for each theme. We demonstrate a means of cross-referencing and integrating the notations with each other, enabling explicit traceability between business processes and business strategy. In addition, we show a means of defining requirements problem scope as a Jackson problem diagram by applying a business modeling framework. Our approach is illustrated via application to an exemplar. The case example demonstrates the feasibility of B-SCP, and we present a comparison with other approaches. q 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e85397e0dbb7862fd292da4d0c61c6de",
"text": "Summary\nCrocoBLAST is a tool for dramatically speeding up BLAST+ execution on any computer. Alignments that would take days or weeks with NCBI BLAST+ can be run overnight with CrocoBLAST. Additionally, CrocoBLAST provides features critical for NGS data analysis, including: results identical to those of BLAST+; compatibility with any BLAST+ version; real-time information regarding calculation progress and remaining run time; access to partial alignment results; queueing, pausing, and resuming BLAST+ calculations without information loss.\n\n\nAvailability and implementation\nCrocoBLAST is freely available online, with ample documentation (webchem.ncbr.muni.cz/Platform/App/CrocoBLAST). No installation or user registration is required. CrocoBLAST is implemented in C, while the graphical user interface is implemented in Java. CrocoBLAST is supported under Linux and Windows, and can be run under Mac OS X in a Linux virtual machine.\n\n\nContact\[email protected].\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online.",
"title": ""
},
{
"docid": "de46cdbdfbf866c56950f62ef4f489e0",
"text": "BACKGROUND\nComputational methods have been used to find duplicate biomedical publications in MEDLINE. Full text articles are becoming increasingly available, yet the similarities among them have not been systematically studied. Here, we quantitatively investigated the full text similarity of biomedical publications in PubMed Central.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\n72,011 full text articles from PubMed Central (PMC) were parsed to generate three different datasets: full texts, sections, and paragraphs. Text similarity comparisons were performed on these datasets using the text similarity algorithm eTBLAST. We measured the frequency of similar text pairs and compared it among different datasets. We found that high abstract similarity can be used to predict high full text similarity with a specificity of 20.1% (95% CI [17.3%, 23.1%]) and sensitivity of 99.999%. Abstract similarity and full text similarity have a moderate correlation (Pearson correlation coefficient: -0.423) when the similarity ratio is above 0.4. Among pairs of articles in PMC, method sections are found to be the most repetitive (frequency of similar pairs, methods: 0.029, introduction: 0.0076, results: 0.0043). In contrast, among a set of manually verified duplicate articles, results are the most repetitive sections (frequency of similar pairs, results: 0.94, methods: 0.89, introduction: 0.82). Repetition of introduction and methods sections is more likely to be committed by the same authors (odds of a highly similar pair having at least one shared author, introduction: 2.31, methods: 1.83, results: 1.03). There is also significantly more similarity in pairs of review articles than in pairs containing one review and one nonreview paper (frequency of similar pairs: 0.0167 and 0.0023, respectively).\n\n\nCONCLUSION/SIGNIFICANCE\nWhile quantifying abstract similarity is an effective approach for finding duplicate citations, a comprehensive full text analysis is necessary to uncover all potential duplicate citations in the scientific literature and is helpful when establishing ethical guidelines for scientific publications.",
"title": ""
},
{
"docid": "95633e39a6f1dee70317edfc56e248f4",
"text": "We construct a deep portfolio theory. By building on Markowitz’s classic risk-return trade-off, we develop a self-contained four-step routine of encode, calibrate, validate and verify to formulate an automated and general portfolio selection process. At the heart of our algorithm are deep hierarchical compositions of portfolios constructed in the encoding step. The calibration step then provides multivariate payouts in the form of deep hierarchical portfolios that are designed to target a variety of objective functions. The validate step trades-off the amount of regularization used in the encode and calibrate steps. The verification step uses a cross validation approach to trace out an ex post deep portfolio efficient frontier. We demonstrate all four steps of our portfolio theory numerically.",
"title": ""
},
{
"docid": "f194075ba0a5cf69d9bba9e127ed29bb",
"text": "Let's start from scratch in thinking about what memory is for, and consequently, how it works. Suppose that memory and conceptualization work in the service of perception and action. In this case, conceptualization is the encoding of patterns of possible physical interaction with a three-dimensional world. These patterns are constrained by the structure of the environment, the structure of our bodies, and memory. Thus, how we perceive and conceive of the environment is determined by the types of bodies we have. Such a memory would not have associations. Instead, how concepts become related (and what it means to be related) is determined by how separate patterns of actions can be combined given the constraints of our bodies. I call this combination \"mesh.\" To avoid hallucination, conceptualization would normally be driven by the environment, and patterns of action from memory would play a supporting, but automatic, role. A significant human skill is learning to suppress the overriding contribution of the environment to conceptualization, thereby allowing memory to guide conceptualization. The effort used in suppressing input from the environment pays off by allowing prediction, recollective memory, and language comprehension. I review theoretical work in cognitive science and empirical work in memory and language comprehension that suggest that it may be possible to investigate connections between topics as disparate as infantile amnesia and mental-model theory.",
"title": ""
},
{
"docid": "42e53bc5c8fe1a2305b37687ea5c07c8",
"text": "The critical commentary by Reimers et al. [1] regarding the Penrose–Hameroff theory of ‘orchestrated objective reduction’ (‘Orch OR’) is largely uninformed and basically incorrect, as they solely criticize non-existent features of Orch OR, and ignore (1) actual Orch OR features, (2) supportive evidence, and (3) previous answers to their objections (Section 5.6 in our review [2]). Here we respond point-by-point to the issues they raise.",
"title": ""
},
{
"docid": "a1f007cf016e177de7b123c624391277",
"text": "Dental disease is among the most common causes for chinchillas and degus to present to veterinarians. Most animals with dental disease present with weight loss, reduced food intake/anorexia, and drooling. Degus commonly present with dyspnea. Dental disease has been primarily referred to as elongation and malocclusion of the cheek teeth. Periodontal disease, caries, and tooth resorption are common diseases in chinchillas, but are missed frequently during routine intraoral examination, even performed under general anesthesia. A diagnostic evaluation, including endoscopy-guided intraoral examination and diagnostic imaging of the skull, is necessary to detect oral disorders and to perform the appropriate therapy.",
"title": ""
},
{
"docid": "3b04e1e9550e5d6e9418ff955152d167",
"text": "This short report describes an automated BWAPI-based script developed for live streams of a StarCraft Brood War bot tournament, SSCAIT. The script controls the in-game camera in order to follow the relevant events and improve the viewer experience. We enumerate its novel features and provide a few implementation notes.",
"title": ""
},
{
"docid": "a85e4925e82baf96f507494c91126361",
"text": "Contractile myocytes provide a test of the hypothesis that cells sense their mechanical as well as molecular microenvironment, altering expression, organization, and/or morphology accordingly. Here, myoblasts were cultured on collagen strips attached to glass or polymer gels of varied elasticity. Subsequent fusion into myotubes occurs independent of substrate flexibility. However, myosin/actin striations emerge later only on gels with stiffness typical of normal muscle (passive Young's modulus, E approximately 12 kPa). On glass and much softer or stiffer gels, including gels emulating stiff dystrophic muscle, cells do not striate. In addition, myotubes grown on top of a compliant bottom layer of glass-attached myotubes (but not softer fibroblasts) will striate, whereas the bottom cells will only assemble stress fibers and vinculin-rich adhesions. Unlike sarcomere formation, adhesion strength increases monotonically versus substrate stiffness with strongest adhesion on glass. These findings have major implications for in vivo introduction of stem cells into diseased or damaged striated muscle of altered mechanical composition.",
"title": ""
},
{
"docid": "2ac1d3ce029f547213c122c0e84650b2",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/class#fall2012/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. (5) Please indicate the submission time and number of late dates clearly in your submission. SCPD students: Please email your solutions to [email protected] with the subject line \" Problem Set 2 Submission \". The first page of your submission should be the homework routing form, which can be found on the SCPD website. Your submission (including the routing form) must be a single pdf file, or we may not be able to grade it. If you are writing your solutions out by hand, please write clearly and in a reasonably large font using a dark pen to improve legibility. 1. [15 points] Constructing kernels In class, we saw that by choosing a kernel K(x, z) = φ(x) T φ(z), we can implicitly map data to a high dimensional space, and have the SVM algorithm work in that space. One way to generate kernels is to explicitly define the mapping φ to a higher dimensional space, and then work out the corresponding K. However in this question we are interested in direct construction of kernels. I.e., suppose we have a function K(x, z) that we think gives an appropriate similarity measure for our learning problem, and we are considering plugging K into the SVM as the kernel function. However for K(x, z) to be a valid kernel, it must correspond to an inner product in some higher dimensional space resulting from some feature mapping φ. Mercer's theorem tells us that K(x, z) is a (Mercer) kernel if and only if for any finite set {x (1) ,. .. , x (m) }, the matrix K is symmetric and positive semidefinite, where the square matrix K ∈ R m×m is given by K ij = K(x (i) , x (j)). Now here comes the question: Let K 1 , K 2 be kernels …",
"title": ""
},
{
"docid": "22c85072db1f5b5a51b69fcabf01eb5e",
"text": "Websites’ and mobile apps’ privacy policies, written in natural language, tend to be long and difficult to understand. Information privacy revolves around the fundamental principle of notice and choice, namely the idea that users should be able to make informed decisions about what information about them can be collected and how it can be used. Internet users want control over their privacy, but their choices are often hidden in long and convoluted privacy policy documents. Moreover, little (if any) prior work has been done to detect the provision of choices in text. We address this challenge of enabling user choice by automatically identifying and extracting pertinent choice language in privacy policies. In particular, we present a two-stage architecture of classification models to identify opt-out choices in privacy policy text, labelling common varieties of choices with a mean F1 score of 0.735. Our techniques enable the creation of systems to help Internet users to learn about their choices, thereby effectuating notice and choice and improving Internet privacy.",
"title": ""
},
{
"docid": "b66e878b1d907c684637bf308ee9fd3f",
"text": "The search for free parking places is a promising application for vehicular ad hoc networks (VANETs). In order to guide drivers to a free parking place at their destination, it is necessary to estimate the occupancy state of the parking lots within the destination area at time of arrival. In this paper, we present a model to predict parking lot occupancy based on information exchanged among vehicles. In particular, our model takes the age of received parking lot information and the time needed to arrive at a certain parking lot into account and estimates the future parking situation at time of arrival. It is based on queueing theory and uses a continuous-time homogeneous Markov model. We have evaluated the model in a simulation study based on a detailed model of the city of Brunswick, Germany.",
"title": ""
},
{
"docid": "429abd1e12826273b7f4c1561f438911",
"text": "Recently, spin-transfer torque magnetic random access memory (STT-MRAM) has been considered as a promising universal memory candidate for future memory and computing systems, thanks to its nonvolatility, high speed, low power, good endurance, and scalability. However, as technology scales down, STT-MRAM suffers from serious process variations and thermal fluctuations, which greatly degrade the performance and stability of STT-MRAM. In general, the optimization and robustness of STT-MRAM under process variations often require a hybrid design flow and multilevel codesign strategies. In this paper, we quantitatively analyze the impacts of process variations and thermal fluctuations on the STT-MRAM performances from physics, technology, and circuit design point of views. Based on the analyses, we found that readability is becoming the newest challenge for deeply scaled STT-MRAM due to the conflict between sensing margin and read disturbance. To deal with this problem, a novel reconfigurable design strategy from device, circuit, and architecture codesign perspective is then presented. Finally, a conceptual hybrid magnetic/CMOS design flow is also proposed for STT-MRAM in deeply scaled technology nodes.",
"title": ""
},
{
"docid": "014306c73db11e9d9b9077868c94ed9f",
"text": "Flying Ad hoc Network (FANET) is a new resource-constrained breed and instantiation of Mobile Ad hoc Network (MANET) employing Unmanned Aerial Vehicles (UAVs) as communicating nodes. These latter follow a predefined path called 'mission' to provide a wide range of applications/services. Without loss of generality, the services and applications offered by the FANET are based on data/content delivery in various forms such as, but not limited to, pictures, video, status, warnings, and so on. Therefore, a content-centric communication mechanism such as Information Centric Networking (ICN) is essential for FANET. ICN addresses the problems of classical TCP/IP-based Internet. To this end, Content-centric networking (CCN), and Named Data Networking (NDN) are two of the most famous and widely-adapted implementations of ICN due to their intrinsic security mechanism and Interest/Data-based communication. To ensure data security, a signature on the contents is appended to each response/data packet in transit. However, trusted communication is of paramount importance and currently lacks in NDN-driven communication. To fill the gaps, in this paper, we propose a novel trust-aware Monitor-based communication architecture for Flying Named Data Networking (FNDN). We first select the monitors based on their trust and stability, which then become responsible for the interest packets dissemination to avoid broadcast storm problem. Once the interest reaches data producer, the data comes back to the requester through the shortest and most trusted path (which is also the same path through which the interest packet arrived at the producer). Simultaneously, the intermediate UAVs choose whether to check the data authenticity or not, following their subjective belief on its producer's behavior and thus-forth reducing the computation complexity and delay. Simulation results show that our proposal can sustain the vanilla NDN security levels exceeding the 80% dishonesty detection ratio while reducing the generated end-to-end delay to less than 1 s in the worst case and reducing the average consumed energy by more than two times.",
"title": ""
},
{
"docid": "c5ca7be10aec26359f27350494821cd7",
"text": "When moving through a tracked immersive virtual environment, it is sometimes useful to deviate from the normal one-to-one mapping of real to virtual motion. One option is the application of rotation gain, where the virtual rotation of a user around the vertical axis is amplified or reduced by a factor. Previous research in head-mounted display environments has shown that rotation gain can go unnoticed to a certain extent, which is exploited in redirected walking techniques. Furthermore, it can be used to increase the effective field of regard in projection systems. However, rotation gain has never been studied in CAVE systems, yet. In this work, we present an experiment with 87 participants examining the effects of rotation gain in a CAVE-like virtual environment. The results show no significant effects of rotation gain on simulator sickness, presence, or user performance in a cognitive task, but indicate that there is a negative influence on spatial knowledge especially for inexperienced users. In secondary results, we could confirm results of previous work and demonstrate that they also hold for CAVE environments, showing a negative correlation between simulator sickness and presence, cognitive performance and spatial knowledge, a positive correlation between presence and spatial knowledge, a mitigating influence of experience with 3D applications and previous CAVE exposure on simulator sickness, and a higher incidence of simulator sickness in women.",
"title": ""
}
] |
scidocsrr
|