query_id: string, length 32
query: string, length 6 to 3.9k
positive_passages: list, length 1 to 21
negative_passages: list, length 10 to 100
subset: string, 7 classes
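Each row below follows this schema: a 32-character query_id, a free-text query, lists of positive and negative passages (each entry a dict with docid, text, and title fields, as in the records that follow), and a subset label drawn from 7 classes. A minimal sketch of loading and inspecting such rows is given here; the file name and the assumption that rows are stored as JSON Lines are illustrative only and are not specified by this dump.

import json

# Assumed storage format: one JSON record per line (JSON Lines); the path is hypothetical.
PATH = "scidocsrr_rows.jsonl"

def iter_records(path):
    """Yield records shaped like the schema above: query_id (str), query (str),
    positive_passages (list of dicts), negative_passages (list of dicts), subset (str)."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                yield json.loads(line)

if __name__ == "__main__":
    for rec in iter_records(PATH):
        print(rec["query_id"], repr(rec["query"][:60]),
              len(rec["positive_passages"]), "positives,",
              len(rec["negative_passages"]), "negatives,",
              "subset =", rec["subset"])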
4e97169528430631823341734e2375ec
Rich Image Captioning in the Wild
[ { "docid": "6a1e614288a7977b72c8037d9d7725fb", "text": "We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and Image Captioning when one predicted region covers the full image. To address the localization and description task jointly we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external regions proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state of the art approaches in both generation and retrieval settings.", "title": "" }, { "docid": "30260d1a4a936c79e6911e1e91c3a84a", "text": "Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent sentence. The second uses the penultimate activation layer of the CNN as input to a recurrent neural network (RNN) that then generates the caption sequence. In this paper, we compare the merits of these different language modeling approaches for the first time by using the same state-ofthe-art CNN as input. We examine issues in the different approaches, including linguistic irregularities, caption repetition, and data set overlap. By combining key aspects of the ME and RNN methods, we achieve a new record performance over previously published results on the benchmark COCO dataset. However, the gains we see in BLEU do not translate to human judgments.", "title": "" } ]
[ { "docid": "3a7a7fa5e41a6195ca16f172b72f89a1", "text": "To integrate unpredictable human behavior in the assessment of active and passive pedestrian safety systems, we introduce a virtual reality (VR)-based pedestrian simulation system. The device uses the Xsens Motion Capture platform and can be used without additional infrastructure. To show the systems applicability for pedestrian behavior studies, we conducted a pilot study evaluating the degree of realism such a system can achieve in a typical unregulated pedestrian crossing scenario. Six participants had to estimate vehicle speeds and distances in four scenarios with varying gaps between vehicles. First results indicate an acceptable level of realism so that the device can be used for further user studies addressing pedestrian behavior, pedestrian interaction with (automated) vehicles, risk assessment and investigation of the pre-crash phase without the risk of injuries.", "title": "" }, { "docid": "88cf953ba92b54f89cdecebd4153bee3", "text": "In this paper, we propose a novel object detection framework named \"Deep Regionlets\" by establishing a bridge between deep neural networks and conventional detection schema for accurate generic object detection. Motivated by the abilities of regionlets for modeling object deformation and multiple aspect ratios, we incorporate regionlets into an end-to-end trainable deep learning framework. The deep regionlets framework consists of a region selection network and a deep regionlet learning module. Specifically, given a detection bounding box proposal, the region selection network provides guidance on where to select regions to learn the features from. The regionlet learning module focuses on local feature selection and transformation to alleviate local variations. To this end, we first realize non-rectangular region selection within the detection framework to accommodate variations in object appearance. Moreover, we design a “gating network\" within the regionlet leaning module to enable soft regionlet selection and pooling. The Deep Regionlets framework is trained end-to-end without additional efforts. We perform ablation studies and conduct extensive experiments on the PASCAL VOC and Microsoft COCO datasets. The proposed framework outperforms state-of-theart algorithms, such as RetinaNet and Mask R-CNN, even without additional segmentation labels.", "title": "" }, { "docid": "b82c7c8f36ea16c29dfc5fa00a58b229", "text": "Green cloud computing has become a major concern in both industry and academia, and efficient scheduling approaches show promising ways to reduce the energy consumption of cloud computing platforms while guaranteeing QoS requirements of tasks. Existing scheduling approaches are inadequate for realtime tasks running in uncertain cloud environments, because those approaches assume that cloud computing environments are deterministic and pre-computed schedule decisions will be statically followed during schedule execution. In this paper, we address this issue. We introduce an interval number theory to describe the uncertainty of the computing environment and a scheduling architecture to mitigate the impact of uncertainty on the task scheduling quality for a cloud data center. Based on this architecture, we present a novel scheduling algorithm (PRS) that dynamically exploits proactive and reactive scheduling methods, for scheduling real-time, aperiodic, independent tasks. 
To improve energy efficiency, we propose three strategies to scale up and down the system’s computing resources according to workload to improve resource utilization and to reduce energy consumption for the cloud data center. We conduct extensive experiments to compare PRS with four typical baseline scheduling algorithms. The experimental results show that PRS performs better than those algorithms, and can effectively improve the performance of a cloud data center.", "title": "" }, { "docid": "215bb5273dbf5c301ae4170b5da39a34", "text": "We describe a simple but effective method for cross-lingual syntactic transfer of dependency parsers, in the scenario where a large amount of translation data is not available. This method makes use of three steps: 1) a method for deriving cross-lingual word clusters, which can then be used in a multilingual parser; 2) a method for transferring lexical information from a target language to source language treebanks; 3) a method for integrating these steps with the density-driven annotation projection method of Rasooli and Collins (2015). Experiments show improvements over the state-of-the-art in several languages used in previous work, in a setting where the only source of translation data is the Bible, a considerably smaller corpus than the Europarl corpus used in previous work. Results using the Europarl corpus as a source of translation data show additional improvements over the results of Rasooli and Collins (2015). We conclude with results on 38 datasets from the Universal Dependencies corpora.", "title": "" }, { "docid": "e2606242fcc89bfcf5c9c4cd71dd2c18", "text": "This letter introduces the class of generalized punctured convolutional codes (GPCCs), which is broader than and encompasses the class of the standard punctured convolutional codes (PCCs). A code in this class can be represented by a trellis module, the GPCC trellis module, whose topology resembles that of the minimal trellis module. he GPCC trellis module for a PCC is isomorphic to the minimal trellis module. A list containing GPCCs with better distance spectrum than the best known PCCs with same code rate and trellis complexity is presented.", "title": "" }, { "docid": "316e4fa32d0b000e6f833d146a9e0d80", "text": "Magnetic equivalent circuits (MECs) are becoming an accepted alternative to electrical-equivalent lumped-parameter models and finite-element analysis (FEA) for simulating electromechanical devices. Their key advantages are moderate computational effort, reasonable accuracy, and flexibility in model size. MECs are easily extended into three dimensions. But despite the successful use of MEC as a modeling tool, a generalized 3-D formulation useable for a comprehensive computer-aided design tool has not yet emerged (unlike FEA, where general modeling tools are readily available). This paper discusses the framework of a 3-D MEC modeling approach, and presents the implementation of a variable-sized reluctance network distribution based on 3-D elements. Force calculation and modeling of moving objects are considered. Two experimental case studies, a soft-ferrite inductor and an induction machine, show promising results when compared to measurements and simulations of lumped parameter and FEA models.", "title": "" }, { "docid": "b058bbc1485f99f37c0d72b960dd668b", "text": "In two experiments short-term forgetting was investigated in a short-term cued recall task designed to examine proactive interference effects. 
Mixed modality study lists were tested at varying retention intervals using verbal and non-verbal distractor activities. When an interfering foil was read aloud and a target item read silently, strong PI effects were observed for both types of distractor activity. When the target was read aloud and followed by a verbal distractor activity, weak PI effects emerged. However, when a target item was read aloud and non-verbal distractor activity filled the retention interval, performance was immune to the effects of PI for at least eight seconds. The results indicate that phonological representations of items read aloud still influence performance after 15 seconds of distractor activity. Determinants of Short-term Forgetting: Decay, Retroactive Interference or Proactive Interference? Most current models of short-term memory assert that to-be-remembered items are represented in terms of easily degraded phonological representations. However, there is disagreement on how the traces become degraded. Some propose that trace degradation is due to decay brought about by the prevention of rehearsal (Baddeley, 1986; Burgess & Hitch, 1992; 1996), or a switch in attention (Cowan, 1993); others attribute degradation to retroactive interference (RI) from other list items (Nairne, 1990; Tehan & Fallon, in press; Tehan & Humphreys, 1998). We want to add proactive interference (PI) to the possible causes of short-term forgetting, and by showing how PI effects change as a function of the type of distractor task employed during a filled retention interval, we hope to evaluate the causes of trace degradation. By manipulating the type of distractor activity in a brief retention interval it is possible to test some of the assumptions about decay versus interference explanations of short-term forgetting. The decay position is quite straightforward. If rehearsal is prevented, then the trace should decay; the type of distractor activity should be immaterial as long as rehearsal is prevented. From the interference perspective both the Feature Model (Nairne, 1990) and the Tehan and Humphreys (1995, 1998) connectionist model predict that there should be occasions where very little forgetting occurs. In the Feature Model items are represented as sets of modality dependent and modality independent features. Forgetting occurs when adjacent list items have common features. Some of the shared features of the first item are overwritten by the latter item, thereby producing a trace that bears only partial resemblance to the original item. One occasion in which interference would be minimized is when an auditory list is followed by a non-auditory distractor task. The modality dependent features of the list items would not be overwritten or degraded by the distractor activity because the modality dependent features of the list and distractor items are different to each other. By the same logic, a visually presented list should not be affected by an auditory distractor task, since modality specific features are again different in each case. In the Tehan and Humphreys (1995) approach, presentation modality is related to the strength of phonological representations that support recall. They assume that auditory activity produces stronger representations than does visual activity. Thus this model also predicts that when a list is presented auditorially, it will not be much affected by subsequent non-auditory distractor activity. However, in the case of a visual list with auditory distraction, the assumption would be that interference would be maximised. The phonological codes for the list items would be relatively weak in the first instance and a strong source of auditory retroactive interference follows. This prediction is the opposite of that derived from the Feature Model. Since PI effects appear to be sensitive to retention interval effects (Tehan & Humphreys, 1995; Wickens, Moody & Dow, 1981), we have chosen to employ a PI task to explore these differential predictions. We have recently developed a short-term cued recall task in which PI can easily be manipulated (Tehan & Humphreys, 1995; 1996; 1998). In this task, participants study a series of trials in which items are presented in blocks of four items with each trial consisting of either one or two blocks. Each trial has a target item that is an instance of either a taxonomic or rhyme category, and the category label is presented at test as a retrieval cue. The two-block trials are the important trials because it is in these trials that PI is manipulated. In these trials the two blocks are presented under directed forgetting instructions. That is, once participants find out that it is a two-block trial they are to forget the first block and remember the second block because the second block contains the target item. On control trials, all nontarget items in both blocks are unrelated to the target. On interference trials, a foil that is related to the target is embedded among three other to-be-forgotten fillers in the first block and the target is embedded among three unrelated filler items in the second block. Following the presentation of the second block the category cue is presented and subjects are asked to recall the word from the second block that is an instance of that category. Using this task we have been able to show that when taxonomic categories are used on an immediate test (e.g., dog is the foil, cat is the target and ANIMAL is the cue), performance is immune to PI. However, when recall is tested after a 2-second filled retention interval, PI effects are observed; target recall is depressed and the foil is often recalled instead of the target. In explaining these results, Tehan and Humphreys (1995) assumed that items were represented in terms of sets of features. The representation of an item was seen to involve both semantic and phonological features, with the phonological features playing a dominant role in item recall. They assumed that the cue would elicit the representations of the two items in the list, and that while the semantic features of both target and foil would be available, only the target would have active phonological features. Thus on an immediate test, knowing that the target ended in -at would make the task of discriminating between cat and dog relatively easy. On a delayed test they assumed that all phonological features were inactive and the absence of phonological information would make discrimination more difficult. A corollary of the Tehan and Humphreys (1995) assumption is that if phonological codes could be provided for a non-rhyming foil, then discrimination should again be problematic. Presentation modality is one variable that appears to produce differences in strength of phonological codes, with reading aloud producing stronger representations than reading silently. Tehan and Humphreys (Experiment 5) varied the modality of the two blocks such that participants either read the first block silently and then read the second block aloud or vice versa. In the silent-aloud condition performance was immune to PI. The assumption was that the phonological representation of the target item in the second block was very strong with the result that there were no problems in discrimination. However, PI effects were present in the aloud-silent condition. The phonological representation of the read-aloud foil appeared to serve as a strong source of competition to the read-silently target item. All the above research has been based on the premise that phonological representations for visually presented items are weak and rapidly lose their ability to support recall. This assumption seems tenable given that phonological similarity effects and phonological intrusion effects in serial recall are attenuated rapidly with brief periods of distractor activity (Conrad, 1967; Estes, 1973; Tehan & Humphreys, 1995). The cued recall experiments that have used a filled retention interval have always employed silent visual presentation of the study list and required spoken shadowing of the distractor items. That is, the phonological representations of both target and foil are assumed to be quite weak and the shadowing task would provide a strong source of interference. These are likely to be the conditions that produce maximum levels of PI. The patterns of PI may change with mixed modality study lists and alternative forms of distractor activity. For example, given a strong phonological representation of the target, weak representations of the foil and a weak source of retroactive interference, it might be possible to observe immunity to PI on a delayed test. The following experiments explore the relationship between presentation modality, distractor modality and PI. Experiment 1. The Tehan and Humphreys (1995) mixed modality experiment indicated that PI effects were sensitive to the modalities of the first and second block of items. In the current study we use mixed modality study lists but this time include a two-second retention interval, the same as that used by Tehan and Humphreys. However, the modality of the distractor activity was varied as well. Participants either had to respond aloud verbally or make a manual response that did not involve any verbal output. From the Tehan and Humphreys perspective the assumption made is that the verbal distractor activity will produce more disruption to the phonological representation of the target item than will a non-verbal distractor activity and the PI will be observed. However, it is quite possible that with silent-aloud presentation and a non-verbal distractor activity immunity to PI might be maintained across a two-second retention interval. From the Nairne perspective, interfe", "title": "" }, { "docid": "b1239f2e9bfec604ac2c9851c8785c09", "text": "BACKGROUND\nDecoding neural activities associated with limb movements is the key of motor prosthesis control. So far, most of these studies have been based on invasive approaches. Nevertheless, a few researchers have decoded kinematic parameters of single hand in non-invasive ways such as magnetoencephalogram (MEG) and electroencephalogram (EEG). Regarding these EEG studies, center-out reaching tasks have been employed.
Yet whether hand velocity can be decoded using EEG recorded during a self-routed drawing task is unclear.\n\n\nMETHODS\nHere we collected whole-scalp EEG data of five subjects during a sequential 4-directional drawing task, and employed spatial filtering algorithms to extract the amplitude and power features of EEG in multiple frequency bands. From these features, we reconstructed hand movement velocity by Kalman filtering and a smoothing algorithm.\n\n\nRESULTS\nThe average Pearson correlation coefficients between the measured and the decoded velocities are 0.37 for the horizontal dimension and 0.24 for the vertical dimension. The channels on motor, posterior parietal and occipital areas are most involved for the decoding of hand velocity. By comparing the decoding performance of the features from different frequency bands, we found that not only slow potentials in 0.1-4 Hz band but also oscillatory rhythms in 24-28 Hz band may carry the information of hand velocity.\n\n\nCONCLUSIONS\nThese results provide another support to neural control of motor prosthesis based on EEG signals and proper decoding methods.", "title": "" }, { "docid": "1fb87bc370023dc3fdfd9c9097288e71", "text": "Protein is essential for living organisms, but digestibility of crude protein is poorly understood and difficult to predict. Nitrogen is used to estimate protein content because nitrogen is a component of the amino acids that comprise protein, but a substantial portion of the nitrogen in plants may be bound to fiber in an indigestible form. To estimate the amount of crude protein that is unavailable in the diets of mountain gorillas (Gorilla beringei) in Bwindi Impenetrable National Park, Uganda, foods routinely eaten were analyzed to determine the amount of nitrogen bound to the acid-detergent fiber residue. The amount of fiber-bound nitrogen varied among plant parts: herbaceous leaves 14.5+/-8.9% (reported as a percentage of crude protein on a dry matter (DM) basis), tree leaves (16.1+/-6.7% DM), pith/herbaceous peel (26.2+/-8.9% DM), fruit (34.7+/-17.8% DM), bark (43.8+/-15.6% DM), and decaying wood (85.2+/-14.6% DM). When crude protein and available protein intake of adult gorillas was estimated over a year, 15.1% of the dietary crude protein was indigestible. These results indicate that the proportion of fiber-bound protein in primate diets should be considered when estimating protein intake, food selection, and food/habitat quality.", "title": "" }, { "docid": "60e56a59ecbdee87005407ed6a117240", "text": "The visionary Steve Jobs said, “A lot of times, people don’t know what they want until you show it to them.” A powerful recommender system not only shows people similar items, but also helps them discover what they might like, and items that complement what they already purchased. In this paper, we attempt to instill a sense of “intention” and “style” into our recommender system, i.e., we aim to recommend items that are visually complementary with those already consumed. By identifying items that are visually coherent with a query item/image, our method facilitates exploration of the long tail items, whose existence users may be even unaware of. This task is formulated only recently by Julian et al. [1], with the input being millions of item pairs that are frequently viewed/bought together, entailing noisy style coherence. In the same work, the authors proposed a Mahalanobisbased transform to discriminate a given pair to be sharing a same style or not. 
Despite its success, we experimentally found that it’s only able to recommend items on the margin of different clusters, which leads to limited coverage of the items to be recommended. Another limitation is it totally ignores the existence of taxonomy information that is ubiquitous in many datasets like Amazon the authors experimented with. In this report, we propose two novel methods that make use of the hierarchical category metadata to overcome the limitations identified above. The main contributions are listed as following.", "title": "" }, { "docid": "0c420c064519e15e071660c750c0b7e3", "text": "In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.", "title": "" }, { "docid": "4ca7e1893c0ab71d46af4954f7daf58e", "text": "Identifying coordinate transformations that make strongly nonlinear dynamics approximately linear has the potential to enable nonlinear prediction, estimation, and control using linear theory. The Koopman operator is a leading data-driven embedding, and its eigenfunctions provide intrinsic coordinates that globally linearize the dynamics. However, identifying and representing these eigenfunctions has proven challenging. This work leverages deep learning to discover representations of Koopman eigenfunctions from data. Our network is parsimonious and interpretable by construction, embedding the dynamics on a low-dimensional manifold. We identify nonlinear coordinates on which the dynamics are globally linear using a modified auto-encoder. We also generalize Koopman representations to include a ubiquitous class of systems with continuous spectra. Our framework parametrizes the continuous frequency using an auxiliary network, enabling a compact and efficient embedding, while connecting our models to decades of asymptotics. Thus, we benefit from the power of deep learning, while retaining the physical interpretability of Koopman embeddings. It is often advantageous to transform a strongly nonlinear system into a linear one in order to simplify its analysis for prediction and control. 
Here the authors combine dynamical systems with deep learning to identify these hard-to-find transformations.", "title": "" }, { "docid": "eeff1f2e12e5fc5403be8c2d7ca4d10c", "text": "Optical Character Recognition (OCR) systems have been effectively developed for the recognition of printed script. The accuracy of OCR system mainly depends on the text preprocessing and segmentation algorithm being used. When the document is scanned it can be placed in any arbitrary angle which would appear on the computer monitor at the same angle. This paper addresses the algorithm for correction of skew angle generated in scanning of the text document and a novel profile based method for segmentation of printed text which separates the text in document image into lines, words and characters. Keywords—Skew correction, Segmentation, Text preprocessing, Horizontal Profile, Vertical Profile.", "title": "" }, { "docid": "ce8914e02eeed8fb228b5b2950cf87de", "text": "Different alternatives to detect and diagnose faults in induction machines have been proposed and implemented in the last years. The technology of artificial neural networks has been successfully used to solve the motor incipient fault detection problem. The characteristics, obtained by this technique, distinguish them from the traditional ones, which, in most cases, need that the machine which is being analyzed is not working to do the diagnosis. This paper reviews an artificial neural network (ANN) based technique to identify rotor faults in a three-phase induction motor. The main types of faults considered are broken bar and dynamic eccentricity. At light load, it is difficult to distinguish between healthy and faulty rotors because the characteristic broken rotor bar fault frequencies are very close to the fundamental component and their amplitudes are small in comparison. As a result, detection of the fault and classification of the fault severity under light load is almost impossible. In order to overcome this problem, the detection of rotor faults in induction machines is done by analysing the starting current using a newly developed quantification technique based on artificial neural networks.", "title": "" }, { "docid": "33b4ba89053ed849d23758f6e3b06b09", "text": "We develop a deep architecture to learn to find good correspondences for wide-baseline stereo. Given a set of putative sparse matches and the camera intrinsics, we train our network in an end-to-end fashion to label the correspondences as inliers or outliers, while simultaneously using them to recover the relative pose, as encoded by the essential matrix. Our architecture is based on a multi-layer perceptron operating on pixel coordinates rather than directly on the image, and is thus simple and small. We introduce a novel normalization technique, called Context Normalization, which allows us to process each data point separately while embedding global information in it, and also makes the network invariant to the order of the correspondences. Our experiments on multiple challenging datasets demonstrate that our method is able to drastically improve the state of the art with little training data.", "title": "" }, { "docid": "2aae53713324b297f0e145ef8d808ce9", "text": "In this paper some theoretical and (potentially) practical aspects of quantum computing are considered. 
Using the tools of transcendental number theory it is demonstrated that quantum Turing machines (QTM) with rational amplitudes are sufficient to define the class of bounded error quantum polynomial time (BQP) introduced by Bernstein and Vazirani [Proc. 25th ACM Symposium on Theory of Computation, 1993, pp. 11–20, SIAM J. Comput., 26 (1997), pp. 1411–1473]. On the other hand, if quantum Turing machines are allowed unrestricted amplitudes (i.e., arbitrary complex amplitudes), then the corresponding BQP class has uncountable cardinality and contains sets of all Turing degrees. In contrast, allowing unrestricted amplitudes does not increase the power of computation for error-free quantum polynomial time (EQP). Moreover, with unrestricted amplitudes, BQP is not equal to EQP. The relationship between quantum complexity classes and classical complexity classes is also investigated. It is shown that when quantum Turing machines are restricted to have transition amplitudes which are algebraic numbers, BQP, EQP, and nondeterministic quantum polynomial time (NQP) are all contained in PP, hence in P#P and PSPACE. A potentially practical issue of designing “machine independent” quantum programs is also addressed. A single (“almost universal”) quantum algorithm based on Shor’s method for factoring integers is developed which would run correctly on almost all quantum computers, even if the underlying unitary transformations are unknown to the programmer and the device builder.", "title": "" }, { "docid": "f617b8b5c2c5fc7829cbcd0b2e64ed2d", "text": "This paper proposes a novel lifelong learning (LL) approach to sentiment classification. LL mimics the human continuous learning process, i.e., retaining the knowledge learned from past tasks and use it to help future learning. In this paper, we first discuss LL in general and then LL for sentiment classification in particular. The proposed LL approach adopts a Bayesian optimization framework based on stochastic gradient descent. Our experimental results show that the proposed method outperforms baseline methods significantly, which demonstrates that lifelong learning is a promising research direction.", "title": "" }, { "docid": "925709dfe0d0946ca06d05b290f2b9bd", "text": "Mentalization, operationalized as reflective functioning (RF), can play a crucial role in the psychological mechanisms underlying personality functioning. This study aimed to: (a) study the association between RF, personality disorders (cluster level) and functioning; (b) investigate whether RF and personality functioning are influenced by (secure vs. insecure) attachment; and (c) explore the potential mediating effect of RF on the relationship between attachment and personality functioning. The Shedler-Westen Assessment Procedure (SWAP-200) was used to assess personality disorders and levels of psychological functioning in a clinical sample (N = 88). Attachment and RF were evaluated with the Adult Attachment Interview (AAI) and Reflective Functioning Scale (RFS). Findings showed that RF had significant negative associations with cluster A and B personality disorders, and a significant positive association with psychological functioning. Moreover, levels of RF and personality functioning were influenced by attachment patterns. Finally, RF completely mediated the relationship between (secure/insecure) attachment and adaptive psychological features, and thus accounted for differences in overall personality functioning. 
Lack of mentalization seemed strongly associated with vulnerabilities in personality functioning, especially in patients with cluster A and B personality disorders. These findings provide support for the development of therapeutic interventions to improve patients' RF.", "title": "" }, { "docid": "9a1d6be6fbce508e887ee4e06a932cd2", "text": "For ranked search in encrypted cloud data, order preserving encryption (OPE) is an efficient tool to encrypt relevance scores of the inverted index. When using deterministic OPE, the ciphertexts will reveal the distribution of relevance scores. Therefore, Wang et al. proposed a probabilistic OPE, called one-to-many OPE, for applications of searchable encryption, which can flatten the distribution of the plaintexts. In this paper, we proposed a differential attack on one-to-many OPE by exploiting the differences of the ordered ciphertexts. The experimental results show that the cloud server can get a good estimate of the distribution of relevance scores by a differential attack. Furthermore, when having some background information on the outsourced documents, the cloud server can accurately infer the encrypted keywords using the estimated distributions.", "title": "" }, { "docid": "460e8daf5dfc9e45c3ade5860aa9cc57", "text": "Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the planner. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games, with deeper trees often outperforming shallower ones. We also present a qualitative analysis that sheds light on the trees learned by TreeQN.", "title": "" } ]
scidocsrr
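A record such as the one above pairs a query with relevant (positive) and non-relevant (negative) passages, which is exactly the shape needed to score a retrieval ranking. The sketch below computes the reciprocal rank of the first positive passage for one record; the scoring function is a deliberately naive lexical-overlap placeholder standing in for any retrieval model, and is not something defined by this dataset.

def reciprocal_rank(record, score):
    """Rank positive and negative passages together with score(query, passage_text)
    and return 1/rank of the best-ranked positive passage (0.0 if none is found)."""
    query = record["query"]
    candidates = ([(p, True) for p in record["positive_passages"]]
                  + [(p, False) for p in record["negative_passages"]])
    ranked = sorted(candidates, key=lambda c: score(query, c[0]["text"]), reverse=True)
    for rank, (_, is_positive) in enumerate(ranked, start=1):
        if is_positive:
            return 1.0 / rank
    return 0.0

def overlap_score(query, text):
    """Toy scorer: fraction of query tokens that also appear in the passage."""
    q = set(query.lower().split())
    return len(q & set(text.lower().split())) / (len(q) or 1)

# Example use: rr = reciprocal_rank(rec, overlap_score); averaging rr over records gives MRR.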
56bff8526270ff83758c75bc68eb1666
Development of a cloud-based RTAB-map service for robots
[ { "docid": "82835828a7f8c073d3520cdb4b6c47be", "text": "Simultaneous Localization and Mapping (SLAM) for mobile robots is a computationally expensive task. A robot capable of SLAM needs a powerful onboard computer, but this can limit the robot's mobility because of weight and power demands. We consider moving this task to a remote compute cloud, by proposing a general cloud-based architecture for real-time robotics computation, and then implementing a Rao-Blackwellized Particle Filtering-based SLAM algorithm in a multi-node cluster in the cloud. In our implementation, expensive computations are executed in parallel, yielding significant improvements in computation time. This allows the algorithm to increase the complexity and frequency of calculations, enhancing the accuracy of the resulting map while freeing the robot's onboard computer for other tasks. Our method for implementing particle filtering in the cloud is not specific to SLAM and can be applied to other computationally-intensive tasks.", "title": "" } ]
[ { "docid": "3eaa3a1a3829345aaa597cf843f720d6", "text": "Relationship science is a theory-rich discipline, but there have been no attempts to articulate the broader themes or principles that cut across the theories themselves. We have sought to fill that void by reviewing the psychological literature on close relationships, particularly romantic relationships, to extract its core principles. This review reveals 14 principles, which collectively address four central questions: (a) What is a relationship? (b) How do relationships operate? (c) What tendencies do people bring to their relationships? (d) How does the context affect relationships? The 14 principles paint a cohesive and unified picture of romantic relationships that reflects a strong and maturing discipline. However, the principles afford few of the sorts of conflicting predictions that can be especially helpful in fostering novel theory development. We conclude that relationship science is likely to benefit from simultaneous pushes toward both greater integration across theories (to reduce redundancy) and greater emphasis on the circumstances under which existing (or not-yet-developed) principles conflict with one another.", "title": "" }, { "docid": "5de11e0cbfce77414d1c552007d63892", "text": "© 2012 Cassisi et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Similarity Measures and Dimensionality Reduction Techniques for Time Series Data Mining", "title": "" }, { "docid": "0d5ba680571a9051e70ababf0c685546", "text": "• Current deep RL techniques require large amounts of data to find a good policy • Once found, the policy remains a black box to practitioners • Practitioners cannot verify that the policy is making decisions based on reasonable information • MOREL (Motion-Oriented REinforcement Learning) automatically detects moving objects and uses the relevant information for action selection • We gather a dataset using a uniform random policy • Train a network without supervision to capture a structured representation of motion between frames • Network predicts object masks, object motion, and camera motion to warp one frame into the next Introduction Learning to Segment Moving Objects Experiments Visualization", "title": "" }, { "docid": "6e675e8a57574daf83ab78cea25688f5", "text": "Collecting quality data from software projects can be time-consuming and expensive. Hence, some researchers explore “unsupervised” approaches to quality prediction that does not require labelled data. An alternate technique is to use “supervised” approaches that learn models from project data labelled with, say, “defective” or “not-defective”. Most researchers use these supervised models since, it is argued, they can exploit more knowledge of the projects. \nAt FSE’16, Yang et al. reported startling results where unsupervised defect predictors outperformed supervised predictors for effort-aware just-in-time defect prediction. If confirmed, these results would lead to a dramatic simplification of a seemingly complex task (data mining) that is widely explored in the software engineering literature. \nThis paper repeats and refutes those results as follows. (1) There is much variability in the efficacy of the Yang et al. 
predictors so even with their approach, some supervised data is required to prune weaker predictors away. (2) Their findings were grouped across N projects. When we repeat their analysis on a project-by-project basis, supervised predictors are seen to work better. \nEven though this paper rejects the specific conclusions of Yang et al., we still endorse their general goal. In our experiments, supervised predictors did not perform outstandingly better than unsupervised ones for effort-aware just-in-time defect prediction. Hence, there may indeed be some combination of unsupervised learners that achieves comparable performance to supervised ones. We therefore encourage others to work in this promising area.", "title": "" }, { "docid": "264c63f249f13bf3eb4fd5faac8f4fa0", "text": "This paper presents the study to investigate the possibility of the stand-alone micro hydro for low-cost electricity production which can satisfy the energy load requirements of a typical remote and isolated rural area. In this framework, the feasibility study in terms of the technical and economical performance of the micro hydro system is determined according to the rural electrification concept. The proposed axial flux permanent magnet (AFPM) generator will be designed for micro hydro under sustainable development to optimize between cost and efficiency by using the local materials and basic engineering knowledge. First of all, the simple simulation of the micro hydro model for the lighting system is developed by considering the optimal size of the AFPM generator. The simulation results show that the optimal micro hydro power plant with 70 W can supply the 9 W compact fluorescent up to 20 sets for 8 hours by using a water pressure head of 6 meters and a flow rate of 0.141 m3/min. Lastly, the proposed micro hydro power plant can supply a lighting system for rural electrification up to 525.6 kWh/year or 1,839.60 Baht/year and reduce CO2 emission by 0.33 ton/year.", "title": "" }, { "docid": "bf57a5fcf6db7a9b26090bd9a4b65784", "text": "Plate osteosynthesis is still recognized as the treatment of choice for most articular fractures, many metaphyseal fractures, and certain diaphyseal fractures such as in the forearm. Since the 1960s, both the techniques and implants used for internal fixation with plates have evolved to provide for improved healing. Most recently, plating methods have focused on the principles of 'biological fixation'. These methods attempt to preserve the blood supply to improve the rate of fracture healing, decrease the need for bone grafting, and decrease the incidence of infection and re-fracture. The purpose of this article is to provide a brief overview of the history of plate osteosynthesis as it relates to the development of the latest minimally invasive surgical techniques.", "title": "" }, { "docid": "fe95e139aab1453750224bd856059fcf", "text": "IMPORTANCE\nChronic sinusitis is a common inflammatory condition defined by persistent symptomatic inflammation of the sinonasal cavities lasting longer than 3 months. It accounts for 1% to 2% of total physician encounters and is associated with large health care expenditures.
Appropriate use of medical therapies for chronic sinusitis is necessary to optimize patient quality of life (QOL) and daily functioning and minimize the risk of acute inflammatory exacerbations.\n\n\nOBJECTIVE\nTo summarize the highest-quality evidence on medical therapies for adult chronic sinusitis and provide an evidence-based approach to assist in optimizing patient care.\n\n\nEVIDENCE REVIEW\nA systematic review searched Ovid MEDLINE (1947-January 30, 2015), EMBASE, and Cochrane Databases. The search was limited to randomized clinical trials (RCTs), systematic reviews, and meta-analyses. Evidence was categorized into maintenance and intermittent or rescue therapies and reported based on the presence or absence of nasal polyps.\n\n\nFINDINGS\nTwenty-nine studies met inclusion criteria: 12 meta-analyses (>60 RCTs), 13 systematic reviews, and 4 RCTs that were not included in any of the meta-analyses. Saline irrigation improved symptom scores compared with no treatment (standardized mean difference [SMD], 1.42 [95% CI, 1.01 to 1.84]; a positive SMD indicates improvement). Topical corticosteroid therapy improved overall symptom scores (SMD, -0.46 [95% CI, -0.65 to -0.27]; a negative SMD indicates improvement), improved polyp scores (SMD, -0.73 [95% CI, -1.0 to -0.46]; a negative SMD indicates improvement), and reduced polyp recurrence after surgery (relative risk, 0.59 [95% CI, 0.45 to 0.79]). Systemic corticosteroids and oral doxycycline (both for 3 weeks) reduced polyp size compared with placebo for 3 months after treatment (P < .001). Leukotriene antagonists improved nasal symptoms compared with placebo in patients with nasal polyps (P < .01). Macrolide antibiotic for 3 months was associated with improved QOL at a single time point (24 weeks after therapy) compared with placebo for patients without polyps (SMD, -0.43 [95% CI, -0.82 to -0.05]).\n\n\nCONCLUSIONS AND RELEVANCE\nEvidence supports daily high-volume saline irrigation with topical corticosteroid therapy as a first-line therapy for chronic sinusitis. A short course of systemic corticosteroids (1-3 weeks), short course of doxycycline (3 weeks), or a leukotriene antagonist may be considered in patients with nasal polyps. A prolonged course (3 months) of macrolide antibiotic may be considered for patients without polyps.", "title": "" }, { "docid": "983ec9cdd75d0860c96f89f3c9b2f752", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at . http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "db822d9deda1a707b6e6385c79aa93e2", "text": "We propose simple tangible language elements for very young children to use when constructing programmes. The equivalent Turtle Talk instructions are given for comparison. Two examples of the tangible language code are shown to illustrate alternative methods of solving a given challenge.", "title": "" }, { "docid": "980dc3d4b01caac3bf56df039d5ca513", "text": "In this paper, we study object detection using a large pool of unlabeled images and only a few labeled images per category, named \"few-example object detection\". The key challenge consists in generating trustworthy training samples as many as possible from the pool. 
Using few training examples as seeds, our method iterates between model training and high-confidence sample selection. In training, easy samples are generated first and, then the poorly initialized model undergoes improvement. As the model becomes more discriminative, challenging but reliable samples are selected. After that, another round of model improvement takes place. To further improve the precision and recall of the generated training samples, we embed multiple detection models in our framework, which has proven to outperform the single model baseline and the model ensemble method. Experiments on PASCAL VOC'07, MS COCO'14, and ILSVRC'13 indicate that by using as few as three or four samples selected for each category, our method produces very competitive results when compared to the state-of-the-art weakly-supervised approaches using a large number of image-level labels.", "title": "" }, { "docid": "c62bc7391e55d66c9e27befe81446ebe", "text": "Opaque predicates have been widely used to insert superfluous branches for control flow obfuscation. Opaque predicates can be seamlessly applied together with other obfuscation methods such as junk code to turn reverse engineering attempts into arduous work. Previous efforts in detecting opaque predicates are far from mature. They are either ad hoc, designed for a specific problem, or have a considerably high error rate. This paper introduces LOOP, a Logic Oriented Opaque Predicate detection tool for obfuscated binary code. Being different from previous work, we do not rely on any heuristics; instead we construct general logical formulas, which represent the intrinsic characteristics of opaque predicates, by symbolic execution along a trace. We then solve these formulas with a constraint solver. The result accurately answers whether the predicate under examination is opaque or not. In addition, LOOP is obfuscation resilient and able to detect previously unknown opaque predicates. We have developed a prototype of LOOP and evaluated it with a range of common utilities and obfuscated malicious programs. Our experimental results demonstrate the efficacy and generality of LOOP. By integrating LOOP with code normalization for matching metamorphic malware variants, we show that LOOP is an appealing complement to existing malware defenses.", "title": "" }, { "docid": "0b08e657d012d26310c88e2129c17396", "text": "In order to accurately determine the growth of greenhouse crops, the system based on AVR Single Chip microcontroller and wireless sensor networks is developed, it transfers data through the wireless transceiver devices without setting up electric wiring, the system structure is simple. The monitoring and management center can control the temperature and humidity of the greenhouse, measure the carbon dioxide content, and collect the information about intensity of illumination, and so on. In addition, the system adopts multilevel energy memory. It combines energy management with energy transfer, which makes the energy collected by solar energy batteries be used reasonably. Therefore, the self-managing energy supply system is established. The system has advantages of low power consumption, low cost, good robustness, extended flexible. 
An effective tool is provided for monitoring and analysis decision-making of the greenhouse environment.", "title": "" }, { "docid": "7c8d1b0c77acb4fd6db6e7f887e66133", "text": "Subdural hematomas (SDH) in infants often result from nonaccidental head injury (NAHI), which is diagnosed based on the absence of history of trauma and the presence of associated lesions. When these are lacking, the possibility of spontaneous SDH in infant (SSDHI) is raised, but this entity is hotly debated; in particular, the lack of positive diagnostic criteria has hampered its recognition. The role of arachnoidomegaly, idiopathic macrocephaly, and dehydration in the pathogenesis of SSDHI is also much discussed. We decided to analyze apparent cases of SSDHI from our prospective databank. We selected cases of SDH in infants without systemic disease, history of trauma, and suspicion of NAHI. All cases had fundoscopy and were evaluated for possible NAHI. Head growth curves were reconstructed in order to differentiate idiopathic from symptomatic macrocrania. Sixteen patients, 14 males and two females, were diagnosed with SSDHI. Twelve patients had idiopathic macrocrania, seven of these being previously diagnosed with arachnoidomegaly on imaging. Five had risk factors for dehydration, including two with severe enteritis. Two patients had mild or moderate retinal hemorrhage, considered not indicative of NAHI. Thirteen patients underwent cerebrospinal fluid drainage. The outcome was favorable in almost all cases; one child has sequels, which were attributable to obstetrical difficulties. SSDHI exists but is rare and cannot be diagnosed unless NAHI has been questioned thoroughly. The absence of traumatic features is not sufficient, and positive elements like macrocrania, arachnoidomegaly, or severe dehydration are necessary for the diagnosis of SSDHI.", "title": "" }, { "docid": "0ad4432a79ea6b3eefbe940adf55ff7b", "text": "This study reviews the long-term outcome of prostheses and fixtures (implants) in 759 totally edentulous jaws of 700 patients. A total of 4,636 standard fixtures were placed and followed according to the osseointegration method for a maximum of 24 years by the original team at the University of Göteborg. Standardized annual clinical and radiographic examinations were conducted as far as possible. A lifetable approach was applied for statistical analysis. Sufficient numbers of fixtures and prostheses for a detailed statistical analysis were present for observation times up to 15 years. More than 95% of maxillae had continuous prosthesis stability at 5 and 10 years, and at least 92% at 15 years. The figure for mandibles was 99% at all time intervals. Calculated from the time of fixture placement, the estimated survival rates for individual fixtures in the maxilla were 84%, 89%, and 92% at 5 years; 81% and 82% at 10 years; and 78% at 15 years. In the mandible they were 91%, 98%, and 99% at 5 years; 89% and 98% at 10 years; and 86% at 15 years. (The different percentages at 5 and 10 years refer to results for different routine groups of fixtures with 5 to 10, 10 to 15, and 1 to 5 years of observation time, respectively.) 
The results of this study concur with multicenter and earlier results for the osseointegration method.", "title": "" }, { "docid": "2b688f9ca05c2a79f896e3fee927cc0d", "text": "This paper presents a new synchronous-reference frame (SRF)-based control method to compensate power-quality (PQ) problems through a three-phase four-wire unified PQ conditioner (UPQC) under unbalanced and distorted load conditions. The proposed UPQC system can improve the power quality at the point of common coupling on power distribution systems under unbalanced and distorted load conditions. The simulation results based on Matlab/Simulink are discussed in detail to support the SRF-based control method presented in this paper. The proposed approach is also validated through experimental study with the UPQC hardware prototype.", "title": "" }, { "docid": "2a7983e91cd674d95524622e82c4ded7", "text": "• FC (fully-connected) layer takes the pooling results, produces features F_ROI, F_context, F_frame, and feeds them into two streams, inspired by [BV16]. • Classification stream produces a matrix of classification scores S = [FC_cls(F_ROI_1); ...; FC_cls(F_ROI_K)] ∈ R^{K×C}. • Localization stream implements the proposed context-aware guidance that uses F_ROI_k, F_context_k, F_frame_k to produce a localization score matrix L ∈ R^{K×C}.", "title": "" }, { "docid": "9e208a394475931aafdcdfbad1408489", "text": "Ocular complications following cosmetic filler injections are serious situations. This study provided scientific evidence that filler in the facial and the superficial temporal arteries could enter into the orbits and the globes on both sides. We demonstrated the existence of an embolic channel connecting the arterial system of the face to the ophthalmic artery. After the removal of the ocular contents from both eyes, liquid dye was injected into the cannulated channel of the superficial temporal artery in six soft embalmed cadavers and different color dye was injected into the facial artery on both sides successively. The interior sclera was monitored for dye oozing from retrograde ophthalmic perfusion. Among all 12 globes, dye injections from the 12 superficial temporal arteries entered ipsilateral globes in three and the contralateral globe in two arteries. Dye from the facial artery was infused into five ipsilateral globes and in three contralateral globes. Dye injections of two facial arteries in the same cadaver resulted in bilateral globe staining but those of the superficial temporal arteries did not. Direct communications between the same and different arteries of the four cannulated arteries were evidenced by dye dripping from the cannulating needle hubs in 14 of 24 injected arteries. Compression of the orbital rim at the superior nasal corner retarded ocular infusion in 11 of 14 arterial injections. Under some specific conditions favoring embolism, persistent interarterial anastomoses between the face and the eye allowed filler emboli to flow into the globe causing ocular complications. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies.
For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .", "title": "" }, { "docid": "db3c5c93daf97619ad927532266b3347", "text": "Car9, a dodecapeptide identified by cell surface display for its ability to bind to the edge of carbonaceous materials, also binds to silica with high affinity. The interaction can be disrupted with l-lysine or l-arginine, enabling a broad range of technological applications. Previously, we reported that C-terminal Car9 extensions support efficient protein purification on underivatized silica. Here, we show that the Car9 tag is functional and TEV protease-excisable when fused to the N-termini of target proteins, and that it supports affinity purification under denaturing conditions, albeit with reduced yields. We further demonstrate that capture of Car9-tagged proteins is enhanced on small particle size silica gels with large pores, that the concomitant problem of nonspecific protein adsorption can be solved by lysing cells in the presence of 0.3% Tween 20, and that efficient elution is achieved at reduced l-lysine concentrations under alkaline conditions. An optimized small-scale purification kit incorporating the above features allows Car9-tagged proteins to be inexpensively recovered in minutes with better than 90% purity. The Car9 affinity purification technology should prove valuable for laboratory-scale applications requiring rapid access to milligram-quantities of proteins, and for preparative scale purification schemes where cost and productivity are important factors.", "title": "" }, { "docid": "3f207c3c622d1854a7ad6c5365354db1", "text": "The field of Music Information Retrieval has always acknowledged the need for rigorous scientific evaluations, and several efforts have set out to develop and provide the infrastructure, technology and methodologies needed to carry out these evaluations. The community has enormously gained from these evaluation forums, but we have reached a point where we are stuck with evaluation frameworks that do not allow us to improve as much and as well as we want. The community recently acknowledged this problem and showed interest in addressing it, though it is not clear what to do to improve the situation. We argue that a good place to start is again the Text IR field. Based on a formalization of the evaluation process, this paper presents a survey of past evaluation work in the context of Text IR, from the point of view of validity, reliability and efficiency of the experiments. We show the problems that our community currently has in terms of evaluation, point to several lines of research to improve it and make various proposals in that line.", "title": "" }, { "docid": "84b018fa45e06755746309014854bb9a", "text": "For years, ontologies have been known in computer science as consensual models of domains of discourse, usually implemented as formal definitions of the relevant conceptual entities. Researchers have written much about the potential benefits of using them, and most of us regard ontologies as central building blocks of the semantic Web and other semantic systems. Unfortunately, the number and quality of actual, \"non-toy\" ontologies available on the Web today is remarkably low. This implies that the semantic Web community has yet to build practically useful ontologies for a lot of relevant domains in order to make the semantic Web a reality. 
Theoretically minded advocates often assume that the lack of ontologies is because the \"stupid business people haven't realized ontologies' enormous benefits.\" As a liberal market economist, the author assumes that humans can generally figure out what's best for their well-being, at least in the long run, and that they act accordingly. In other words, the fact that people haven't yet created as many useful ontologies as the ontology research community would like might indicate either unresolved technical limitations or the existence of sound rationales for why individuals refrain from building them - or both. Indeed, several social and technical difficulties exist that put a brake on developing and eventually constrain the space of possible ontologies", "title": "" } ]
scidocsrr
d211f8d25ed48575a3f39ca00c42ea4c
Managing Non-Volatile Memory in Database Systems
[ { "docid": "149b1f7861d55e90b1f423ff98e765ca", "text": "The advent of Storage Class Memory (SCM) is driving a rethink of storage systems towards a single-level architecture where memory and storage are merged. In this context, several works have investigated how to design persistent trees in SCM as a fundamental building block for these novel systems. However, these trees are significantly slower than DRAM-based counterparts since trees are latency-sensitive and SCM exhibits higher latencies than DRAM. In this paper we propose a novel hybrid SCM-DRAM persistent and concurrent B-Tree, named Fingerprinting Persistent Tree (FPTree) that achieves similar performance to DRAM-based counterparts. In this novel design, leaf nodes are persisted in SCM while inner nodes are placed in DRAM and rebuilt upon recovery. The FPTree uses Fingerprinting, a technique that limits the expected number of in-leaf probed keys to one. In addition, we propose a hybrid concurrency scheme for the FPTree that is partially based on Hardware Transactional Memory. We conduct a thorough performance evaluation and show that the FPTree outperforms state-of-the-art persistent trees with different SCM latencies by up to a factor of 8.2. Moreover, we show that the FPTree scales very well on a machine with 88 logical cores. Finally, we integrate the evaluated trees in memcached and a prototype database. We show that the FPTree incurs an almost negligible performance overhead over using fully transient data structures, while significantly outperforming other persistent trees.", "title": "" } ]
[ { "docid": "20436a21b4105700d7e95a477a22d830", "text": "We introduce a new type of Augmented Reality games: By using a simple webcam and Computer Vision techniques, we turn a standard real game board pawns into an AR game. We use these objects as a tangible interface, and augment them with visual effects. The game logic can be performed automatically by the computer. This results in a better immersion compared to the original board game alone and provides a different experience than a video game. We demonstrate our approach on Monopoly− [1], but it is very generic and could easily be adapted to any other board game.", "title": "" }, { "docid": "467bb4ffb877b4e21ad4f7fc7adbd4a6", "text": "In this paper, a 6 × 6 planar slot array based on a hollow substrate integrated waveguide (HSIW) is presented. To eliminate the tilting of the main beam, the slot array is fed from the centre at the back of the HSIW, which results in a blockage area. To reduce the impact on sidelobe levels, a slot extrusion technique is introduced. A simplified multiway power divider is demonstrated to feed the array elements and the optimisation procedure is described. To verify the antenna design, a 6 × 6 planar array is fabricated and measured in a low temperature co-fired ceramic (LTCC) technology. The HSIW has lower loss, comparable to standard WR28, and a high gain of 17.1 dBi at 35.5 GHz has been achieved in the HSIW slot array.", "title": "" }, { "docid": "572453e5febc5d45be984d7adb5436c5", "text": "An analysis of several role playing games indicates that player quests share common elements, and that these quests may be abstractly represented using a small expressive language. One benefit of this representation is that it can guide procedural content generation by allowing quests to be generated using this abstraction, and then later converting them into a concrete form within a game’s domain.", "title": "" }, { "docid": "539fb99a52838d6ce6f980b9b9703a2b", "text": "The Blinder-Oaxaca decomposition technique is widely used to identify and quantify the separate contributions of differences in measurable characteristics to group differences in an outcome of interest. The use of a linear probability model and the standard BlinderOaxaca decomposition, however, can provide misleading estimates when the dependent variable is binary, especially when group differences are very large for an influential explanatory variable. A simulation method of performing a nonlinear decomposition that uses estimates from a logit, probit or other nonlinear model was first developed in a Journal of Labor Economics article (Fairlie 1999). This nonlinear decomposition technique has been used in nearly a thousand subsequent studies published in a wide range of fields and disciplines. In this paper, I address concerns over path dependence in using the nonlinear decomposition technique. I also present a straightforward method of incorporating sample weights in the technique. I thank Eric Aldrich and Ben Jann for comments and suggestions, and Brandon Heck for research assistance.", "title": "" }, { "docid": "590e0965ca61223d5fefb82e89f24fd0", "text": "Large software projects contain significant code duplication, mainly due to copying and pasting code. Many techniques have been developed to identify duplicated code to enable applications such as refactoring, detecting bugs, and protecting intellectual property. 
Because source code is often unavailable, especially for third-party software, finding duplicated code in binaries becomes particularly important. However, existing techniques operate primarily on source code, and no effective tool exists for binaries.\n In this paper, we describe the first practical clone detection algorithm for binary executables. Our algorithm extends an existing tree similarity framework based on clustering of characteristic vectors of labeled trees with novel techniques to normalize assembly instructions and to accurately and compactly model their structural information. We have implemented our technique and evaluated it on Windows XP system binaries totaling over 50 million assembly instructions. Results show that it is both scalable and precise: it analyzed Windows XP system binaries in a few hours and produced few false positives. We believe our technique is a practical, enabling technology for many applications dealing with binary code.", "title": "" }, { "docid": "a4a15096e116a6afc2730d1693b1c34f", "text": "The present study reports on the construction of a dimensional measure of gender identity (gender dysphoria) for adolescents and adults. The 27-item gender identity/gender dysphoria questionnaire for adolescents and adults (GIDYQ-AA) was administered to 389 university students (heterosexual and nonheterosexual) and 73 clinic-referred patients with gender identity disorder. Principal axis factor analysis indicated that a one-factor solution, accounting for 61.3% of the total variance, best fits the data. Factor loadings were all >or= .30 (median, .82; range, .34-.96). A mean total score (Cronbach's alpha, .97) was computed, which showed strong evidence for discriminant validity in that the gender identity patients had significantly more gender dysphoria than both the heterosexual and nonheterosexual university students. Using a cut-point of 3.00, we found the sensitivity was 90.4% for the gender identity patients and specificity was 99.7% for the controls. The utility of the GIDYQ-AA is discussed.", "title": "" }, { "docid": "82234158dc94216222efa5f80eee0360", "text": "We investigate the possibility to prove security of the well-known blind signature schemes by Chaum, and by Pointcheval and Stern in the standard model, i.e., without random oracles. We subsume these schemes under a more general class of blind signature schemes and show that finding security proofs for these schemes via black-box reductions in the standard model is hard. Technically, our result deploys meta-reduction techniques showing that black-box reductions for such schemes could be turned into efficient solvers for hard non-interactive cryptographic problems like RSA or discrete-log. Our technique yields significantly stronger impossibility results than previous meta-reductions in other settings by playing off the two security requirements of the blind signatures (unforgeability and blindness).", "title": "" }, { "docid": "d0985c38f3441ca0d69af8afaf67c998", "text": "In this paper we discuss the importance of ambiguity, uncertainty and limited information on individuals’ decision making in situations that have an impact on their privacy. 
We present experimental evidence from a survey study that demonstrates the impact of framing a marketing offer on participants’ willingness to accept when the consequences of the offer are uncertain and highly ambiguous.", "title": "" }, { "docid": "96c1f90ff04e7fd37d8b8a16bc4b9c54", "text": "Graph triangulation, which finds all triangles in a graph, has been actively studied due to its wide range of applications in the network analysis and data mining. With the rapid growth of graph data size, disk-based triangulation methods are in demand but little researched. To handle a large-scale graph which does not fit in memory, we must iteratively load small parts of the graph. In the existing literature, achieving the ideal cost has been considered to be impossible for billion-scale graphs due to the memory size constraint. In this paper, we propose an overlapped and parallel disk-based triangulation framework for billion-scale graphs, OPT, which achieves the ideal cost by (1) full overlap of the CPU and I/O operations and (2) full parallelism of multi-core CPU and FlashSSD I/O. In OPT, triangles in memory are called the internal triangles while triangles constituting vertices in memory and vertices in external memory are called the external triangles. At the macro level, OPT overlaps the internal triangulation and the external triangulation, while it overlaps the CPU and I/O operations at the micro level. Thereby, the cost of OPT is close to the ideal cost. Moreover, OPT instantiates both vertex-iterator and edge-iterator models and benefits from multi-thread parallelism on both types of triangulation. Extensive experiments conducted on large-scale datasets showed that (1) OPT achieved the elapsed time close to that of the ideal method with less than 7% of overhead under the limited memory budget, (2) OPT achieved linear speed-up with an increasing number of CPU cores, (3) OPT outperforms the state-of-the-art parallel method by up to an order of magnitude with 6 CPU cores, and (4) for the first time in the literature, the triangulation results are reported for a billion-vertex scale real-world graph.", "title": "" }, { "docid": "6a33013c19dc59d8871e217461d479e9", "text": "Cancer tissues in histopathology images exhibit abnormal patterns; it is of great clinical importance to label a histopathology image as having cancerous regions or not and perform the corresponding image segmentation. However, the detailed annotation of cancer cells is often an ambiguous and challenging task. In this paper, we propose a new learning method, multiple clustered instance learning (MCIL), to classify, segment and cluster cancer cells in colon histopathology images. The proposed MCIL method simultaneously performs image-level classification (cancer vs. non-cancer image), pixel-level segmentation (cancer vs. non-cancer tissue), and patch-level clustering (cancer subclasses). We embed the clustering concept into the multiple instance learning (MIL) setting and derive a principled solution to perform the above three tasks in an integrated framework. Experimental results demonstrate the efficiency and effectiveness of MCIL in analyzing colon cancers.", "title": "" }, { "docid": "b32286014bb7105e62fba85a9aab9019", "text": "PURPOSE\nSystemic thrombolysis for the treatment of acute pulmonary embolism (PE) carries an estimated 20% risk of major hemorrhage, including a 3%-5% risk of hemorrhagic stroke. 
The authors used evidence-based methods to evaluate the safety and effectiveness of modern catheter-directed therapy (CDT) as an alternative treatment for massive PE.\n\n\nMATERIALS AND METHODS\nThe systematic review was initiated by electronic literature searches (MEDLINE, EMBASE) for studies published from January 1990 through September 2008. Inclusion criteria were applied to select patients with acute massive PE treated with modern CDT. Modern techniques were defined as the use of low-profile devices (< or =10 F), mechanical fragmentation and/or aspiration of emboli including rheolytic thrombectomy, and intraclot thrombolytic injection if a local drug was infused. Relevant non-English language articles were translated into English. Paired reviewers assessed study quality and abstracted data. Meta-analysis was performed by using random effects models to calculate pooled estimates for complications and clinical success rates across studies. Clinical success was defined as stabilization of hemodynamics, resolution of hypoxia, and survival to hospital discharge.\n\n\nRESULTS\nFive hundred ninety-four patients from 35 studies (six prospective, 29 retrospective) met the criteria for inclusion. The pooled clinical success rate from CDT was 86.5% (95% confidence interval [CI]: 82.1%, 90.2%). Pooled risks of minor and major procedural complications were 7.9% (95% CI: 5.0%, 11.3%) and 2.4% (95% CI: 1.9%, 4.3%), respectively. Data on the use of systemic thrombolysis before CDT were available in 571 patients; 546 of those patients (95%) were treated with CDT as the first adjunct to heparin without previous intravenous thrombolysis.\n\n\nCONCLUSIONS\nModern CDT is a relatively safe and effective treatment for acute massive PE. At experienced centers, CDT should be considered as a first-line treatment for patients with massive PE.", "title": "" }, { "docid": "1ee444fda98b312b0462786f5420f359", "text": "After years of banning consumer devices (e.g., iPads and iPhone) and applications (e.g., DropBox, Evernote, iTunes) organizations are allowing employees to use their consumer tools in the workplace. This IT consumerization phenomenon will have serious consequences on IT departments which have historically valued control, security, standardization and support (Harris et al. 2012). Based on case studies of three organizations in different stages of embracing IT consumerization, this study identifies the conflicts IT consumerization creates for IT departments. All three organizations experienced similar goal and behavior conflicts, while identity conflict varied depending upon the organizations’ stage implementing consumer tools (e.g., embryonic, initiating or institutionalized). Theoretically, this study advances IT consumerization research by applying a role conflict perspective to understand consumerization’s impact on the IT department.", "title": "" }, { "docid": "da9432171ceba5ae76fa76a8416b1a8f", "text": "Social tagging on online portals has become a trend now. It has emerged as one of the best ways of associating metadata with web objects. With the increase in the kinds of web objects becoming available, collaborative tagging of such objects is also developing along new dimensions. This popularity has led to a vast literature on social tagging. In this survey paper, we would like to summarize different techniques employed to study various aspects of tagging. 
Broadly, we would discuss about properties of tag streams, tagging models, tag semantics, generating recommendations using tags, visualizations of tags, applications of tags and problems associated with tagging usage. We would discuss topics like why people tag, what influences the choice of tags, how to model the tagging process, kinds of tags, different power laws observed in tagging domain, how tags are created, how to choose the right tags for recommendation, etc. We conclude with thoughts on future work in the area.", "title": "" }, { "docid": "318aa0dab44cca5919100033aa692cd9", "text": "Text classification is one of the important research issues in the field of text mining, where the documents are classified with supervised knowledge. In literature we can find many text representation schemes and classifiers/learning algorithms used to classify text documents to the predefined categories. In this paper, we present various text representation schemes and compare different classifiers used to classify text documents to the predefined classes. The existing methods are compared and contrasted based on qualitative parameters viz., criteria used for classification, algorithms adopted and classification time complexities.", "title": "" }, { "docid": "709853992cae8d5b5fa4c3cc86d898a7", "text": "The rise of big data age in the Internet has led to the explosive growth of data size. However, trust issue has become the biggest problem of big data, leading to the difficulty in data safe circulation and industry development. The blockchain technology provides a new solution to this problem by combining non-tampering, traceable features with smart contracts that automatically execute default instructions. In this paper, we present a credible big data sharing model based on blockchain technology and smart contract to ensure the safe circulation of data resources.", "title": "" }, { "docid": "c5f521d5e5e089261914f6784e2d77da", "text": "Generating structured query language (SQL) from natural language is an emerging research topic. This paper presents a new learning paradigm from indirect supervision of the answers to natural language questions, instead of SQL queries. This paradigm facilitates the acquisition of training data due to the abundant resources of question-answer pairs for various domains in the Internet, and expels the difficult SQL annotation job. An endto-end neural model integrating with reinforcement learning is proposed to learn SQL generation policy within the answerdriven learning paradigm. The model is evaluated on datasets of different domains, including movie and academic publication. Experimental results show that our model outperforms the baseline models.", "title": "" }, { "docid": "0ccfbd8f2b8979ec049d94fa6dddf614", "text": "Using mobile games in education combines situated and active learning with fun in a potentially excellent manner. The effects of a mobile city game called Frequency 1550, which was developed by The Waag Society to help pupils in their first year of secondary education playfully acquire historical knowledge of medieval Amsterdam, were investigated in terms of pupil engagement in the game, historical knowledge, and motivation for History in general and the topic of the Middle Ages in particular. A quasi-experimental design was used with 458 pupils from 20 classes from five schools. The pupils in 10 of the classes played the mobile history game whereas the pupils in the other 10 classes received a regular, project-based lesson series. 
The results showed those pupils who played the game to be engaged and to gain significantly more knowledge about medieval Amsterdam than those pupils who received regular project-based instruction. No significant differences were found between the two groups with respect to motivation for History or the Middle Ages. The impact of location-based technology and game-based learning on pupil knowledge and motivation is discussed along with suggestions for future research.", "title": "" },
    { "docid": "9415adaa3ec2f7873a23cc2017a2f1ee", "text": "In this paper we introduce a new unsupervised reinforcement learning method for discovering the set of intrinsic options available to an agent. This set is learned by maximizing the number of different states an agent can reliably reach, as measured by the mutual information between the set of options and option termination states. To this end, we instantiate two policy gradient based algorithms, one that creates an explicit embedding space of options and one that represents options implicitly. The algorithms also provide an explicit measure of empowerment in a given state that can be used by an empowerment maximizing agent. The algorithm scales well with function approximation and we demonstrate the applicability of the algorithm on a range of tasks.", "title": "" },
    { "docid": "50875a63d0f3e1796148d809b5673081", "text": "Coreference resolution seeks to find the mentions in text that refer to the same real-world entity. This task has been well-studied in NLP, but until recent years, empirical results have been disappointing. Recent research has greatly improved the state-of-the-art. In this review, we focus on five papers that represent the current state-of-the-art and discuss how they relate to each other and how these advances will influence future work in this area.", "title": "" } ]
scidocsrr
ef9f48caaba38c29329650121b2ef6c8
Predictive role of prenasal thickness and nasal bone for Down syndrome in the second trimester.
[ { "docid": "e7315716a56ffa7ef2461c7c99879efb", "text": "OBJECTIVE\nTo investigate the potential value of ultrasound examination of the fetal profile for present/hypoplastic fetal nasal bone at 15-22 weeks' gestation as a marker for trisomy 21.\n\n\nMETHODS\nThis was an observational ultrasound study in 1046 singleton pregnancies undergoing amniocentesis for fetal karyotyping at 15-22 (median, 17) weeks' gestation. Immediately before amniocentesis the fetal profile was examined to determine if the nasal bone was present or hypoplastic (absent or shorter than 2.5 mm). The incidence of nasal hypoplasia in the trisomy 21 and the chromosomally normal fetuses was determined and the likelihood ratio for trisomy 21 for nasal hypoplasia was calculated.\n\n\nRESULTS\nAll fetuses were successfully examined for the presence of the nasal bone. The nasal bone was hypoplastic in 21/34 (61.8%) fetuses with trisomy 21, in 12/982 (1.2%) chromosomally normal fetuses and in 1/30 (3.3%) fetuses with other chromosomal defects. In 3/21 (14.3%) trisomy 21 fetuses with nasal hypoplasia there were no other abnormal ultrasound findings. In the chromosomally normal group hypoplastic nasal bone was found in 0.5% of Caucasians and in 8.8% of Afro-Caribbeans. The likelihood ratio for trisomy 21 for hypoplastic nasal bone was 50.5 (95% CI 27.1-92.7) and for present nasal bone it was 0.38 (95% CI 0.24-0.56).\n\n\nCONCLUSION\nNasal bone hypoplasia at the 15-22-week scan is associated with a high risk for trisomy 21 and it is a highly sensitive and specific marker for this chromosomal abnormality.", "title": "" } ]
[ { "docid": "2adf5e06cfc7e6d8cf580bdada485a23", "text": "This paper describes the comprehensive Terrorism Knowledge Base TM (TKB TM) which will ultimately contain all relevant knowledge about terrorist groups, their members, leaders, affiliations , etc., and full descriptions of specific terrorist events. Led by world-class experts in terrorism , knowledge enterers have, with simple tools, been building the TKB at the rate of up to 100 assertions per person-hour. The knowledge is stored in a manner suitable for computer understanding and reasoning. The TKB also utilizes its reasoning modules to integrate data and correlate observations, generate scenarios, answer questions and compose explanations.", "title": "" }, { "docid": "87133250a9e04fd42f5da5ecacd39d70", "text": "Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space. Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher.", "title": "" }, { "docid": "cd0c1507c1187e686c7641388413d3b5", "text": "Inference of three-dimensional motion from the fusion of inertial and visual sensory data has to contend with the preponderance of outliers in the latter. Robust filtering deals with the joint inference and classification task of selecting which data fits the model, and estimating its state. We derive the optimal discriminant and propose several approximations, some used in the literature, others new. We compare them analytically, by pointing to the assumptions underlying their approximations, and empirically. We show that the best performing method improves the performance of state-of-the-art visual-inertial sensor fusion systems, while retaining the same computational complexity.", "title": "" }, { "docid": "7e683f15580e77b1e207731bb73b8107", "text": "The skeleton is essential for general shape representation. The commonly required properties of a skeletonization algorithm are that the extracted skeleton should be accurate; robust to noise, position and rotation; able to reconstruct the original object; and able to produce a connected skeleton in order to preserve its topological and hierarchical properties. 
However, the use of a discrete image presents a lot of problems that may influence the extraction of the skeleton. Moreover, most of the methods are memory-intensive and computationally intensive, and require a complex data structure. In this paper, we propose a fast, efficient and accurate skeletonization method for the extraction of a well-connected Euclidean skeleton based on a signed sequential Euclidean distance map. A connectivity criterion is proposed, which can be used to determine whether a given pixel is a skeleton point independently. The criterion is based on a set of point pairs along the object boundary, which are the nearest contour points to the pixel under consideration and its 8 neighbors. Our proposed method generates a connected Euclidean skeleton with a single pixel width without requiring a linking algorithm or iteration process. Experiments show that the runtime of our algorithm is faster than the distance transformation and is linearly proportional to the number of pixels of an image. © 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" },
    { "docid": "f2b6afabd67354280d091d11e8265b96", "text": "This paper aims to present three new methods for color detection and segmentation of road signs. The images are taken by a digital camera mounted in a car. The RGB images are converted into IHLS color space, and new methods are applied to extract the colors of the road signs under consideration. The methods are tested on hundreds of outdoor images in different light conditions, and they show high robustness. This project is part of the research taking place in Dalarna University/Sweden in the field of the ITS", "title": "" },
    { "docid": "8f289714182c490b726b8edbbb672cfd", "text": "Design and implementation of a 15 kV sub-nanosecond pulse generator using a Trigatron-type spark gap as a switch are presented. A straightforward and compact trigger generator using a pulse shaping network produces a trigger pulse with a sub-nanosecond rise time. A pulsed power system requires delivering a high voltage and high coulomb in a short rise time. This is achieved by using a pulse shaping network comprising parallel combinations of capacitors and inductors. Spark gap switches are used to switch the energy from the capacitive source to the inductive load. The pulse hence generated can be used for synchronization of two or more spark gaps. Because of the fast rise time and the high output voltage, the reliability of the synchronization is increased. Analytical calculations and simulations have been carried out to select the circuit parameters. Simulation results using MATLAB/SIMULINK have been implemented in the experimental setup and sub-nanosecond output waveforms have been obtained.", "title": "" },
    { "docid": "874b14b3c3e15b43de3310327affebaf", "text": "We present the Accelerated Quadratic Proxy (AQP) - a simple first-order algorithm for the optimization of geometric energies defined over triangular and tetrahedral meshes.\n The main stumbling block of current optimization techniques used to minimize geometric energies over meshes is slow convergence due to ill-conditioning of the energies at their minima. We observe that this ill-conditioning is in large part due to a Laplacian-like term existing in these energies. Consequently, we suggest to locally use a quadratic polynomial proxy, whose Hessian is taken to be the Laplacian, in order to achieve a preconditioning effect. 
This already improves stability and convergence, but more importantly allows incorporating acceleration in an almost universal way, that is independent of mesh size and of the specific energy considered.\n Experiments with AQP show it is rather insensitive to mesh resolution and requires a nearly constant number of iterations to converge; this is in strong contrast to other popular optimization techniques used today such as Accelerated Gradient Descent and Quasi-Newton methods, e.g., L-BFGS. We have tested AQP for mesh deformation in 2D and 3D as well as for surface parameterization, and found it to provide a considerable speedup over common baseline techniques.", "title": "" }, { "docid": "c7ea816f2bb838b8c5aac3cdbbd82360", "text": "Semantic annotated parallel corpora, though rare, play an increasingly important role in natural language processing. These corpora provide valuable data for computational tasks like sense-based machine translation and word sense disambiguation, but also to contrastive linguistics and translation studies. In this paper we present the ongoing development of a web-based corpus semantic annotation environment that uses the Open Multilingual Wordnet (Bond and Foster, 2013) as a sense inventory. The system includes interfaces to help coordinating the annotation project and a corpus browsing interface designed specifically to meet the needs of a semantically annotated corpus. The tool was designed to build the NTU-Multilingual Corpus (Tan and Bond, 2012). For the past six years, our tools have been tested and developed in parallel with the semantic annotation of a portion of this corpus in Chinese, English, Japanese and Indonesian. The annotation system is released under an open source license (MIT).", "title": "" }, { "docid": "933312292c64c916e69357c5aec42189", "text": "Augmented reality annotations and virtual scene navigation add new dimensions to remote collaboration. In this paper, we present a touchscreen interface for creating freehand drawings as world-stabilized annotations and for virtually navigating a scene reconstructed live in 3D, all in the context of live remote collaboration. Two main focuses of this work are (1) automatically inferring depth for 2D drawings in 3D space, for which we evaluate four possible alternatives, and (2) gesture-based virtual navigation designed specifically to incorporate constraints arising from partially modeled remote scenes. We evaluate these elements via qualitative user studies, which in addition provide insights regarding the design of individual visual feedback elements and the need to visualize the direction of drawings.", "title": "" }, { "docid": "4a043a02f3fad07797245b0a2c4ea4c5", "text": "The worldwide population of people over the age of 65 has been predicted to more than double from 1990 to 2025. Therefore, ubiquitous health-care systems have become an important topic of research in recent years. In this paper, an integrated system for portable electrocardiography (ECG) monitoring, with an on-board processor for time–frequency analysis of heart rate variability (HRV), is presented. The main function of proposed system comprises three parts, namely, an analog-to-digital converter (ADC) controller, an HRV processor, and a lossless compression engine. At the beginning, ECG data acquired from front-end circuits through the ADC controller is passed through the HRV processor for analysis. 
Next, the HRV processor performs real-time analysis of time–frequency HRV using the Lomb periodogram and a sliding window configuration. The Lomb periodogram is suited for spectral analysis of unevenly sampled data and has been applied to time–frequency analysis of HRV in the proposed system. Finally, the ECG data are compressed by 2.5 times using the lossless compression engine before output using universal asynchronous receiver/transmitter (UART). Bluetooth is employed to transmit analyzed HRV data and raw ECG data to a remote station for display or further analysis. The integrated ECG health-care system design proposed has been implemented using UMC 90 nm CMOS technology. 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "6eb229b17a4634183818ff4a15f981b6", "text": "Fine-grained image classification is a challenging task due to the large intra-class variance and small inter-class variance, aiming at recognizing hundreds of sub-categories belonging to the same basic-level category. Most existing fine-grained image classification methods generally learn part detection models to obtain the semantic parts for better classification accuracy. Despite achieving promising results, these methods mainly have two limitations: (1) not all the parts which obtained through the part detection models are beneficial and indispensable for classification, and (2) fine-grained image classification requires more detailed visual descriptions which could not be provided by the part locations or attribute annotations. For addressing the above two limitations, this paper proposes the two-stream model combing vision and language (CVL) for learning latent semantic representations. The vision stream learns deep representations from the original visual information via deep convolutional neural network. The language stream utilizes the natural language descriptions which could point out the discriminative parts or characteristics for each image, and provides a flexible and compact way of encoding the salient visual aspects for distinguishing sub-categories. Since the two streams are complementary, combing the two streams can further achieves better classification accuracy. Comparing with 12 state-of-the-art methods on the widely used CUB-200-2011 dataset for fine-grained image classification, the experimental results demonstrate our CVL approach achieves the best performance.", "title": "" }, { "docid": "06675c4b42683181cecce7558964c6b6", "text": "We present in this work an economic analysis of ransomware, with relevant data from Cryptolocker, CryptoWall, TeslaCrypt and other major strands. We include a detailed study of the impact that different price discrimination strategies can have on the success of a ransomware family, examining uniform pricing, optimal price discrimination and bargaining strategies and analysing their advantages and limitations. In addition, we present results of a preliminary survey that can helps in estimating an optimal ransom value. We discuss at each stage whether the different schemes we analyse have been encountered already in existing malware, and the likelihood of them being implemented and becoming successful. 
We hope this work will help to gain some useful insights for predicting how ransomware may evolve in the future and be better prepared to counter its current and future threat.", "title": "" }, { "docid": "0d9057d8a40eb8faa7e67128a7d24565", "text": "We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance. Our methods apply to distributionally robust optimization problems proposed by Ben-Tal et al., which put more weight on observations inducing high loss via a worst-case approach over a non-parametric uncertainty set on the underlying data distribution. Our algorithm solves the resulting minimax problems with nearly the same computational cost of stochastic gradient descent through the use of several carefully designed data structures. For a sample of size n, the per-iteration cost of our method scales as O(log n), which allows us to give optimality certificates that distributionally robust optimization provides at little extra cost compared to empirical risk minimization and stochastic gradient methods.", "title": "" }, { "docid": "c0b30475f78acefae1c15f9f5d6dc57b", "text": "Traditionally, autonomous cars make predictions about other drivers’ future trajectories, and plan to stay out of their way. This tends to result in defensive and opaque behaviors. Our key insight is that an autonomous car’s actions will actually affect what other cars will do in response, whether the car is aware of it or not. Our thesis is that we can leverage these responses to plan more efficient and communicative behaviors. We model the interaction between an autonomous car and a human driver as a dynamical system, in which the robot’s actions have immediate consequences on the state of the car, but also on human actions. We model these consequences by approximating the human as an optimal planner, with a reward function that we acquire through Inverse Reinforcement Learning. When the robot plans with this reward function in this dynamical system, it comes up with actions that purposefully change human state: it merges in front of a human to get them to slow down or to reach its own goal faster; it blocks two lanes to get them to switch to a third lane; or it backs up slightly at an intersection to get them to proceed first. Such behaviors arise from the optimization, without relying on hand-coded signaling strategies and without ever explicitly modeling communication. Our user study results suggest that the robot is indeed capable of eliciting desired changes in human state by planning using this dynamical system.", "title": "" }, { "docid": "898ff77dbfaf00efa3b08779a781aa0b", "text": "The monumental cost of health care, especially for chronic disease treatment, is quickly becoming unmanageable. This crisis has motivated the drive towards preventative medicine, where the primary concern is recognizing disease risk and taking action at the earliest signs. However, universal testing is neither time nor cost efficient. We propose CARE, a Collaborative Assessment and Recommendation Engine, which relies only on a patient's medical history using ICD-9-CM codes in order to predict future diseases risks. CARE uses collaborative filtering to predict each patient's greatest disease risks based on their own medical history and that of similar patients. We also describe an Iterative version, ICARE, which incorporates ensemble concepts for improved performance. 
These novel systems require no specialized information and provide predictions for medical conditions of all kinds in a single run. We present experimental results on a Medicare dataset, demonstrating that CARE and ICARE perform well at capturing future disease risks.", "title": "" }, { "docid": "bf4b6cd15c0b3ddb5892f1baea9dec68", "text": "The purpose of this study was to examine the distribution, abundance and characteristics of plastic particles in plankton samples collected routinely in Northeast Pacific ecosystems, and to contribute to the development of ideas for future research into the occurrence and impact of small plastic debris in marine pelagic ecosystems. Plastic debris particles were assessed from zooplankton samples collected as part of the National Oceanic and Atmospheric Administration's (NOAA) ongoing ecosystem surveys during two research cruises in the Southeast Bering Sea in the spring and fall of 2006 and four research cruises off the U.S. west coast (primarily off southern California) in spring, summer and fall of 2006, and in January of 2007. Nets with 0.505 mm mesh were used to collect surface samples during all cruises, and sub-surface samples during the four cruises off the west coast. The 595 plankton samples processed indicate that plastic particles are widely distributed in surface waters. The proportion of surface samples from each cruise that contained particles of plastic ranged from 8.75 to 84.0%, whereas particles were recorded in sub-surface samples from only one cruise (in 28.2% of the January 2007 samples). Spatial and temporal variability was apparent in the abundance and distribution of the plastic particles and mean standardized quantities varied among cruises with ranges of 0.004-0.19 particles/m³, and 0.014-0.209 mg dry mass/m³. Off southern California, quantities for the winter cruise were significantly higher, and for the spring cruise significantly lower than for the summer and fall surveys (surface data). Differences between surface particle concentrations and mass for the Bering Sea and California coast surveys were significant for pair-wise comparisons of the spring but not the fall cruises. The particles were assigned to three plastic product types: product fragments, fishing net and line fibers, and industrial pellets; and five size categories: <1 mm, 1-2.5 mm, >2.5-5 mm, >5-10 mm, and >10 mm. Product fragments accounted for the majority of the particles, and most were less than 2.5 mm in size. The ubiquity of such particles in the survey areas and predominance of sizes <2.5 mm implies persistence in these pelagic ecosystems as a result of continuous breakdown from larger plastic debris fragments, and widespread distribution by ocean currents. Detailed investigations of the trophic ecology of individual zooplankton species, and their encounter rates with various size ranges of plastic particles in the marine pelagic environment, are required in order to understand the potential for ingestion of such debris particles by these organisms. Ongoing plankton sampling programs by marine research institutes in large marine ecosystems are good potential sources of data for continued assessment of the abundance, distribution and potential impact of small plastic debris in productive coastal pelagic zones.", "title": "" }, { "docid": "0fe02fcc6f68ba1563d3f5d96a8da330", "text": "We present a novel technique for jointly predicting semantic arguments for lexical predicates. 
The task is to find the best matching between semantic roles and sentential spans, subject to structural constraints that come from expert linguistic knowledge (e.g., in the FrameNet lexicon). We formulate this task as an integer linear program (ILP); instead of using an off-the-shelf tool to solve the ILP, we employ a dual decomposition algorithm, which we adapt for exact decoding via a branch-and-bound technique. Compared to a baseline that makes local predictions, we achieve better argument identification scores and avoid all structural violations. Runtime is nine times faster than a proprietary ILP solver.", "title": "" },
    { "docid": "e1b6cc1dbd518760c414cd2ddbe88dd5", "text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Mind the Traps! Design Guidelines for Rigorous BCI Experiments Camille Jeunet, Stefan Debener, Fabien Lotte, Jeremie Mattout, Reinhold Scherer, Catharina Zich", "title": "" },
    { "docid": "8cbe0ff905a58e575f2d84e4e663a857", "text": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, there are only barely a few working on the privacy and security implications of this technology. This survey paper aims to bring these risks to light, and to look into the latest security and privacy work on MR. Specifically, we list and review the different protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI) as crucial components, if not the origins, of MR, as well as numerous related work from the larger area of mobile devices, wearables, and Internet-of-Things (IoT). We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. Further challenges and directions on MR security and privacy are also discussed.", "title": "" } ]
scidocsrr
01c3e01d851d2eea8a3d24dcf1cc9afa
New prototype of hybrid 3D-biometric facial recognition system
[ { "docid": "573f12acd3193045104c7d95bbc89f78", "text": "Automatic Face Recognition is one of the most emphasizing dilemmas in diverse of potential relevance like in different surveillance systems, security systems, authentication or verification of individual like criminals etc. Adjoining of dynamic expression in face causes a broad range of discrepancies in recognition systems. Facial Expression not only exposes the sensation or passion of any person but can also be used to judge his/her mental views and psychosomatic aspects. This paper is based on a complete survey of face recognition conducted under varying facial expressions. In order to analyze different techniques, motion-based, model-based and muscles-based approaches have been used in order to handle the facial expression and recognition catastrophe. The analysis has been completed by evaluating various existing algorithms while comparing their results in general. It also expands the scope for other researchers for answering the question of effectively dealing with such problems.", "title": "" } ]
[ { "docid": "ac29d60761976a263629a93167516fde", "text": "Abstruct1-V power supply high-speed low-power digital circuit technology with 0.5-pm multithreshold-voltage CMOS (MTCMOS) is proposed. This technology features both lowthreshold voltage and high-threshold voltage MOSFET’s in a single LSI. The low-threshold voltage MOSFET’s enhance speed Performance at a low supply voltage of 1 V or less, while the high-threshold voltage MOSFET’s suppress the stand-by leakage current during the sleep period. This technology has brought about logic gate characteristics of a 1.7-11s propagation delay time and 0.3-pW/MHz/gate power dissipation with a standard load. In addition, an MTCMOS standard cell library has been developed so that conventional CAD tools can be used to lay out low-voltage LSI’s. To demonstrate MTCMOS’s effectiveness, a PLL LSI based on standard cells was designed as a carrying vehicle. 18-MHz operation at 1 V was achieved using a 0.5-pm CMOS process.", "title": "" }, { "docid": "d63591706309cf602404c34de547184f", "text": "This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge.Note to Practitioners—Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.", "title": "" }, { "docid": "3ea6de664a7ac43a1602b03b46790f0a", "text": "After reviewing the design of a class of lowpass recursive digital filters having integer multiplier and linear phase characteristics, the possibilities for extending the class to include high pass, bandpass, and bandstop (‘notch’) filters are described. Experience with a PDP 11 computer has shown that these filters may be programmed simply using machine code, and that online operation at sampling rates up to about 8 kHz is possible. The practical application of such filters is illustrated by using a notch desgin to remove mains-frequency interference from an e.c.g. waveform. Après avoir passé en revue la conception d'un type de filtres digitaux récurrents passe-bas à multiplicateurs incorporés et à caractéristiques de phase linéaires, cet article décrit les possibilités d'extension de ce type aux filtres, passe-haut, passe-bande et à élimination de bande. 
", "title": "" },
    { "docid": "5d21df36697616719bcc3e0ee22a08bd", "text": "In spite of the significant recent progress, the incorporation of haptics into virtual environments is still in its infancy due to limitations in the hardware, the cost of development, as well as the level of reality they provide. Nonetheless, we believe that the field will one day be one of the groundbreaking media of the future. It has its current holdups but the promise of the future is worth the wait. The technology is becoming cheaper and applications are becoming more forthcoming and apparent. If we can survive this infancy, it will promise to be an amazing revolution in the way we interact with computers and the virtual world. The researchers organize the rapidly increasing multidisciplinary research of haptics into four subareas: human haptics, machine haptics, computer haptics, and multimedia haptics", "title": "" },
    { "docid": "4c12d10fd9c2a12e56b56f62f99333f3", "text": "The science of large-scale brain networks offers a powerful paradigm for investigating cognitive and affective dysfunction in psychiatric and neurological disorders. This review examines recent conceptual and methodological developments which are contributing to a paradigm shift in the study of psychopathology. I summarize methods for characterizing aberrant brain networks and demonstrate how network analysis provides novel insights into dysfunctional brain architecture. Deficits in access, engagement and disengagement of large-scale neurocognitive networks are shown to play a prominent role in several disorders including schizophrenia, depression, anxiety, dementia and autism. Synthesizing recent research, I propose a triple network model of aberrant saliency mapping and cognitive dysfunction in psychopathology, emphasizing the surprising parallels that are beginning to emerge across psychiatric and neurological disorders.", "title": "" },
    { "docid": "705b2a837b51ac5e354e1ec0df64a52a", "text": "BACKGROUND\nGeneralized anxiety disorder (GAD) is a psychiatric disorder characterized by a constant and unspecific anxiety that interferes with daily-life activities. Its high prevalence in general population and the severe limitations it causes, point out the necessity to find new efficient strategies to treat it. 
Together with the cognitive-behavioural treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation that it is hard to be learned. To overcome this limitation we propose the use of virtual reality (VR) to facilitate the relaxation process by visually presenting key relaxing images to the subjects. The visual presentation of a virtual calm scenario can facilitate patients' practice and mastery of relaxation, making the experience more vivid and real than the one that most subjects can create using their own imagination and memory, and triggering a broad empowerment process within the experience induced by a high sense of presence. According to these premises, the aim of the present study is to investigate the advantages of using a VR-based relaxation protocol in reducing anxiety in patients affected by GAD.\n\n\nMETHODS/DESIGN\nThe trial is based on a randomized controlled study, including three groups of 25 patients each (for a total of 75 patients): (1) the VR group, (2) the non-VR group and (3) the waiting list (WL) group. Patients in the VR group will be taught to relax using a VR relaxing environment and audio-visual mobile narratives; patients in the non-VR group will be taught to relax using the same relaxing narratives proposed to the VR group, but without the VR support, and patients in the WL group will not receive any kind of relaxation training. Psychometric and psychophysiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as qualitative dependent variables.\n\n\nCONCLUSION\nWe argue that the use of VR for relaxation represents a promising approach in the treatment of GAD since it enhances the quality of the relaxing experience through the elicitation of the sense of presence. This controlled trial will be able to evaluate the effects of the use of VR in relaxation while preserving the benefits of randomization to reduce bias.\n\n\nTRIAL REGISTRATION\nNCT00602212 (ClinicalTrials.gov).", "title": "" }, { "docid": "2549177f9367d5641a7fc4dfcfaf5c0a", "text": "Educational data mining is an emerging trend, concerned with developing methods for exploring the huge data that come from the educational system. This data is used to derive the knowledge which is useful in decision making. EDM methods are useful to measure the performance of students, assessment of students and study students’ behavior etc. In recent years, Educational data mining has proven to be more successful at many of the educational statistics problems due to enormous computing power and data mining algorithms. This paper surveys the history and applications of data mining techniques in the educational field. The objective is to introduce data mining to traditional educational system, web-based educational system, intelligent tutoring system, and e-learning. This paper describes how to apply the main data mining methods such as prediction, classification, relationship mining, clustering, and", "title": "" }, { "docid": "9b7ca6e8b7bf87ef61e70ab4c720ec40", "text": "The support vector machine (SVM) is a widely used tool in classification problems. The SVM trains a classifier by solving an optimization problem to decide which instances of the training data set are support vectors, which are the necessarily informative instances to form the SVM classifier. 
Since support vectors are intact tuples taken from the training data set, releasing the SVM classifier for public use or shipping the SVM classifier to clients will disclose the private content of support vectors. This violates the privacy-preserving requirements for some legal or commercial reasons. The problem is that the classifier learned by the SVM inherently violates the privacy. This privacy violation problem will restrict the applicability of the SVM. To the best of our knowledge, there has not been work extending the notion of privacy preservation to tackle this inherent privacy violation problem of the SVM classifier. In this paper, we exploit this privacy violation problem, and propose an approach to postprocess the SVM classifier to transform it to a privacy-preserving classifier which does not disclose the private content of support vectors. The postprocessed SVM classifier without exposing the private content of training data is called Privacy-Preserving SVM Classifier (abbreviated as PPSVC). The PPSVC is designed for the commonly used Gaussian kernel function. It precisely approximates the decision function of the Gaussian kernel SVM classifier without exposing the sensitive attribute values possessed by support vectors. By applying the PPSVC, the SVM classifier is able to be publicly released while preserving privacy. We prove that the PPSVC is robust against adversarial attacks. The experiments on real data sets show that the classification accuracy of the PPSVC is comparable to the original SVM classifier.", "title": "" }, { "docid": "e6c32d3fd1bdbfb2cc8742c9b670ce97", "text": "A framework for skill acquisition is proposed that includes two major stages in the development of a cognitive skill: a declarative stage in which facts about the skill domain are interpreted and a procedural stage in which the domain knowledge is directly embodied in procedures for performing the skill. This general framework has been instantiated in the ACT system in which facts are encoded in a propositional network and procedures are encoded as productions. Knowledge compilation is the process by which the skill transits from the declarative stage to the procedural stage. It consists of the subprocesses of composition, which collapses sequences of productions into single productions, and proceduralization, which embeds factual knowledge into productions. Once proceduralized, further learning processes operate on the skill to make the productions more selective in their range of applications. These processes include generalization, discrimination, and strengthening of productions. Comparisons are made to similar concepts from past learning theories. How these learning mechanisms apply to produce the power law speedup in processing time with practice is discussed.", "title": "" }, { "docid": "641811eac0e8a078cf54130c35fd6511", "text": "Multi-label text classification (MLTC) aims to assign multiple labels to each sample in the dataset. The labels usually have internal correlations. However, traditional methods tend to ignore the correlations between labels. In order to capture the correlations between labels, the sequence-tosequence (Seq2Seq) model views the MLTC task as a sequence generation problem, which achieves excellent performance on this task. However, the Seq2Seq model is not suitable for the MLTC task in essence. 
The reason is that it requires humans to predefine the order of the output labels, while some of the output labels in the MLTC task are essentially an unordered set rather than an ordered sequence. This conflicts with the strict requirement of the Seq2Seq model for the label order. In this paper, we propose a novel sequence-toset framework utilizing deep reinforcement learning, which not only captures the correlations between labels, but also reduces the dependence on the label order. Extensive experimental results show that our proposed method outperforms the competitive baselines by a large margin.", "title": "" }, { "docid": "23bf81699add38814461d5ac3e6e33db", "text": "This paper examined a steering behavior based fatigue monitoring system. The advantages of using steering behavior for detecting fatigue are that these systems measure continuously, cheaply, non-intrusively, and robustly even under extremely demanding environmental conditions. The expected fatigue induced changes in steering behavior are a pattern of slow drifting and fast corrective counter steering. Using advanced signal processing procedures for feature extraction, we computed 3 feature set in the time, frequency and state space domain (a total number of 1251 features) to capture fatigue impaired steering patterns. Each feature set was separately fed into 5 machine learning methods (e.g. Support Vector Machine, K-Nearest Neighbor). The outputs of each single classifier were combined to an ensemble classification value. Finally we combined the ensemble values of 3 feature subsets to a of meta-ensemble classification value. To validate the steering behavior analysis, driving samples are taken from a driving simulator during a sleep deprivation study (N=12). We yielded a recognition rate of 86.1% in classifying slight from strong fatigue.", "title": "" }, { "docid": "f6dd10d4b400234a28b221d0527e71c0", "text": "Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English–German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English– Romanian.", "title": "" }, { "docid": "6fad371eecbb734c1e54b8fb9ae218c4", "text": "Quantitative Susceptibility Mapping (QSM) is a novel MRI based technique that relies on estimates of the magnetic field distribution in the tissue under examination. Several sophisticated data processing steps are required to extract the magnetic field distribution from raw MRI phase measurements. The objective of this review article is to provide a general overview and to discuss several underlying assumptions and limitations of the pre-processing steps that need to be applied to MRI phase data before the final field-to-source inversion can be performed. 
Beginning with the fundamental relation between MRI signal and tissue magnetic susceptibility this review covers the reconstruction of magnetic field maps from multi-channel phase images, background field correction, and provides an overview of state of the art QSM solution strategies.", "title": "" }, { "docid": "13bd6515467934ba7855f981fd4f1efd", "text": "The flourishing synergy arising between organized crimes and the Internet has increased the insecurity of the digital world. How hackers frame their actions? What factors encourage and energize their behavior? These are very important but highly underresearched questions. We draw upon literatures on psychology, economics, international relation and warfare to propose a framework that addresses these questions. We found that countries across the world differ in terms of regulative, normative and cognitive legitimacy to different types of web attacks. Cyber wars and crimes are also functions of the stocks of hacking skills relative to the availability of economic opportunities. An attacking unit’s selection criteria for the target network include symbolic significance and criticalness, degree of digitization of values and weakness in defense mechanisms. Managerial and policy implications are discussed and directions for future research are suggested.", "title": "" }, { "docid": "f28170dcc3c4949c27ee609604c53bc2", "text": "Debates over Cannabis sativa L. and C. indica Lam. center on their taxonomic circumscription and rank. This perennial puzzle has been compounded by the viral spread of a vernacular nomenclature, “Sativa” and “Indica,” which does not correlate with C. sativa and C. indica. Ambiguities also envelop the epithets of wild-type Cannabis: the spontanea versus ruderalis debate (i.e., vernacular “Ruderalis”), as well as another pair of Cannabis epithets, afghanica and kafirstanica. To trace the rise of vernacular nomenclature, we begin with the protologues (original descriptions, synonymies, type specimens) of C. sativa and C. indica. Biogeographical evidence (obtained from the literature and herbarium specimens) suggests 18th–19th century botanists were biased in their assignment of these taxa to field specimens. This skewed the perception of Cannabis biodiversity and distribution. The development of vernacular “Sativa,” “Indica,” and “Ruderalis” was abetted by twentieth century botanists, who ignored original protologues and harbored their own cultural biases. Predominant taxonomic models by Vavilov, Small, Schultes, de Meijer, and Hillig are compared and critiqued. Small’s model adheres closest to protologue data (with C. indica treated as a subspecies). “Sativa” and “Indica” are subpopulations of C. sativa subsp. indica; “Ruderalis” represents a protean assortment of plants, including C. sativa subsp. sativa and recent hybrids.", "title": "" }, { "docid": "c0a75bf3a2d594fb87deb7b9f58a8080", "text": "For WikiText-103 we swept over LSTM hidden sizes {1024, 2048, 4096}, no. LSTM layers {1, 2}, embedding dropout {0, 0.1, 0.2, 0.3}, use of layer norm (Ba et al., 2016b) {True,False}, and whether to share the input/output embedding parameters {True,False} totalling 96 parameters. A single-layer LSTM with 2048 hidden units with tied embedding parameters and an input dropout rate of 0.3 was selected, and we used this same model configuration for the other language corpora. We trained the models on 8 P100 Nvidia GPUs by splitting the batch size into 8 sub-batches, sending them to each GPU and summing the resulting gradients. 
The total batch size used was 512 and a sequence length of 100 was chosen. Gradients were clipped to a maximum norm value of 0.1. We did not pass the state of the LSTM between sequences during training, however the state is passed during evaluation.", "title": "" }, { "docid": "bd9f584e7dbc715327b791e20cd20aa9", "text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.", "title": "" }, { "docid": "ab97caed9c596430c3d76ebda55d5e6e", "text": "A 1.5 GHz low noise amplifier for a Global Positioning System (GPS) receiver has been implemented in a 0.6 /spl mu/m CMOS process. This amplifier provides a forward gain of 22 dB with a noise figure of only 3.5 dB while drawing 30 mW from a 1.5 V supply. To the authors' knowledge, this represents the lowest noise figure reported to date for a CMOS amplifier operating above 1 GHz.", "title": "" }, { "docid": "9f9719336bf6497d7c71590ac61a433b", "text": "College and universities are increasingly using part-time, adjunct instructors on their faculties to facilitate greater fiscal flexibility. However, critics argue that the use of adjuncts is causing the quality of higher education to deteriorate. This paper addresses questions about the impact of adjuncts on student outcomes. Using a unique dataset of public four-year colleges in Ohio, we quantify how having adjunct instructors affects student persistence after the first year. Because students taking courses from adjuncts differ systematically from other students, we use an instrumental variable strategy to address concerns about biases. The findings suggest that, in general, students taking an \"adjunct-heavy\" course schedule in their first semester are adversely affected. They are less likely to persist into their second year. We reconcile these findings with previous research that shows that adjuncts may encourage greater student interest in terms of major choice and subsequent enrollments in some disciplines, most notably fields tied closely to specific professions. The authors are grateful for helpful suggestions from Ronald Ehrenberg and seminar participants at the NBER Labor Studies Meetings. The authors also thank the Ohio Board of Regents for their support during this research project. Rod Chu, Darrell Glenn, Robert Sheehan, and Andy Lechler provided invaluable access and help with the data. Amanda Starc, James Carlson, Erin Riley, and Suzan Akin provided excellent research assistance. All opinions and mistakes are our own. The authors worked equally on the project and are listed alphabetically.", "title": "" }, { "docid": "115fb4dcd7d5a1240691e430cd107dce", "text": "Human motion capture data, which are used to animate animation characters, have been widely used in many areas. 
To satisfy the high-precision requirement, human motion data are captured with a high frequency (120 frames/s) by a high-precision capture system. However, the high frequency and nonlinear structure make the storage, retrieval, and browsing of motion data challenging problems, which can be solved by keyframe extraction. Current keyframe extraction methods do not properly model two important characteristics of motion data, i.e., sparseness and Riemannian manifold structure. Therefore, we propose a new model called joint kernel sparse representation (SR), which is in marked contrast to all current keyframe extraction methods for motion data and can simultaneously model the sparseness and the Riemannian manifold structure. The proposed model completes the SR in a kernel-induced space with a geodesic exponential kernel, whereas the traditional SR cannot model the nonlinear structure of motion data in the Euclidean space. Meanwhile, because of several important modifications to traditional SR, our model can also exploit the relations between joints and solve two problems, i.e., the unreasonable distribution and redundancy of extracted keyframes, which current methods do not solve. Extensive experiments demonstrate the effectiveness of the proposed method.", "title": "" } ]
scidocsrr
ddd0bc1e647d084b60ab53d22620abc3
Large-Scale Identification of Malicious Singleton Files
[ { "docid": "87e583f3256576ffdd95853fc838a620", "text": "The sheer volume of new malware found each day is growing at an exponential pace. This growth has created a need for automatic malware triage techniques that determine what malware is similar, what malware is unique, and why. In this paper, we present BitShred, a system for large-scale malware similarity analysis and clustering, and for automatically uncovering semantic inter- and intra-family relationships within clusters. The key idea behind BitShred is using feature hashing to dramatically reduce the high-dimensional feature spaces that are common in malware analysis. Feature hashing also allows us to mine correlated features between malware families and samples using co-clustering techniques. Our evaluation shows that BitShred speeds up typical malware triage tasks by up to 2,365x and uses up to 82x less memory on a single CPU, all with comparable accuracy to previous approaches. We also develop a parallelized version of BitShred, and demonstrate scalability within the Hadoop framework.", "title": "" } ]
[ { "docid": "553719cb1cb8829ceaf8e0f1a40953ff", "text": "“The distinctive faculties of Man are visibly expressed in his elevated cranial domeda feature which, though much debased in certain savage races, essentially characterises the human species. But, considering that the Neanderthal skull is eminently simial, both in its general and particular characters, I feel myself constrained to believe that the thoughts and desires which once dwelt within it never soared beyond those of a brute. The Andamaner, it is indisputable, possesses but the dimmest conceptions of the existence of the Creator of the Universe: his ideas on this subject, and on his own moral obligations, place him very little above animals of marked sagacity; nevertheless, viewed in connection with the strictly human conformation of his cranium, they are such as to specifically identify him with Homo sapiens. Psychical endowments of a lower grade than those characterising the Andamaner cannot be conceived to exist: they stand next to brute benightedness. (.) Applying the above argument to the Neanderthal skull, and considering . that it more closely conforms to the brain-case of the Chimpanzee, . there seems no reason to believe otherwise than that similar darkness characterised the being to which the fossil belonged” (King, 1864; pp. 96).", "title": "" }, { "docid": "84903166bfeea7433e61c6992f637a25", "text": "Sampling-based optimal planners, such as RRT*, almost-surely converge asymptotically to the optimal solution, but have provably slow convergence rates in high dimensions. This is because their commitment to finding the global optimum compels them to prioritize exploration of the entire problem domain even as its size grows exponentially. Optimization techniques, such as CHOMP, have fast convergence on these problems but only to local optima. This is because they are exploitative, prioritizing the immediate improvement of a path even though this may not find the global optimum of nonconvex cost functions. In this paper, we present a hybrid technique that integrates the benefits of both methods into a single search. A key insight is that applying local optimization to a subset of edges likely to improve the solution avoids the prohibitive cost of optimizing every edge in a global search. This is made possible by Batch Informed Trees (BIT*), an informed global technique that orders its search by potential solution quality. In our algorithm, Regionally Accelerated BIT* (RABIT*), we extend BIT* by using optimization to exploit local domain information and find alternative connections for edges in collision and accelerate the search. This improves search performance in problems with difficult-to-sample homotopy classes (e.g., narrow passages) while maintaining almost-sure asymptotic convergence to the global optimum. Our experiments on simulated random worlds and real data from an autonomous helicopter show that on certain difficult problems, RABIT* converges 1.8 times faster than BIT*. Qualitatively, in problems with difficult-to-sample homotopy classes, we show that RABIT* is able to efficiently transform paths to avoid obstacles.", "title": "" }, { "docid": "adf9646a9c4c9e19a18f35a949d59f3d", "text": "In this study we present a review of the emerging eld of meta-knowledge components as practised over the past decade among a variety of practitioners. 
We use the artificially-defined term `meta-knowledge' to encompass all those different but overlapping notions used by the Artificial Intelligence and Software Engineering communities to represent reusable modelling frameworks: ontologies, problem-solving methods, experience factories and experience bases, patterns, to name a few. We then elaborate on how meta-knowledge is deployed in the context of system's design to improve its reliability by consistency checking, enhance its reuse potential, and manage its knowledge sharing. We speculate on its usefulness and explore technologies for supporting deployment of meta-knowledge. We argue that, despite the different approaches being followed in systems design by divergent communities, meta-knowledge is present in all cases, in a tacit or explicit form, and its utilisation depends on pragmatic aspects which we try to identify and critically review on criteria of effectiveness. keywords: Ontologies, Problem-Solving Methods, Experienceware, Patterns, Design Types, Cost-Effective Analysis.", "title": "" }, { "docid": "9c7bafb5279bca4deb90d603e8b59cfe", "text": "BACKGROUND\nVirtual reality (VR) is an evolving technology that has been applied in various aspects of medicine, including the treatment of phobia disorders, pain distraction interventions, surgical training, and medical education. These applications have served to demonstrate the various assets offered through the use of VR.\n\n\nOBJECTIVE\nTo provide a background and rationale for the application of VR to neuropsychological assessment.\n\n\nMETHODS\nA brief introduction to VR technology and a review of current ongoing neuropsychological research that integrates the use of this technology.\n\n\nCONCLUSIONS\nVR offers numerous assets that may enhance current neuropsychological assessment protocols and address many of the limitations faced by our traditional methods.", "title": "" }, { "docid": "cab0fd454701c0b302040a1875ab2865", "text": "They are susceptible to a variety of attacks, including node capture, physical tampering, and denial of service, while prompting a range of fundamental research challenges.", "title": "" }, { "docid": "78ae476295aa266a170a981a34767bdd", "text": "Darwin did not focus on deception. Only a few sentences in his book mentioned the issue. One of them raised the very interesting question of whether it is difficult to voluntarily inhibit the emotional expressions that are most difficult to voluntarily fabricate. Another suggestion was that it would be possible to unmask a fabricated expression by the absence of the difficult-to-voluntarily-generate facial actions. Still another was that during emotion body movements could be more easily suppressed than facial expression. Research relevant to each of Darwin's suggestions is reviewed, as is other research on deception that Darwin did not foresee.", "title": "" }, { "docid": "663925d096212c6ea6685db879581551", "text": "Deep neural networks have shown promise in collaborative filtering (CF). However, existing neural approaches are either user-based or item-based, which cannot leverage all the underlying information explicitly. We propose CF-UIcA, a neural co-autoregressive model for CF tasks, which exploits the structural correlation in the domains of both users and items. The co-autoregression allows extra desired properties to be incorporated for different tasks. Furthermore, we develop an efficient stochastic learning algorithm to handle large scale datasets.
We evaluate CF-UIcA on two popular benchmarks: MovieLens 1M and Netflix, and achieve state-of-the-art performance in both rating prediction and top-N recommendation tasks, which demonstrates the effectiveness of CF-UIcA.", "title": "" }, { "docid": "4105ebe68ca25c863f77dde3ff94dcdc", "text": "This paper deals with the increasingly important issue of proper handling of information security for electric power utilities. It is based on the efforts of CIGRE Joint Working Group (JWG) D2/B3/C2-01 on \"Security for Information Systems and Intranets in Electric Power System\" carried out between 2003 and 2006. The JWG has produced a technical brochure (TB), where the purpose to raise the awareness of information and cybersecurity in electric power systems, and gives some guidance on how to solve the security problem by focusing on security domain modeling, risk assessment methodology, and security framework building. Here in this paper, the focus is on the issue of awareness and to highlight some steps to achieve a framework for cybersecurity management. Also, technical considerations of some communication systems for substation automation are studied. Finally, some directions for further works in this vast area of information and cybersecurity are given.", "title": "" }, { "docid": "aff44289b241cdeef627bba97b68a505", "text": "Personalization is a ubiquitous phenomenon in our daily online experience. While such technology is critical for helping us combat the overload of information we face, in many cases, we may not even realize that our results are being tailored to our personal tastes and preferences. Worse yet, when such a system makes a mistake, we have little recourse to correct it.\n In this work, we propose a framework for addressing this problem by developing a new user-interpretable feature set upon which to base personalized recommendations. These features, which we call badges, represent fundamental traits of users (e.g., \"vegetarian\" or \"Apple fanboy\") inferred by modeling the interplay between a user's behavior and self-reported identity. Specifically, we consider the microblogging site Twitter, where users provide short descriptions of themselves in their profiles, as well as perform actions such as tweeting and retweeting. Our approach is based on the insight that we can define badges using high precision, low recall rules (e.g., \"Twitter profile contains the phrase 'Apple fanboy'\"), and with enough data, generalize to other users by observing shared behavior. We develop a fully Bayesian, generative model that describes this interaction, while allowing us to avoid the pitfalls associated with having positive-only data.\n Experiments on real Twitter data demonstrate the effectiveness of our model at capturing rich and interpretable user traits that can be used to provide transparency for personalization.", "title": "" }, { "docid": "13177a7395eed80a77571bd02a962bc9", "text": "Orexin-A and orexin-B are neuropeptides originally identified as endogenous ligands for two orphan G-protein-coupled receptors. Orexin neuropeptides (also known as hypocretins) are produced by a small group of neurons in the lateral hypothalamic and perifornical areas, a region classically implicated in the control of mammalian feeding behavior. Orexin neurons project throughout the central nervous system (CNS) to nuclei known to be important in the control of feeding, sleep-wakefulness, neuroendocrine homeostasis, and autonomic regulation. 
orexin mRNA expression is upregulated by fasting and insulin-induced hypoglycemia. C-fos expression in orexin neurons, an indicator of neuronal activation, is positively correlated with wakefulness and negatively correlated with rapid eye movement (REM) and non-REM sleep states. Intracerebroventricular administration of orexins has been shown to significantly increase food consumption, wakefulness, and locomotor activity in rodent models. Conversely, an orexin receptor antagonist inhibits food consumption. Targeted disruption of the orexin gene in mice produces a syndrome remarkably similar to human and canine narcolepsy, a sleep disorder characterized by excessive daytime sleepiness, cataplexy, and other pathological manifestations of the intrusion of REM sleep-related features into wakefulness. Furthermore, orexin knockout mice are hypophagic compared with weight and age-matched littermates, suggesting a role in modulating energy metabolism. These findings suggest that the orexin neuropeptide system plays a significant role in feeding and sleep-wakefulness regulation, possibly by coordinating the complex behavioral and physiologic responses of these complementary homeostatic functions.", "title": "" }, { "docid": "05eb344fb8b671542f6f0228774a5524", "text": "This paper presents an improved hardware structure for the computation of the Whirlpool hash function. By merging the round key computation with the data compression and by using embedded memories to perform part of the Galois Field (28) multiplication, a core can be implemented in just 43% of the area of the best current related art while achieving a 12% higher throughput. The proposed core improves the Throughput per Slice compared to the state of the art by 160%, achieving a throughput of 5.47 Gbit/s with 2110 slices and 32 BRAMs on a VIRTEX II Pro FPGA. Results for a real application are also presented by considering a polymorphic computational approach.", "title": "" }, { "docid": "71d065cd109392ae41bc96fe0cd2e0f4", "text": "Absence of an upper limb leads to severe impairments in everyday life, which can further influence the social and mental state. For these reasons, early developments in cosmetic and body-driven prostheses date some centuries ago, and they have been evolving ever since. Following the end of the Second World War, rapid developments in technology resulted in powered myoelectric hand prosthetics. In the years to come, these devices were common on the market, though they still suffered high user abandonment rates. The reasons for rejection were trifold - insufficient functionality of the hardware, fragile design, and cumbersome control. In the last decade, both academia and industry have reached major improvements concerning technical features of upper limb prosthetics and methods for their interfacing and control. Advanced robotic hands are offered by several vendors and research groups, with a variety of active and passive wrist options that can be articulated across several degrees of freedom. Nowadays, elbow joint designs include active solutions with different weight and power options. Control features are getting progressively more sophisticated, offering options for multiple sensor integration and multi-joint articulation. Latest developments in socket designs are capable of facilitating implantable and multiple surface electromyography sensors in both traditional and osseointegration-based systems. 
Novel surgical techniques in combination with modern, sophisticated hardware are enabling restoration of dexterous upper limb functionality. This article is aimed at reviewing the latest state of the upper limb prosthetic market, offering insights on the accompanying technologies and techniques. We also examine the capabilities and features of some of academia's flagship solutions and methods.", "title": "" }, { "docid": "359d3e06c221e262be268a7f5b326627", "text": "A method for the synthesis of multicoupled resonators filters with frequency-dependent couplings is presented. A circuit model of the filter that accurately represents the frequency responses over a very wide frequency band is postulated. The two-port parameters of the filter based on the circuit model are obtained by circuit analysis. The values of the circuit elements are synthesized by equating the two-port parameters obtained from the circuit analysis and the filtering characteristic function. Solutions similar to the narrowband case (where all the couplings are assumed frequency independent) are obtained analytically when all coupling elements are either inductive or capacitive. The synthesis technique is generalized to include all types of coupling elements. Several examples of wideband filters are given to demonstrate the synthesis techniques.", "title": "" }, { "docid": "c6f17a0d5f91c3cab9183bbc5fa2dfc3", "text": "In human beings, head is one of the most important parts. Injuries in this part can cause serious damages to overall health. In some cases, they can be fatal. The present paper analyses the deformations of a helmet mounted on a human head, using finite element method. It studies the amount of von Mises pressure and stress caused by a vertical blow from above on the skull. The extant paper aims at developing new methods for improving the design and achieving more energy absorption by applying more appropriate models. In this study, a thermoplastic damper is applied and modelled in order to reduce the amount of energy transferred to the skull and to minimize the damages inflicted on human head.", "title": "" }, { "docid": "a87b48ee446cbda34e8d878cffbd19bb", "text": "Introduction. In spite of significant changes in the management policies of intersexuality, clinical evidence show that not all pubertal or adult individuals live according to the assigned sex during infancy. Aim. The purpose of this study was to analyze the clinical management of an individual diagnosed as a female pseudohermaphrodite with congenital adrenal hyperplasia (CAH) simple virilizing form four decades ago but who currently lives as a monogamous heterosexual male. Methods. We studied the clinical files spanning from 1965 to 1991 of an intersex individual. In addition, we conducted a magnetic resonance imaging (MRI) study of the abdominoplevic cavity and a series of interviews using the oral history method. Main Outcome Measures. Our analysis is based on the clinical evidence that led to the CAH diagnosis in the 1960s in light of recent clinical testing to confirm such diagnosis. Results. Analysis of reported values for 17-ketosteroids, 17-hydroxycorticosteroids, from 24-hour urine samples during an 8-year period showed poor adrenal suppression in spite of adherence to treatment. A recent MRI study confirmed the presence of hyperplastic adrenal glands as well as the presence of a prepubertal uterus. Semistructured interviews with the individual confirmed a life history consistent with a male gender identity. Conclusions. 
Although the American Academy of Pediatrics recommends that XX intersex individuals with CAH should be assigned to the female sex, this practice harms some individuals as they may self-identify as males. In the absence of comorbid psychiatric factors, the discrepancy between infant sex assignment and gender identity later in life underlines the need for a reexamination of current standards of care for individuals diagnosed with CAH. Jorge JC, Echeverri C, Medina Y, and Acevedo P. Male gender identity in an xx individual with congenital adrenal hyperplasia. J Sex Med 2008;5:122–131.", "title": "" }, { "docid": "8f5a38fe598abc5f3bdc3fd01fb506b3", "text": "Existing region-based object detectors are limited to regions with fixed box geometry to represent objects, even if those are highly non-rectangular. In this paper we introduce DP-FCN, a deep model for object detection which explicitly adapts to shapes of objects with deformable parts. Without additional annotations, it learns to focus on discriminative elements and to align them, and simultaneously brings more invariance for classification and geometric information to refine localization. DP-FCN is composed of three main modules: a Fully Convolutional Network to efficiently maintain spatial resolution, a deformable part-based RoI pooling layer to optimize positions of parts and build invariance, and a deformation-aware localization module explicitly exploiting displacements of parts to improve accuracy of bounding box regression. We experimentally validate our model and show significant gains. DP-FCN achieves state-of-the-art performances of 83.1% and 80.9% on PASCAL VOC 2007 and 2012 with VOC data only.", "title": "" }, { "docid": "eb271acef996a9ba0f84a50b5055953b", "text": "Makeup is widely used to improve facial attractiveness and is well accepted by the public. However, different makeup styles will result in significant facial appearance changes. It remains a challenging problem to match makeup and non-makeup face images. This paper proposes a learning from generation approach for makeup-invariant face verification by introducing a bi-level adversarial network (BLAN). To alleviate the negative effects from makeup, we first generate non-makeup images from makeup ones, and then use the synthesized nonmakeup images for further verification. Two adversarial networks in BLAN are integrated in an end-to-end deep network, with the one on pixel level for reconstructing appealing facial images and the other on feature level for preserving identity information. These two networks jointly reduce the sensing gap between makeup and non-makeup images. Moreover, we make the generator well constrained by incorporating multiple perceptual losses. Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and can produce photo-realistic non-makeup", "title": "" }, { "docid": "dc1053623155e38f00bf70d7da145d5b", "text": "Genetic programming is combined with program analysis methods to repair bugs in off-the-shelf legacy C programs. Fitness is defined using negative test cases that exercise the bug to be repaired and positive test cases that encode program requirements. Once a successful repair is discovered, structural differencing algorithms and delta debugging methods are used to minimize its size. 
Several modifications to the GP technique contribute to its success: (1) genetic operations are localized to the nodes along the execution path of the negative test case; (2) high-level statements are represented as single nodes in the program tree; (3) genetic operators use existing code in other parts of the program, so new code does not need to be invented. The paper describes the method, reviews earlier experiments that repaired 11 bugs in over 60,000 lines of code, reports results on new bug repairs, and describes experiments that analyze the performance and efficacy of the evolutionary components of the algorithm.", "title": "" }, { "docid": "b43b1265aa990052a238f63991730cc7", "text": "This paper focuses on placement and chaining of virtualized network functions (VNFs) in Network Function Virtualization Infrastructures (NFVI) for emerging software networks serving multiple tenants. Tenants can request network services to the NFVI in the form of service function chains (in the IETF SFC sense) or VNF Forwarding Graphs (VNF-FG in the case of ETSI) in support of their applications and business. This paper presents efficient algorithms to provide solutions to this NP-Hard chain placement problem to support NFVI providers. Cost-efficient and improved scalability multi-stage graph and 2-Factor algorithms are presented and shown to find near-optimal solutions in few seconds for large instances.", "title": "" }, { "docid": "d99fdf7b559d5609bec3c179dee3cd58", "text": "This study aimed to describe dietary habits of Syrian adolescents attending secondary schools in Damascus and the surrounding areas. A descriptive, cross-sectional study was carried out on 3507 students in 2001. A stratified, 2-stage random cluster sample was used to sample the students. The consumption pattern of food items during the previous week was described. More than 50% of the students said that they had not consumed green vegetables and more than 35% had not consumed meat. More than 35% said that they consumed cheese and milk at least once a day. Only 11.8% consumed fruit 3 times or more daily. Potential determinants of the pattern of food consumption were analysed. Weight control practices and other eating habits were also described.", "title": "" } ]
scidocsrr
8037941ca0ae544a972c24e9b4ca9403
Robust Lexical Features for Improved Neural Network Named-Entity Recognition
[ { "docid": "ebc8966779ba3b9e6a768f4c462093f5", "text": "Most state-of-the-art approaches for named-entity recognition (NER) use semi supervised information in the form of word clusters and lexicons. Recently neural network-based language models have been explored, as they as a byproduct generate highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition in both CoNLL and Ontonotes NER. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003—significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data.", "title": "" }, { "docid": "2afbb4e8963b9e6953fd6f7f8c595c06", "text": "Large-scale linguistically annotated corpora have played a crucial role in advancing the state of the art of key natural language technologies such as syntactic, semantic and discourse analyzers, and they serve as training data as well as evaluation benchmarks. Up till now, however, most of the evaluation has been done on monolithic corpora such as the Penn Treebank, the Proposition Bank. As a result, it is still unclear how the state-of-the-art analyzers perform in general on data from a variety of genres or domains. The completion of the OntoNotes corpus, a large-scale, multi-genre, multilingual corpus manually annotated with syntactic, semantic and discourse information, makes it possible to perform such an evaluation. This paper presents an analysis of the performance of publicly available, state-of-the-art tools on all layers and languages in the OntoNotes v5.0 corpus. This should set the benchmark for future development of various NLP components in syntax and semantics, and possibly encourage research towards an integrated system that makes use of the various layers jointly to improve overall performance.", "title": "" }, { "docid": "7ce314babce8509724f05beb4c3e5cdd", "text": "This paper presents WikiCoref, an English corpus annotated for anaphoric relations, where all documents are from the English version of Wikipedia. Our annotation scheme follows the one of OntoNotes with a few disparities. We annotated each markable with coreference type, mention type and the equivalent Freebase topic. Since most similar annotation efforts concentrate on very specific types of written text, mainly newswire, there is a lack of resources for otherwise over-used Wikipedia texts. The corpus described in this paper addresses this issue. We present a freely available resource we initially devised for improving coreference resolution algorithms dedicated to Wikipedia texts. Our corpus has no restriction on the topics of the documents being annotated, and documents of various sizes have been considered for annotation.", "title": "" } ]
[ { "docid": "2b77ac8576a02ddf79e5a447c3586215", "text": "A new scheme to sample signals defined on the nodes of a graph is proposed. The underlying assumption is that such signals admit a sparse representation in a frequency domain related to the structure of the graph, which is captured by the so-called graph-shift operator. Instead of using the value of the signal observed at a subset of nodes to recover the signal in the entire graph, the sampling scheme proposed here uses as input observations taken at a single node. The observations correspond to sequential applications of the graph-shift operator, which are linear combinations of the information gathered by the neighbors of the node. When the graph corresponds to a directed cycle (which is the support of time-varying signals), our method is equivalent to the classical sampling in the time domain. When the graph is more general, we show that the Vandermonde structure of the sampling matrix, critical when sampling time-varying signals, is preserved. Sampling and interpolation are analyzed first in the absence of noise, and then noise is considered. We then study the recovery of the sampled signal when the specific set of frequencies that is active is not known. Moreover, we present a more general sampling scheme, under which, either our aggregation approach or the alternative approach of sampling a graph signal by observing the value of the signal at a subset of nodes can be both viewed as particular cases. Numerical experiments illustrating the results in both synthetic and real-world graphs close the paper.", "title": "" }, { "docid": "abedd6f0896340a190750666b1d28d91", "text": "This study aimed to characterize the neural generators of the early components of the visual evoked potential (VEP) to isoluminant checkerboard stimuli. Multichannel scalp recordings, retinotopic mapping and dipole modeling techniques were used to estimate the locations of the cortical sources giving rise to the early C1, P1, and N1 components. Dipole locations were matched to anatomical brain regions visualized in structural magnetic resonance imaging (MRI) and to functional MRI (fMRI) activations elicited by the same stimuli. These converging methods confirmed previous reports that the C1 component (onset latency 55 msec; peak latency 90-92 msec) was generated in the primary visual area (striate cortex; area 17). The early phase of the P1 component (onset latency 72-80 msec; peak latency 98-110 msec) was localized to sources in dorsal extrastriate cortex of the middle occipital gyrus, while the late phase of the P1 component (onset latency 110-120 msec; peak latency 136-146 msec) was localized to ventral extrastriate cortex of the fusiform gyrus. Among the N1 subcomponents, the posterior N150 could be accounted for by the same dipolar source as the early P1, while the anterior N155 was localized to a deep source in the parietal lobe. These findings clarify the anatomical origin of these VEP components, which have been studied extensively in relation to visual-perceptual processes.", "title": "" }, { "docid": "d40a1b72029bdc8e00737ef84fdf5681", "text": "— Ability of deep networks to extract high level features and of recurrent networks to perform time-series inference have been studied. In view of universality of one hidden layer network at approximating functions under weak constraints, the benefit of multiple layers is to enlarge the space of dynamical systems approximated or, given the space, reduce the number of units required for a certain error. 
Traditionally shallow networks with manually engineered features are used, back-propagation extent is limited to one and attempt to choose a large number of hidden units to satisfy the Markov condition is made. In case of Markov models, it has been shown that many systems need to be modeled as higher order. In the present work, we present deep recurrent networks with longer back-propagation through time extent as a solution to modeling systems that are high order and to predicting ahead. We study epileptic seizure suppression electro-stimulator. Extraction of manually engineered complex features and prediction employing them has not allowed small low-power implementations as, to avoid possibility of surgery, extraction of any features that may be required has to be included. In this solution, a recurrent neural network performs both feature extraction and prediction. We prove analytically that adding hidden layers or increasing backpropagation extent increases the rate of decrease of approximation error. A Dynamic Programming (DP) training procedure employing matrix operations is derived. DP and use of matrix operations makes the procedure efficient particularly when using data-parallel computing. The simulation studies show the geometry of the parameter space, that the network learns the temporal structure, that parameters converge while model output displays same dynamic behavior as the system and greater than .99 Average Detection Rate on all real seizure data tried.", "title": "" }, { "docid": "ec5148e728e1cce8058638d500f3804e", "text": "Identifying extremist-associated conversations on Twitter is an open problem. Extremist groups have been leveraging Twitter (1) to spread their message and (2) to gain recruits. In this paper, we investigate the problem of determining whether a particular Twitter user engages in extremist conversation. We explore different Twitter metrics as proxies for misbehavior, including the sentiment of the user's published tweets, the polarity of the user's ego-network, and user mentions. We compare different known classifiers using these different features on manually annotated tweets involving the ISIS extremist group and find that combining all these features leads to the highest accuracy for detecting extremism on Twitter.", "title": "" }, { "docid": "a441c8669fa094658e95aeddfe88f86d", "text": "It has been claimed that recent developments in the research on the efficiency of code generation and on graphical input/output interfacing have made it possible to use a functional language to write efficient programs that can compete with industrial applications written in a traditional imperative language. As one of the early steps in verifying this claim, this paper describes a first attempt to implement a spreadsheet in a lazy, purely functional language. An interesting aspect of the design is that the language with which the user specifies the relations between the cells of the spreadsheet is itself a lazy, purely functional and higher order language as well, and not some special dedicated spreadsheet language. Another interesting aspect of the design is that the spreadsheet incorporates symbolic reduction and normalisation of symbolic expressions (including equations). This introduces the possibility of asking the system to prove equality of symbolic cell expressions: a property which can greatly enhance the reliability of a particular user-defined spreadsheet. The resulting application is by no means a fully mature product. 
It is not intended as a competitor to commercially available spreadsheets. However, with its higher order lazy functional language and its symbolic capabilities it may serve as an interesting candidate to fill the gap between calculators with purely functional expressions and full-featured spreadsheets with dedicated non-functional spreadsheet languages. This paper describes the global design and important implementation issues in the development of the application. The experience gained and lessons learnt during this project are treated. Performance and use of the resulting application are compared with related work.", "title": "" }, { "docid": "653f7e6f8aac3464eeac88a5c2f21f2e", "text": "The decentralized electronic currency system Bitcoin gives the possibility to execute transactions via direct communication between users, without the need to resort to third parties entrusted with legitimizing the concerned monetary value. In its current state of development a recent, fast-changing, volatile and highly mediatized technology the discourses that unfold within spaces of information and discussion related to Bitcoin can be analysed in light of their ability to produce at once the representations of value, the practices according to which it is transformed and evolves, and the devices allowing for its implementation. The literature on the system is a testament to how the Bitcoin debates do not merely spread, communicate and diffuse representation of this currency, but are closely intertwined with the practice of the money itself. By focusing its attention on a specific corpus, that of expert discourse, the article shows how, introducing and discussing a specific device, dynamic or operation as being in some way related to trust, this expert knowledge contributes to the very definition and shaping of this trust within the Bitcoin system ultimately contributing to perform the shared definition of its value as a currency.", "title": "" }, { "docid": "9826dcd8970429b1f3398128eec4335b", "text": "This article provides an overview of recent contributions to the debate on the ethical use of previously collected biobank samples, as well as a country report about how this issue has been regulated in Spain by means of the new Biomedical Research Act, enacted in the summer of 2007. By contrasting the Spanish legal situation with the wider discourse of international bioethics, we identify and discuss a general trend moving from the traditional requirements of informed consent towards new models more favourable to research in a post-genomic context.", "title": "" }, { "docid": "bc5c008b5e443b83b2a66775c849fffb", "text": "Continuous glucose monitoring (CGM) sensors are portable devices that allow measuring and visualizing the glucose concentration in real time almost continuously for several days and are provided with hypo/hyperglycemic alerts and glucose trend information. CGM sensors have revolutionized Type 1 diabetes (T1D) management, improving glucose control when used adjunctively to self-monitoring blood glucose systems. Furthermore, CGM devices have stimulated the development of applications that were impossible to create without a continuous-time glucose signal, e.g., real-time predictive alerts of hypo/hyperglycemic episodes based on the prediction of future glucose concentration, automatic basal insulin attenuation methods for hypoglycemia prevention, and the artificial pancreas. 
However, CGM sensors' lack of accuracy and reliability limited their usability in the clinical practice, calling upon the academic community for the development of suitable signal processing methods to improve CGM performance. The aim of this paper is to review the past and present algorithmic challenges of CGM sensors, to show how they have been tackled by our research group, and to identify the possible future ones.", "title": "" }, { "docid": "ff5700d97ad00fcfb908d90b56f6033f", "text": "How to design a secure steganography method is the problem that researchers have always been concerned about. Traditionally, the steganography method is designed in a heuristic way which does not take into account the detection side (steganalysis) fully and automatically. In this paper, we propose a new strategy that generates more suitable and secure covers for steganography with adversarial learning scheme, named SSGAN. The proposed architecture has one generative network called G, and two discriminative networks called D and S, among which the former evaluates the visual quality of the generated images for steganography and the latter assesses their suitableness for information hiding. Different from the existing work, we use WGAN instead of GAN for the sake of faster convergence speed, more stable training, and higher quality images, and also re-design the S net with more sophisticated steganalysis network. The experimental results prove the effectiveness of the proposed method.", "title": "" }, { "docid": "42d3f666325c3c9e2d61fcbad3c6659a", "text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.", "title": "" }, { "docid": "062ef386998d3c47e1f3845dec55499c", "text": "The purpose of this study was to examine the effectiveness of the Brain Breaks® Physical Activity Solutions in changing attitudes toward physical activity of school children in a community in Poland. In 2015, a sample of 326 pupils aged 9-11 years old from 19 classes at three selected primary schools were randomly assigned to control and experimental groups within the study. During the classes, children in the experimental group performed physical activities two times per day in three to five minutes using Brain Breaks® videos for four months, while the control group did not use the videos during the test period. Students' attitudes toward physical activities were assessed before and after the intervention using the \"Attitudes toward Physical Activity Scale\". Repeated measures of ANOVA were used to examine the change from pre- to post-intervention. Overall, a repeated measures ANOVA indicated time-by-group interaction effects in 'Self-efficacy on learning with video exercises', F(1.32) = 75.28, p = 0.00, η2 = 0.19. Although the changes are minor, there were benefits of the intervention. 
It may be concluded that HOPSports Brain Breaks® Physical Activity Program contributes to better self-efficacy on learning while using video exercise of primary school children.", "title": "" }, { "docid": "3ca04efcb370e8a30ab5ad42d1d2d047", "text": "The exceptionally adhesive foot of the gecko remains clean in dirty environments by shedding contaminants with each step. Synthetic gecko-inspired adhesives have achieved similar attachment strengths to the gecko on smooth surfaces, but the process of contact self-cleaning has yet to be effectively demonstrated. Here, we present the first gecko-inspired adhesive that has matched both the attachment strength and the contact self-cleaning performance of the gecko's foot on a smooth surface. Contact self-cleaning experiments were performed with three different sizes of mushroom-shaped elastomer microfibres and five different sizes of spherical silica contaminants. Using a load-drag-unload dry contact cleaning process similar to the loads acting on the gecko foot during locomotion, our fully contaminated synthetic gecko adhesives could recover lost adhesion at a rate comparable to that of the gecko. We observed that the relative size of contaminants to the characteristic size of the microfibres in the synthetic adhesive strongly determined how and to what degree the adhesive recovered from contamination. Our approximate model and experimental results show that the dominant mechanism of contact self-cleaning is particle rolling during the drag process. Embedding of particles between adjacent fibres was observed for particles with diameter smaller than the fibre tips, and further studied as a temporary cleaning mechanism. By incorporating contact self-cleaning capabilities, real-world applications of synthetic gecko adhesives, such as reusable tapes, clothing closures and medical adhesives, would become feasible.", "title": "" }, { "docid": "7350c0433fe1330803403e6aa03a2f26", "text": "An introduction is provided to Multi-Entity Bayesian Networks (MEBN), a logic system that integrates First Order Logic (FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated sub-structures. Knowledge is encoded as a collection of Bayesian network fragments (MFrags) that can be instantiated and combined to form highly complex situation-specific Bayesian networks. A MEBN theory (MTheory) implicitly represents a joint probability distribution over possibly unbounded numbers of hypotheses, and uses Bayesian learning to refine a knowledge base as observations accrue. MEBN provides a logical foundation for the emerging collection of highly expressive probability-based languages. A running example illustrates the representation and reasoning power of the MEBN formalism.", "title": "" }, { "docid": "ef99799bf977ba69a63c9f030fc65c7f", "text": "In this paper, we propose a novel transductive learning framework named manifold-ranking based image retrieval (MRBIR). Given a query image, MRBIR first makes use of a manifold ranking algorithm to explore the relationship among all the data points in the feature space, and then measures relevance between the query and all the images in the database accordingly, which is different from traditional similarity metrics based on pair-wise distance. 
In relevance feedback, if only positive examples are available, they are added to the query set to improve the retrieval result; if examples of both labels can be obtained, MRBIR discriminately spreads the ranking scores of positive and negative examples, considering the asymmetry between these two types of images. Furthermore, three active learning methods are incorporated into MRBIR, which select images in each round of relevance feedback according to different principles, aiming to maximally improve the ranking result. Experimental results on a general-purpose image database show that MRBIR attains a significant improvement over existing systems from all aspects.", "title": "" }, { "docid": "b75dd43655a70eaf0aaef43826de4337", "text": "Plagiarism detection has been considered as a classification problem which can be approximated with intrinsic strategies, considering self-based information from a given document, and external strategies, considering comparison techniques between a suspicious document and different sources. In this work, both intrinsic and external approaches for plagiarism detection are presented. First, the main contribution for intrinsic plagiarism detection is associated to the outlier detection approach for detecting changes in the author’s style. Then, the main contribution for the proposed external plagiarism detection is the space reduction technique to reduce the complexity of this plagiarism detection task. Results show that our approach is highly competitive with respect to the leading research teams in plagiarism detection.", "title": "" }, { "docid": "4cd605375f5d27c754e4a21b81b39f1a", "text": "The dominant paradigm in drug discovery is the concept of designing maximally selective ligands to act on individual drug targets. However, many effective drugs act via modulation of multiple proteins rather than single targets. Advances in systems biology are revealing a phenotypic robustness and a network structure that strongly suggests that exquisitely selective compounds, compared with multitarget drugs, may exhibit lower than desired clinical efficacy. This new appreciation of the role of polypharmacology has significant implications for tackling the two major sources of attrition in drug development--efficacy and toxicity. Integrating network biology and polypharmacology holds the promise of expanding the current opportunity space for druggable targets. However, the rational design of polypharmacology faces considerable challenges in the need for new methods to validate target combinations and optimize multiple structure-activity relationships while maintaining drug-like properties. Advances in these areas are creating the foundation of the next paradigm in drug discovery: network pharmacology.", "title": "" }, { "docid": "73ec43c5ed8e245d0a1ff012a6a67f76", "text": "There is much signal processing devoted to detection and estimation. Detection is the task of determining if a specific signal set is present in an observation, while estimation is the task of obtaining the values of the parameters describing the signal. Often the signal is complicated or is corrupted by interfering signals or noise. To facilitate the detection and estimation of signal sets, the observation is decomposed by a basis set which spans the signal space [1]. For many problems of engineering interest, the class of signals being sought are periodic, which leads quite naturally to a decomposition by a basis consisting of simple periodic functions, the sines and cosines.
The classic Fourier transform is the mechanism by which we are able to perform this decomposition. By necessity, every observed signal we process must be of finite extent. The extent may be adjustable and selectable, but it must be finite. Processing a finite-duration observation imposes interesting and interacting considerations on the harmonic analysis. These considerations include detectability of tones in the presence of nearby strong tones, resolvability of similar-strength nearby tones, resolvability of shifting tones, and biases in estimating the parameters of any of the aforementioned signals. For practicality, the data we process are N uniformly spaced samples of the observed signal. For convenience, N is highly composite, and we will assume N is even. The harmonic estimates we obtain through the discrete Fourier transform (DFT) are N uniformly spaced samples of the associated periodic spectra. This approach is elegant and attractive when the processing scheme is cast as a spectral decomposition in an N-dimensional orthogonal vector space [2]. Unfortunately, in many practical situations, to obtain meaningful results this elegance must be compromised.", "title": "" }, { "docid": "f3820e94a204cd07b04e905a9b1e4834", "text": "Successful analysis of player skills in video games has important impacts on the process of enhancing player experience without undermining their continuous skill development. Moreover, player skill analysis becomes more intriguing in team-based video games because such form of study can help discover useful factors in effective team formation. In this paper, we consider the problem of skill decomposition in MOBA (MultiPlayer Online Battle Arena) games, with the goal to understand what player skill factors are essential for the outcome of a game match. To understand the construct of MOBA player skills, we utilize various skill-based predictive models to decompose player skills into interpretative parts, the impact of which are assessed in statistical terms. We apply this analysis approach on two widely known MOBAs, namely League of Legends (LoL) and Defense of the Ancients 2 (DOTA2). The finding is that base skills of in-game avatars, base skills of players, and players’ champion-specific skills are three prominent skill components influencing LoL’s match outcomes, while those of DOTA2 are mainly impacted by in-game avatars’ base skills but not much by the other two.", "title": "" }, { "docid": "8eee03189f757493797ed5be5f72c0fa", "text": "The long-term memory of most connectionist systems lies entirely in the weights of the system. Since the number of weights is typically fixed, this bounds the total amount of knowledge that can be learned and stored. Though this is not normally a problem for a neural network designed for a specific task, such a bound is undesirable for a system that continually learns over an open range of domains. To address this, we describe a lifelong learning system that leverages a fast, though non-differentiable, content-addressable memory which can be exploited to encode both a long history of sequential episodic knowledge and semantic knowledge over many episodes for an unbounded number of domains.
This opens the door for investigation into transfer learning, and leveraging prior knowledge that has been learned over a lifetime of experiences to new domains.", "title": "" }, { "docid": "b95e6cc4d0e30e0f14ecc757e583502e", "text": "Over the last decade, it has become well-established that a captcha’s ability to withstand automated solving lies in the difficulty of segmenting the image into individual characters. The standard approach to solving captchas automatically has been a sequential process wherein a segmentation algorithm splits the image into segments that contain individual characters, followed by a character recognition step that uses machine learning. While this approach has been effective against particular captcha schemes, its generality is limited by the segmentation step, which is hand-crafted to defeat the distortion at hand. No general algorithm is known for the character collapsing anti-segmentation technique used by most prominent real world captcha schemes. This paper introduces a novel approach to solving captchas in a single step that uses machine learning to attack the segmentation and the recognition problems simultaneously. Performing both operations jointly allows our algorithm to exploit information and context that is not available when they are done sequentially. At the same time, it removes the need for any hand-crafted component, making our approach generalize to new captcha schemes where the previous approach can not. We were able to solve all the real world captcha schemes we evaluated accurately enough to consider the scheme insecure in practice, including Yahoo (5.33%) and ReCaptcha (33.34%), without any adjustments to the algorithm or its parameters. Our success against the Baidu (38.68%) and CNN (51.09%) schemes that use occluding lines as well as character collapsing leads us to believe that our approach is able to defeat occluding lines in an equally general manner. The effectiveness and universality of our results suggest that combining segmentation and recognition is the next evolution of captcha solving, and that it supersedes the sequential approach used in earlier works.", "title": "" } ]
scidocsrr
d1b33ce49666fa755a6cd629a1faaf25
Simplified modeling and identification approach for model-based control of parallel mechanism robot leg
[ { "docid": "69e381983f7af393ee4bbb62bb587a4e", "text": "This paper presents the design principles for highly efficient legged robots, the implementation of the principles in the design of the MIT Cheetah, and the analysis of the high-speed trotting experimental results. The design principles were derived by analyzing three major energy-loss mechanisms in locomotion: heat losses from the actuators, friction losses in transmission, and the interaction losses caused by the interface between the system and the environment. Four design principles that minimize these losses are discussed: employment of high torque-density motors, energy regenerative electronic system, low loss transmission, and a low leg inertia. These principles were implemented in the design of the MIT Cheetah; the major design features are large gap diameter motors, regenerative electric motor drivers, single-stage low gear transmission, dual coaxial motors with composite legs, and the differential actuated spine. The experimental results of fast trotting are presented; the 33-kg robot runs at 22 km/h (6 m/s). The total power consumption from the battery pack was 973 W and resulted in a total cost of transport of 0.5, which rivals running animals' at the same scale. 76% of the total energy consumption is attributed to heat loss from the motor, and the remaining 24% is used in mechanical work, which is dissipated as interaction loss as well as friction losses at the joint and transmission.", "title": "" } ]
[ { "docid": "dd06c1c39e9b4a1ae9ee75c3251f27dc", "text": "Magnetoencephalographic measurements (MEG) were used to examine the effect on the human auditory cortex of removing specific frequencies from the acoustic environment. Subjects listened for 3 h on three consecutive days to music \"notched\" by removal of a narrow frequency band centered on 1 kHz. Immediately after listening to the notched music, the neural representation for a 1-kHz test stimulus centered on the notch was found to be significantly diminished compared to the neural representation for a 0.5-kHz control stimulus centered one octave below the region of notching. The diminished neural representation for 1 kHz reversed to baseline between the successive listening sessions. These results suggest that rapid changes can occur in the tuning of neurons in the adult human auditory cortex following manipulation of the acoustic environment. A dynamic form of neural plasticity may underlie the phenomenon observed here.", "title": "" }, { "docid": "c4256017c214eabda8e5b47c604e0e49", "text": "In this paper, a multi-band antenna for 4G wireless systems is proposed. The proposed antenna consists of a modified planar inverted-F antenna with additional branch line for wide bandwidth and a folded monopole antenna. The antenna provides wide bandwidth for covering the hepta-band LTE/GSM/UMTS operation. The measured 6-dB return loss bandwidth was 169 MHz (793 MHz-962 MHz) at the low frequency band and 1030 MHz (1700 MHz-2730 MHz) at the high frequency band. The overall dimension of the proposed antenna is 55 mm × 110 mm × 5 mm.", "title": "" }, { "docid": "386af0520255ebd048cff30961973624", "text": "We present a linear optical receiver realized on 130 nm SiGe BiCMOS. Error-free operation assuming FEC is shown at bitrates up to 64 Gb/s (32 Gbaud) with 165mW power consumption, corresponding to 2.578 pJ/bit.", "title": "" }, { "docid": "d52bfde050e6535645c324e7006a50e7", "text": "Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed for improving the efficiency of model selection, however their adaptation to the distributed compute environment is often ad-hoc. We propose Tune, a unified framework for model selection and training that provides a narrow-waist interface between training scripts and search algorithms. We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation. We demonstrate the implementation of several state-of-the-art hyperparameter search algorithms in Tune. Tune is available at http://ray.readthedocs.io/en/latest/tune.html.", "title": "" }, { "docid": "ba87ca7a07065e25593e6ae5c173669d", "text": "The intelligence community (IC) is asked to predict outcomes that may often be inherently unpredictable-and is blamed for the inevitable forecasting failures, be they false positives or false negatives. To move beyond blame games of accountability ping-pong that incentivize bureaucratic symbolism over substantive reform, it is necessary to reach bipartisan agreements on performance indicators that are transparent enough to reassure clashing elites (to whom the IC must answer) that estimates have not been politicized. 
Establishing such transideological credibility requires (a) developing accuracy metrics for decoupling probability and value judgments; (b) using the resulting metrics as criterion variables in validity tests of the IC's selection, training, and incentive systems; and (c) institutionalizing adversarial collaborations that conduct level-playing-field tests of clashing perspectives.", "title": "" }, { "docid": "51fec678a2e901fdf109d4836ef1bf34", "text": "BACKGROUND\nFoot-and-mouth disease (FMD) is an acute, highly contagious disease that infects cloven-hoofed animals. Vaccination is an effective means of preventing and controlling FMD. Compared to conventional inactivated FMDV vaccines, the format of FMDV virus-like particles (VLPs) as a non-replicating particulate vaccine candidate is a promising alternative.\n\n\nRESULTS\nIn this study, we have developed a co-expression system in E. coli, which drove the expression of FMDV capsid proteins (VP0, VP1, and VP3) in tandem by a single plasmid. The co-expressed FMDV capsid proteins (VP0, VP1, and VP3) were produced in large scale by fermentation at 10 L scale and the chromatographic purified capsid proteins were auto-assembled as VLPs in vitro. Cattle vaccinated with a single dose of the subunit vaccine, comprising in vitro assembled FMDV VLP and adjuvant, developed FMDV-specific antibody response (ELISA antibodies and neutralizing antibodies) with the persistent period of 6 months. Moreover, cattle vaccinated with the subunit vaccine showed the high protection potency with the 50 % bovine protective dose (PD50) reaching 11.75 PD50 per dose.\n\n\nCONCLUSIONS\nOur data strongly suggest that in vitro assembled recombinant FMDV VLPs produced from E. coli could function as a potent FMDV vaccine candidate against FMDV Asia1 infection. Furthermore, the robust protein expression and purification approaches described here could lead to the development of industrial level large-scale production of E. coli-based VLPs against FMDV infections with different serotypes.", "title": "" }, { "docid": "a774567d957ed0ea209b470b8eced563", "text": "The vulnerability of the nervous system to advancing age is all too often manifest in neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. In this review article we describe evidence suggesting that two dietary interventions, caloric restriction (CR) and intermittent fasting (IF), can prolong the health-span of the nervous system by impinging upon fundamental metabolic and cellular signaling pathways that regulate life-span. CR and IF affect energy and oxygen radical metabolism, and cellular stress response systems, in ways that protect neurons against genetic and environmental factors to which they would otherwise succumb during aging. There are multiple interactive pathways and molecular mechanisms by which CR and IF benefit neurons including those involving insulin-like signaling, FoxO transcription factors, sirtuins and peroxisome proliferator-activated receptors. These pathways stimulate the production of protein chaperones, neurotrophic factors and antioxidant enzymes, all of which help cells cope with stress and resist disease. A better understanding of the impact of CR and IF on the aging nervous system will likely lead to novel approaches for preventing and treating neurodegenerative disorders.", "title": "" }, { "docid": "b5dc56272d4dea04b756a8614d6762c9", "text": "Platforms have been considered as a paradigm for managing new product development and innovation. 
Since their introduction, studies on platforms have introduced multiple conceptualizations, leading to a fragmentation of research and different perspectives. By systematically reviewing the platform literature and combining bibliometric and content analyses, this paper examines the platform concept and its evolution, proposes a thematic classification, and highlights emerging trends in the literature. Based on this hybrid methodological approach (bibliometric and content analyses), the results show that platform research has primarily focused on issues that are mainly related to firms' internal aspects, such as innovation, modularity, commonality, and mass customization. Moreover, scholars have recently started to focus on new research themes, including managerial questions related to capability building, strategy, and ecosystem building based on platforms. As its main contributions, this paper improves the understanding of and clarifies the evolutionary trajectory of the platform concept, and identifies trends and emerging themes to be addressed in future studies.", "title": "" }, { "docid": "9500dfc92149c5a808cec89b140fc0c3", "text": "We present a new approach to the geometric alignment of a point cloud to a surface and to related registration problems. The standard algorithm is the familiar ICP algorithm. Here we provide an alternative concept which relies on instantaneous kinematics and on the geometry of the squared distance function of a surface. The proposed algorithm exhibits faster convergence than ICP; this is supported both by results of a local convergence analysis and by experiments.", "title": "" }, { "docid": "a1bf728c54cec3f621a54ed23a623300", "text": "Machine learning algorithms are now common in the state-ofthe-art spoken language understanding models. But to reach good performance they must be trained on a potentially large amount of data which are not available for a variety of tasks and languages of interest. In this work, we present a novel zero-shot learning method, based on word embeddings, allowing to derive a full semantic parser for spoken language understanding. No annotated in-context data are needed, the ontological description of the target domain and generic word embedding features (learned from freely available general domain data) suffice to derive the model. Two versions are studied with respect to how the model parameters and decoding step are handled, including an extension of the proposed approach in the context of conditional random fields. We show that this model, with very little supervision, can reach instantly performance comparable to those obtained by either state-of-the-art carefully handcrafted rule-based or trained statistical models for extraction of dialog acts on the Dialog State Tracking test datasets (DSTC2 and 3).", "title": "" }, { "docid": "9941cd183e2c7b79d685e0e9cef3c43e", "text": "We present a novel recursive Bayesian method in the DFT-domain to address the multichannel acoustic echo cancellation problem. We model the echo paths between the loudspeakers and the near-end microphone as a multichannel random variable with a first-order Markov property. The incorporation of the near-end observation noise, in conjunction with the multichannel Markov model, leads to a multichannel state-space model. We derive a recursive Bayesian solution to the multichannel state-space model, which turns out to be well suited for input signals that are not only auto-correlated but also cross-correlated. 
We show that the resulting multichannel state-space frequency-domain adaptive filter (MCSSFDAF) can be efficiently implemented due to the submatrix-diagonality of the state-error covariance. The filter offers optimal tracking and robust adaptation in the presence of near-end noise and echo path variability.", "title": "" }, { "docid": "433e7a8c4d4a16f562f9ae112102526e", "text": "Although both extrinsic and intrinsic factors have been identified that orchestrate the differentiation and maturation of oligodendrocytes, less is known about the intracellular signaling pathways that control the overall commitment to differentiate. Here, we provide evidence that activation of the mammalian target of rapamycin (mTOR) is essential for oligodendrocyte differentiation. Specifically, mTOR regulates oligodendrocyte differentiation at the late progenitor to immature oligodendrocyte transition as assessed by the expression of stage specific antigens and myelin proteins including MBP and PLP. Furthermore, phosphorylation of mTOR on Ser 2448 correlates with myelination in the subcortical white matter of the developing brain. We demonstrate that mTOR exerts its effects on oligodendrocyte differentiation through two distinct signaling complexes, mTORC1 and mTORC2, defined by the presence of the adaptor proteins raptor and rictor, respectively. Disrupting mTOR complex formation via siRNA mediated knockdown of raptor or rictor significantly reduced myelin protein expression in vitro. However, mTORC2 alone controlled myelin gene expression at the mRNA level, whereas mTORC1 influenced MBP expression via an alternative mechanism. In addition, investigation of mTORC1 and mTORC2 targets revealed differential phosphorylation during oligodendrocyte differentiation. In OPC-DRG cocultures, inhibiting mTOR potently abrogated oligodendrocyte differentiation and reduced numbers of myelin segments. These data support the hypothesis that mTOR regulates commitment to oligodendrocyte differentiation before myelination.", "title": "" }, { "docid": "7c13132ef5b2d67c4a7e3039db252302", "text": "Accurate estimation of the click-through rate (CTR) in sponsored ads significantly impacts the user search experience and businesses’ revenue, even 0.1% of accuracy improvement would yield greater earnings in the hundreds of millions of dollars. CTR prediction is generally formulated as a supervised classification problem. In this paper, we share our experience and learning on model ensemble design and our innovation. Specifically, we present 8 ensemble methods and evaluate them on our production data. Boosting neural networks with gradient boosting decision trees turns out to be the best. With larger training data, there is a nearly 0.9% AUC improvement in offline testing and significant click yield gains in online traffic. In addition, we share our experience and learning on improving the quality of training.", "title": "" }, { "docid": "1d3007738c259cdf08f515849c7939b8", "text": "Background: With an increase in the number of disciplines contributing to health literacy scholarship, we sought to explore the nature of interdisciplinary research in the field. Objective: This study sought to describe disciplines that contribute to health literacy research and to quantify how disciplines draw from and contribute to an interdisciplinary evidence base, as measured by citation networks. 
Methods: We conducted a literature search for health literacy articles published between 1991 and 2015 in four bibliographic databases, producing 6,229 unique bibliographic records. We employed a scientometric tool (CiteSpace [Version 4.4.R1]) to quantify patterns in published health literacy research, including a visual path from cited discipline domains to citing discipline domains. Key Results: The number of health literacy publications increased each year between 1991 and 2015. Two spikes, in 2008 and 2013, correspond to the introduction of additional subject categories, including information science and communication. Two journals have been cited more than 2,000 times—the Journal of General Internal Medicine (n = 2,432) and Patient Education and Counseling (n = 2,252). The most recently cited journal added to the top 10 list of cited journals is the Journal of Health Communication (n = 989). Three main citation paths exist in the health literacy data set. Articles from the domain “medicine, medical, clinical” heavily cite from one domain (health, nursing, medicine), whereas articles from the domain “psychology, education, health” cite from two separate domains (health, nursing, medicine and psychology, education, social). Conclusions: Recent spikes in the number of published health literacy articles have been spurred by a greater diversity of disciplines contributing to the evidence base. However, despite the diversity of disciplines, citation paths indicate the presence of a few, self-contained disciplines contributing to most of the literature, suggesting a lack of interdisciplinary research. To address complex and evolving challenges in the health literacy field, interdisciplinary team science, that is, integrating science from across multiple disciplines, should continue to grow. [Health Literacy Research and Practice. 2017;1(4):e182-e191.] Plain Language Summary: The addition of diverse disciplines conducting health literacy scholarship has spurred recent spikes in the number of publications. However, citation paths suggest that interdisciplinary research can be strengthened. Findings directly align with the increasing emphasis on team science, and support opportunities and resources that incentivize interdisciplinary health literacy research. The study of health literacy has significantly expanded over the past decade. It represents a dynamic area of inquiry that extends to multiple disciplines. Health literacy emerged as a derivative of literacy and early definitions focused on the ability to read and understand medical instructions and health care information (Parker, Baker, Williams, & Nurss, 1995; Williams et al., 1995). This early work led to a body of research demonstrating that people with low health literacy generally had poorer health outcomes, including lower levels of screening and medication adherence rates (Baker,", "title": "" }, { "docid": "cdc276a3c4305d6c7ba763332ae933cc", "text": "Synthetic aperture radar (SAR) image classification is a fundamental process for SAR image understanding and interpretation. With the advancement of imaging techniques, it permits to produce higher resolution SAR data and extend data amount. Therefore, intelligent algorithms for high-resolution SAR image classification are demanded. 
Inspired by deep learning technology, an end-to-end classification model from the original SAR image to final classification map is developed to automatically extract features and conduct classification, which is named deep recurrent encoding neural networks (DRENNs). In our proposed framework, a spatial feature learning network based on long–short-term memory (LSTM) is developed to extract contextual dependencies of SAR images, where 2-D image patches are transformed into 1-D sequences and imported into LSTM to learn the latent spatial correlations. After LSTM, nonnegative and Fisher constrained autoencoders (NFCAEs) are proposed to improve the discrimination of features and conduct final classification, where nonnegative constraint and Fisher constraint are developed in each autoencoder to restrict the training of the network. The whole DRENN not only combines the spatial feature learning power of LSTM but also utilizes the discriminative representation ability of our NFCAE to improve the classification performance. The experimental results tested on three SAR images demonstrate that the proposed DRENN is able to learn effective feature representations from SAR images and produce competitive classification accuracies to other related approaches.", "title": "" }, { "docid": "b52cadf9e20eebfd388c09c51cff2d74", "text": "Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans. We show that even the widely recognized and by far most successful defense by Madry et al. (1) overfits on the L∞ metric (it’s highly susceptible to L2 and L0 perturbations), (2) classifies unrecognizable images with high certainty, (3) performs not much better than simple input binarization and (4) features adversarial perturbations that make little sense to humans. These results suggest that MNIST is far from being solved in terms of adversarial robustness. We present a novel robust classification model that performs analysis by synthesis using learned class-conditional data distributions. We derive bounds on the robustness and go to great length to empirically evaluate our model using maximally effective adversarial attacks by (a) applying decisionbased, score-based, gradient-based and transfer-based attacks for several different Lp norms, (b) by designing a new attack that exploits the structure of our defended model and (c) by devising a novel decision-based attack that seeks to minimize the number of perturbed pixels (L0). The results suggest that our approach yields state-of-the-art robustness on MNIST against L0, L2 and L∞ perturbations and we demonstrate that most adversarial examples are strongly perturbed towards the perceptual boundary between the original and the adversarial class.", "title": "" }, { "docid": "6e0877f16e624bef547f76b80278f760", "text": "The importance of storytelling as the foundation of human experiences cannot be overestimated. The oral traditions focus upon educating and transmitting knowledge and skills and also evolved into one of the earliest methods of communicating scientific discoveries and developments. A wide ranging search of the storytelling, education and health-related literature encompassing the years 1975-2007 was performed. Evidence from disparate elements of education and healthcare were used to inform an exploration of storytelling. 
This conceptual paper explores the principles of storytelling, evaluates the use of storytelling techniques in education in general, acknowledges the role of storytelling in healthcare delivery, identifies some of the skills learned and benefits derived from storytelling, and speculates upon the use of storytelling strategies in nurse education. Such stories have, until recently been harvested from the experiences of students and of educators, however, there is a growing realization that patients and service users are a rich source of healthcare-related stories that can affect, change and benefit clinical practice. The use of technology such as the Internet discussion boards or digitally-facilitated storytelling has an evolving role in ensuring that patient-generated and experiential stories have a future within nurse education.", "title": "" }, { "docid": "64770c350dc1d260e24a43760d4e641b", "text": "A first step in the task of automatically generating questions for testing reading comprehension is to identify questionworthy sentences, i.e. sentences in a text passage that humans find it worthwhile to ask questions about. We propose a hierarchical neural sentence-level sequence tagging model for this task, which existing approaches to question generation have ignored. The approach is fully data-driven — with no sophisticated NLP pipelines or any hand-crafted rules/features — and compares favorably to a number of baselines when evaluated on the SQuAD data set. When incorporated into an existing neural question generation system, the resulting end-to-end system achieves stateof-the-art performance for paragraph-level question generation for reading comprehension.", "title": "" }, { "docid": "76eef8117ac0bc5dbb0529477d10108d", "text": "Most existing switched-capacitor (SC) DC-DC converters only offer a few voltage conversion ratios (VCRs), leading to significant efficiency fluctuations under wide input/output dynamics (e.g. up to 30% in [1]). Consequently, systematic SC DC-DC converters with fine-grained VCRs (FVCRs) become attractive to achieve high efficiency over a wide operating range. Both the Recursive SC (RSC) [2,3] and Negator-based SC (NSC) [4] topologies offer systematic FVCR generations with high conductance, but their binary-switching nature fundamentally results in considerable parasitic loss. In bulk CMOS, the restriction of using low-parasitic MIM capacitors for high efficiency ultimately limits their achievable power density to <1mW/mm2. This work reports a fully integrated fine-grained buck-boost SC DC-DC converter with 24 VCRs. It features an algorithmic voltage-feed-in (AVFI) topology to systematically generate any arbitrary buck-boost rational ratio with optimal conduction loss while achieving the lowest parasitic loss compared with [2,4]. With 10 main SC cells (MCs) and 10 auxiliary SC cells (ACs) controlled by the proposed reference-selective bootstrapping driver (RSBD) for wide-range efficient buck-boost operations, the AVFI converter in 65nm bulk CMOS achieves a peak efficiency of 84.1% at a power density of 13.2mW/mm2 over a wide range of input (0.22 to 2.4V) and output (0.85 to 1.2V).", "title": "" }, { "docid": "32b96d4d23a03b1828f71496e017193e", "text": "Camera-based lane detection algorithms are one of the key enablers for many semi-autonomous and fullyautonomous systems, ranging from lane keep assist to level-5 automated vehicles. Positioning a vehicle between lane boundaries is the core navigational aspect of a self-driving car. 
Even though this should be trivial, given the clarity of lane markings on most standard roadway systems, the process is typically mired with tedious pre-processing and computational effort. We present an approach to estimate lane positions directly using a deep neural network that operates on images from laterally-mounted down-facing cameras. To create a diverse training set, we present a method to generate semi-artificial images. Besides the ability to distinguish whether there is a lane-marker present or not, the network is able to estimate the position of a lane marker with sub-centimeter accuracy at an average of 100 frames/s on an embedded automotive platform, requiring no pre-or post-processing. This system can be used not only to estimate lane position for navigation, but also provide an efficient way to validate the robustness of driver-assist features which depend on lane information.", "title": "" } ]
scidocsrr
c4eedc71b62029bcf2f2c6bd4bfdd969
The evolutionary psychology of facial beauty.
[ { "docid": "0e74994211d0e3c1e85ba0c85aba3df5", "text": "Images of faces manipulated to make their shapes closer to the average are perceived as more attractive. The influences of symmetry and averageness are often confounded in studies based on full-face views of faces. Two experiments are reported that compared the effect of manipulating the averageness of female faces in profile and full-face views. Use of a profile view allows a face to be \"morphed\" toward an average shape without creating an image that becomes more symmetrical. Faces morphed toward the average were perceived as more attractive in both views, but the effect was significantly stronger for full-face views. Both full-face and profile views morphed away from the average shape were perceived as less attractive. It is concluded that the effect of averageness is independent of any effect of symmetry on the perceived attractiveness of female faces.", "title": "" }, { "docid": "1fc10d626c7a06112a613f223391de26", "text": "The question of what makes a face attractive, and whether our preferences come from culture or biology, has fascinated scholars for centuries. Variation in the ideals of beauty across societies and historical periods suggests that standards of beauty are set by cultural convention. Recent evidence challenges this view, however, with infants as young as 2 months of age preferring to look at faces that adults find attractive (Langlois et al., 1987), and people from different cultures showing considerable agreement about which faces are attractive (Cun-for a review). These findings raise the possibility that some standards of beauty may be set by nature rather than culture. Consistent with this view, specific preferences have been identified that appear to be part of our biological rather than Such a preference would be adaptive if stabilizing selection operates on facial traits (Symons, 1979), or if averageness is associated with resistance to pathogens , as some have suggested Evolutionary biologists have proposed that a preference for symmetry would also be adaptive because symmetry is a signal of health and genetic quality Only high-quality individuals can maintain symmetric development in the face of environmental and genetic stresses. Symmetric bodies are certainly attractive to humans and many other animals but what about symmetric faces? Biologists suggest that facial symmetry should be attractive because it may signal mate quality High levels of facial asymmetry in individuals with chro-mosomal abnormalities (e.g., Down's syndrome and Tri-somy 14; for a review, see Thornhill & Møller, 1997) are consistent with this view, as is recent evidence that facial symmetry levels correlate with emotional and psychological health (Shackelford & Larsen, 1997). In this paper, we investigate whether people can detect subtle differences in facial symmetry and whether these differences are associated with differences in perceived attractiveness. Recently, Kowner (1996) has reported that faces with normal levels of asymmetry are more attractive than perfectly symmetric versions of the same faces. 3 Similar results have been reported by Langlois et al. and an anonymous reviewer for helpful comments on an earlier version of the manuscript. We also thank Graham Byatt for assistance with stimulus construction, Linda Jeffery for assistance with the figures, and Alison Clark and Catherine Hickford for assistance with data collection and statistical analysis in Experiment 1A. 
Evolutionary, as well as cultural, pressures may contribute to our perceptions of facial attractiveness. Biologists predict that facial symmetry should be attractive, because it may signal …", "title": "" }, { "docid": "6b6943e2b263fa0d4de934e563a6cc39", "text": "Average faces are attractive, but what is average depends on experience. We examined the effect of brief exposure to consistent facial distortions on what looks normal (average) and what looks attractive. Adaptation to a consistent distortion shifted what looked most normal, and what looked most attractive, toward that distortion. These normality and attractiveness aftereffects occurred when the adapting and test faces differed in orientation by 90 degrees (+45 degrees vs. -45 degrees ), suggesting adaptation of high-level neurons whose coding is not strictly retino- topic. Our results suggest that perceptual adaptation can rapidly recalibrate people's preferences to fit the faces they see. The results also suggest that average faces are attractive because of their central location in a distribution of faces (i.e., prototypicality), rather than because of any intrinsic appeal of particular physical characteristics. Recalibration of preferences may have important consequences, given the powerful effects of perceived attractiveness on person perception, mate choice, social interactions, and social outcomes for individuals.", "title": "" } ]
[ { "docid": "3b988fe1c91096f67461dc9fc7bb6fae", "text": "The paper analyzes the test setup required by the International Electrotechnical Commission (IEC) 61000-4-4 to evaluate the immunity of electronic equipment to electrical fast transients (EFTs), and proposes an electrical model of the capacitive coupling clamp, which is employed to add disturbances to nominal signals. The study points out limits on accuracy of this model, and shows how it can be fruitfully employed to predict the interference waveform affecting nominal system signals through computer simulations.", "title": "" }, { "docid": "85eb1b34bf15c6b5dcd8778146bfcfca", "text": "A novel face recognition algorithm is presented in this paper. Histogram of Oriented Gradient features are extracted both for the test image and also for the training images and given to the Support Vector Machine classifier. The detailed steps of HOG feature extraction and the classification using SVM is presented. The algorithm is compared with the Eigen feature based face recognition algorithm. The proposed algorithm and PCA are verified using 8 different datasets. Results show that in all the face datasets the proposed algorithm shows higher face recognition rate when compared with the traditional Eigen feature based face recognition algorithm. There is an improvement of 8.75% face recognition rate when compared with PCA based face recognition algorithm. The experiment is conducted on ORL database with 2 face images for testing and 8 face images for training for each person. Three performance curves namely CMC, EPC and ROC are considered. The curves show that the proposed algorithm outperforms when compared with PCA algorithm. IndexTerms: Facial features, Histogram of Oriented Gradients, Support Vector Machine, Principle Component Analysis.", "title": "" }, { "docid": "ebedc7f86c7a424091777f360f979122", "text": "Synaptic plasticity is thought to be the principal neuronal mechanism underlying learning. Models of plastic networks typically combine point neurons with spike-timing-dependent plasticity (STDP) as the learning rule. However, a point neuron does not capture the local non-linear processing of synaptic inputs allowed for by dendrites. Furthermore, experimental evidence suggests that STDP is not the only learning rule available to neurons. By implementing biophysically realistic neuron models, we study how dendrites enable multiple synaptic plasticity mechanisms to coexist in a single cell. In these models, we compare the conditions for STDP and for synaptic strengthening by local dendritic spikes. We also explore how the connectivity between two cells is affected by these plasticity rules and by different synaptic distributions. Finally, we show that how memory retention during associative learning can be prolonged in networks of neurons by including dendrites. Synaptic plasticity is the neuronal mechanism underlying learning. Here the authors construct biophysical models of pyramidal neurons that reproduce observed plasticity gradients along the dendrite and show that dendritic spike dependent LTP which is predominant in distal sections can prolong memory retention.", "title": "" }, { "docid": "1df39d26ed1d156c1c093d7ffd1bb5bf", "text": "Contemporary advances in addiction neuroscience have paralleled increasing interest in the ancient mental training practice of mindfulness meditation as a potential therapy for addiction. 
In the past decade, mindfulness-based interventions (MBIs) have been studied as a treatment for an array addictive behaviors, including drinking, smoking, opioid misuse, and use of illicit substances like cocaine and heroin. This article reviews current research evaluating MBIs as a treatment for addiction, with a focus on findings pertaining to clinical outcomes and biobehavioral mechanisms. Studies indicate that MBIs reduce substance misuse and craving by modulating cognitive, affective, and psychophysiological processes integral to self-regulation and reward processing. This integrative review provides the basis for manifold recommendations regarding the next wave of research needed to firmly establish the efficacy of MBIs and elucidate the mechanistic pathways by which these therapies ameliorate addiction. Issues pertaining to MBI treatment optimization and sequencing, dissemination and implementation, dose-response relationships, and research rigor and reproducibility are discussed.", "title": "" }, { "docid": "fb809c5e2a15a49a449a818a1b0d59a5", "text": "Neural responses are modulated by brain state, which varies with arousal, attention, and behavior. In mice, running and whisking desynchronize the cortex and enhance sensory responses, but the quiescent periods between bouts of exploratory behaviors have not been well studied. We found that these periods of \"quiet wakefulness\" were characterized by state fluctuations on a timescale of 1-2 s. Small fluctuations in pupil diameter tracked these state transitions in multiple cortical areas. During dilation, the intracellular membrane potential was desynchronized, sensory responses were enhanced, and population activity was less correlated. In contrast, constriction was characterized by increased low-frequency oscillations and higher ensemble correlations. Specific subtypes of cortical interneurons were differentially activated during dilation and constriction, consistent with their participation in the observed state changes. Pupillometry has been used to index attention and mental effort in humans, but the intracellular dynamics and differences in population activity underlying this phenomenon were previously unknown.", "title": "" }, { "docid": "39c597ee9c9d9392e803aedeeeb28de9", "text": "BACKGROUND\nApalutamide, a competitive inhibitor of the androgen receptor, is under development for the treatment of prostate cancer. We evaluated the efficacy of apalutamide in men with nonmetastatic castration-resistant prostate cancer who were at high risk for the development of metastasis.\n\n\nMETHODS\nWe conducted a double-blind, placebo-controlled, phase 3 trial involving men with nonmetastatic castration-resistant prostate cancer and a prostate-specific antigen doubling time of 10 months or less. Patients were randomly assigned, in a 2:1 ratio, to receive apalutamide (240 mg per day) or placebo. All the patients continued to receive androgen-deprivation therapy. The primary end point was metastasis-free survival, which was defined as the time from randomization to the first detection of distant metastasis on imaging or death.\n\n\nRESULTS\nA total of 1207 men underwent randomization (806 to the apalutamide group and 401 to the placebo group). In the planned primary analysis, which was performed after 378 events had occurred, median metastasis-free survival was 40.5 months in the apalutamide group as compared with 16.2 months in the placebo group (hazard ratio for metastasis or death, 0.28; 95% confidence interval [CI], 0.23 to 0.35; P<0.001). 
Time to symptomatic progression was significantly longer with apalutamide than with placebo (hazard ratio, 0.45; 95% CI, 0.32 to 0.63; P<0.001). The rate of adverse events leading to discontinuation of the trial regimen was 10.6% in the apalutamide group and 7.0% in the placebo group. The following adverse events occurred at a higher rate with apalutamide than with placebo: rash (23.8% vs. 5.5%), hypothyroidism (8.1% vs. 2.0%), and fracture (11.7% vs. 6.5%).\n\n\nCONCLUSIONS\nAmong men with nonmetastatic castration-resistant prostate cancer, metastasis-free survival and time to symptomatic progression were significantly longer with apalutamide than with placebo. (Funded by Janssen Research and Development; SPARTAN ClinicalTrials.gov number, NCT01946204 .).", "title": "" }, { "docid": "68612f23057840e01bec9673c5d31865", "text": "The current status of studies of online shopping attitudes and behavior is investigated through an analysis of 35 empirical articles found in nine primary Information Systems (IS) journals and three major IS conference proceedings. A taxonomy is developed based on our analysis. A conceptual model of online shopping is presented and discussed in light of existing empirical studies. Areas for further research are discussed.", "title": "" }, { "docid": "dd66e07814419e3c2515d882d662df93", "text": "Excess body weight (adiposity) and physical inactivity are increasingly being recognized as major nutritional risk factors for cancer, and especially for many of those cancer types that have increased incidence rates in affluent, industrialized parts of the world. In this review, an overview is presented of some key biological mechanisms that may provide important metabolic links between nutrition, physical activity and cancer, including insulin resistance and reduced glucose tolerance, increased activation of the growth hormone/IGF-I axis, alterations in sex-steroid synthesis and/or bioavailability, and low-grade chronic inflammation through the effects of adipokines and cytokines.", "title": "" }, { "docid": "46c8336f395d04d49369d406f41b0602", "text": "Several RGB-D datasets have been publicized over the past few years for facilitating research in computer vision and robotics. However, the lack of comprehensive and fine-grained annotation in these RGB-D datasets has posed challenges to their widespread usage. In this paper, we introduce SceneNN, an RGB-D scene dataset consisting of 100 scenes. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotation. We further enriched the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses. We used the dataset as a benchmark to evaluate the state-of-the-art methods on relevant research problems such as intrinsic decomposition and shape completion. Our dataset and annotation tools are available at http://www.scenenn.net.", "title": "" }, { "docid": "3d56d2c4b3b326bc676536d35b4bd77f", "text": "In this work an experimental study about the capability of the LBP, HOG descriptors and color for clothing attribute classification is presented. Two different variants of the LBP descriptor are considered, the original LBP and the uniform LBP. Two classifiers, Linear SVM and Random Forest, have been included in the comparison because they have been frequently used in clothing attributes classification. The experiments are carried out with a public available dataset, the clothing attribute dataset, that has 26 attributes in total. 
The obtained accuracies are over 75% in most cases, reaching 80% for the necktie or sleeve length attributes.", "title": "" }, { "docid": "f7a1624a4827e95b961eb164022aa2a2", "text": "Mitotic chromosome condensation, sister chromatid cohesion, and higher order folding of interphase chromatin are mediated by condensin and cohesin, eukaryotic members of the SMC (structural maintenance of chromosomes)-kleisin protein family. Other members facilitate chromosome segregation in bacteria [1]. A hallmark of these complexes is the binding of the two ends of a kleisin subunit to the apices of V-shaped Smc dimers, creating a tripartite ring capable of entrapping DNA (Figure 1A). In addition to creating rings, kleisins recruit regulatory subunits. One family of regulators, namely Kite dimers (Kleisin interacting winged-helix tandem elements), interact with Smc-kleisin rings from bacteria, archaea and the eukaryotic Smc5-6 complex, but not with either condensin or cohesin [2]. These instead possess proteins containing HEAT (Huntingtin/EF3/PP2A/Tor1) repeat domains whose origin and distribution have not yet been characterized. Using a combination of profile Hidden Markov Model (HMM)-based homology searches, network analysis and structural alignments, we identify a common origin for these regulators, for which we propose the name Hawks, i.e. HEAT proteins associated with kleisins.", "title": "" }, { "docid": "3f88c453eab8b2fbfffbf98fee34d086", "text": "Face recognition become one of the most important and fastest growing area during the last several years and become the most successful application of image analysis and broadly used in security system. It has been a challenging, interesting, and fast growing area in real time applications. The propose method is tested using a benchmark ORL database that contains 400 images of 40 persons. Pre-Processing technique are applied on the ORL database to increase the recognition rate. The best recognition rate is 97.5% when tested using 9 training images and 1 testing image. Increasing image database brightness is efficient and will increase the recognition rate. Resizing images using 0.3 scale is also efficient and will increase the recognition rate. PCA is used for feature extraction and dimension reduction. Euclidean distance is used for matching process.", "title": "" }, { "docid": "785a6d08ef585302d692864d09b026fe", "text": "Linear Discriminant Analysis (LDA) is a well-known method for dimensionality reduction and classification. LDA in the binaryclass case has been shown to be equivalent to linear regression with the class label as the output. This implies that LDA for binary-class classifications can be formulated as a least squares problem. Previous studies have shown certain relationship between multivariate linear regression and LDA for the multi-class case. Many of these studies show that multivariate linear regression with a specific class indicator matrix as the output can be applied as a preprocessing step for LDA. However, directly casting LDA as a least squares problem is challenging for the multi-class case. In this paper, a novel formulation for multivariate linear regression is proposed. The equivalence relationship between the proposed least squares formulation and LDA for multi-class classifications is rigorously established under a mild condition, which is shown empirically to hold in many applications involving high-dimensional data. 
Several LDA extensions based on the equivalence relationship are discussed.", "title": "" }, { "docid": "b3b050c35a1517dc52351cd917d0665a", "text": "The amount of information shared via social media is rapidly increasing amid growing concerns over online privacy. This study investigates the effect of controversiality and social endorsement of media content on sharing behavior when choosing between sharing publicly or anonymously. Anonymous sharing is found to be a popular choice (59% of shares), especially for controversial content which is 3.2x more likely to be shard anonymously. Social endorsement was not found to affect sharing behavior, except for sports-related content. Implications for social media interface design are dis-", "title": "" }, { "docid": "5724b84f9c00c503066bd6a178664c3c", "text": "A simple quantitative model is presented that is consistent with the available evidence about the British economy during the early phase of the Industrial Revolution. The basic model is a variant of a standard growth model, calibrated to data from Great Britain for the period 1780-1850. The model is used to study the importance of foreign trade and the role of the declining cost of power during this period. The British Industrial Revolution was an amazing episode, with economic consequences that changed the world. But our understanding of the economic events of this ¤Research Department, Federal Reserve Bank of Minneapolis, and Department of Economics, University of Chicago. I am grateful to Matthias Doepke for many stimulating conversations, as well as several useful leads on data sources. I also owe more than the ususal thanks to Joel Mokyr for many helpful comments, including several that changed the direction of the paper in a fundamental way. Finally, I am grateful to the Research Division of Federal Reserve Bank of Minneapolis for support while much of this work was done. This paper is being prepared for the Carnegie-Rochester conference in November, 2000.", "title": "" }, { "docid": "567d165eb9ad5f9860f3e0602cbe3e03", "text": "This paper presents new image sensors with multi- bucket pixels that enable time-multiplexed exposure, an alter- native imaging approach. This approach deals nicely with scene motion, and greatly improves high dynamic range imaging, structured light illumination, motion corrected photography, etc. To implement an in-pixel memory or a bucket, the new image sensors incorporate the virtual phase CCD concept into a standard 4-transistor CMOS imager pixel. This design allows us to create a multi-bucket pixel which is compact, scalable, and supports true correlated double sampling to cancel kTC noise. Two image sensors with dual and quad-bucket pixels have been designed and fabricated. The dual-bucket sensor consists of a 640H × 576V array of 5.0 μm pixel in 0.11 μm CMOS technology while the quad-bucket sensor comprises 640H × 512V array of 5.6 μm pixel in 0.13 μm CMOS technology. Some computational photography applications were implemented using the two sensors to demonstrate their values in eliminating artifacts that currently plague computational photography.", "title": "" }, { "docid": "4706f9e8d9892543aaeb441c45816b24", "text": "The mood of a text and the intention of the writer can be reflected in the typeface. However, in designing a typeface, it is difficult to keep the style of various characters consistent, especially for languages with lots of morphological variations such as Chinese. 
In this paper, we propose a Typeface Completion Network (TCN) which takes one character as an input, and automatically completes the entire set of characters in the same style as the input characters. Unlike existing models proposed for image-to-image translation, TCN embeds a character image into two separate vectors representing typeface and content. Combined with a reconstruction loss from the latent space, and with other various losses, TCN overcomes the inherent difficulty in designing a typeface. Also, compared to previous image-to-image translation models, TCN generates high quality character images of the same typeface with a much smaller number of model parameters. We validate our proposed model on the Chinese and English character datasets, which is paired data, and the CelebA dataset, which is unpaired data. In these datasets, TCN outperforms recently proposed state-of-the-art models for image-to-image translation. The source code of our model is available at https://github.com/yongqyu/TCN.", "title": "" }, { "docid": "b49e61ecb2afbaa8c3b469238181ec26", "text": "Stylistic variations of language, such as formality, carry speakers’ intention beyond literal meaning and should be conveyed adequately in translation. We propose to use lexical formality models to control the formality level of machine translation output. We demonstrate the effectiveness of our approach in empirical evaluations, as measured by automatic metrics and human assessments.", "title": "" }, { "docid": "ef49eeb766313743edb77f8505e491a0", "text": "In 1998, a clinical classification of pulmonary hypertension (PH) was established, categorizing PH into groups which share similar pathological and hemodynamic characteristics and therapeutic approaches. During the 5th World Symposium held in Nice, France, in 2013, the consensus was reached to maintain the general scheme of previous clinical classifications. However, modifications and updates especially for Group 1 patients (pulmonary arterial hypertension [PAH]) were proposed. The main change was to withdraw persistent pulmonary hypertension of the newborn (PPHN) from Group 1 because this entity carries more differences than similarities with other PAH subgroups. In the current classification, PPHN is now designated number 1. Pulmonary hypertension associated with chronic hemolytic anemia has been moved from Group 1 PAH to Group 5, unclear/multifactorial mechanism. In addition, it was decided to add specific items related to pediatric pulmonary hypertension in order to create a comprehensive, common classification for both adults and children. Therefore, congenital or acquired left-heart inflow/outflow obstructive lesions and congenital cardiomyopathies have been added to Group 2, and segmental pulmonary hypertension has been added to Group 5. Last, there were no changes for Groups 2, 3, and 4.", "title": "" }, { "docid": "36fa816c5e738ea6171851fb3200f68d", "text": "Vehicle speed prediction provides important information for many intelligent vehicular and transportation applications. Accurate on-road vehicle speed prediction is challenging, because an individual vehicle speed is affected by many factors, e.g., the traffic condition, vehicle type, and driver’s behavior, in either deterministic or stochastic way. This paper proposes a novel data-driven vehicle speed prediction method in the context of vehicular networks, in which the real-time traffic information is accessible and utilized for vehicle speed prediction. 
It first predicts the average traffic speeds of road segments by using neural network models based on historical traffic data. Hidden Markov models (HMMs) are then utilized to present the statistical relationship between individual vehicle speeds and the traffic speed. Prediction for individual vehicle speeds is realized by applying the forward–backward algorithm on HMMs. To evaluate the prediction performance, simulations are set up in the SUMO microscopic traffic simulator with the application of a real Luxembourg motorway network and traffic count data. The vehicle speed prediction result shows that our proposed method outperforms other ones in terms of prediction accuracy.", "title": "" } ]
scidocsrr
a254189588a62d5bcead728bfa07c8bc
How the relationship between the crisis life cycle and mass media content can better inform crisis communication.
[ { "docid": "aaebd4defcc22d6b1e8e617ab7f3ec70", "text": "In the American political process, news discourse concerning public policy issues is carefully constructed. This occurs in part because both politicians and interest groups take an increasingly proactive approach to amplify their views of what an issue is about. However, news media also play an active role in framing public policy issues. Thus, in this article, news discourse is conceived as a sociocognitive process involving all three players: sources, journalists, and audience members operating in the universe of shared culture and on the basis of socially defined roles. Framing analysis is presented as a constructivist approach to examine news discourse with the primary focus on conceptualizing news texts into empirically operationalizable dimensions—syntactical, script, thematic, and rhetorical structures—so that evidence of the news media's framing of issues in news texts may be gathered. This is considered an initial step toward analyzing the news discourse process as a whole. Finally, an extended empirical example is provided to illustrate the applications of this conceptual framework of news texts.", "title": "" } ]
[ { "docid": "ff1f503123ce012b478a3772fa9568b5", "text": "Cementoblastoma is a rare odontogenic tumor that has distinct clinical and radiographical features normally suggesting the correct diagnosis. The clinicians and oral pathologists must have in mind several possible differential diagnoses that can lead to a misdiagnosed lesion, especially when unusual clinical features are present. A 21-year-old male presented with dull pain in lower jaw on right side. The clinical inspection of the region was non-contributory to the diagnosis but the lesion could be appreciated on palpation. A swelling was felt in the alveolar region of mandibular premolar-molar on right side. Radiographic examination was suggestive of benign cementoblastoma and the tumor was removed surgically along with tooth. The diagnosis was confirmed by histopathologic study. Although this neoplasm is rare, the dental practitioner should be aware of the clinical, radiographical and histopathological features that will lead to its early diagnosis and treatment.", "title": "" }, { "docid": "d4e22e73965bcd9fdb1628711d6beb44", "text": "This project is designed to measure heart beat (pulse count), by using embedded technology. In this project simultaneously it can measure and monitor the patient’s condition. This project describes the design of a simple, low-cost controller based wireless patient monitoring system. Heart rate of the patient is measured from the thumb finger using IRD (Infra Red Device sensor).Pulse counting sensor is arranged to check whether the heart rate is normal or not. So that a SMS is sent to the mobile number using GSM module interfaced to the controller in case of abnormal condition. A buzzer alert is also given. The heart rate can be measured by monitoring one's pulse using specialized medical devices such as an electrocardiograph (ECG), portable device e.g. The patient heart beat monitoring systems is one of the major wrist strap watch, or any other commercial heart rate monitors which normally consisting of a chest strap with electrodes. Despite of its accuracy, somehow it is costly, involve many clinical settings and patient must be attended by medical experts for continuous monitoring.", "title": "" }, { "docid": "02effa562af44c07076b4ab853642945", "text": "Purpose – The purpose of this paper is to explore the impact of corporate social responsibility (CSR) engagement on employee motivation, job satisfaction and organizational identification as well as employee citizenship in voluntary community activities. Design/methodology/approach – Employees (n 1⁄4 224) of a major airline carrier participated in the study based on a 54-item questionnaire, containing four different sets of items related to volunteering, motivation, job satisfaction and organizational identification. The employee sample consisted of two sub-samples drawn randomly from the company pool of employees, differentiating between active participants in the company’s CSR programs (APs) and non participants (NAPs). Findings – Significant differences were found between APs and NAPs on organizational identification and motivation, but not for job satisfaction. In addition, positive significant correlations between organizational identification, volunteering, job satisfaction, and motivation were obtained. These results are interpreted within the broader context that ties social identity theory (SIT) and organizational identification increase. 
Practical implications – The paper contributes to the understanding of the interrelations between CSR and other organizational behavior constructs. Practitioners can learn from this study how to increase job satisfaction and organizational identification. Both are extremely important for an organization’s sustainability. Originality/value – This is a first attempt to investigate the relationship between CSR, organizational identification and motivation, comparing two groups from the same organization. The paper discusses the questions: ‘‘Are there potential gains at the intra-organizational level in terms of enhanced motivation and organizational attitudes on the part of employees?’’ and ‘‘Does volunteering or active participation in CSR yield greater benefits for involved employees in terms of their motivation, job satisfaction and identification?’’.", "title": "" }, { "docid": "5cf444f83a8b4b3f9482e18cea796348", "text": "This paper investigates L-shaped iris (LSI) embedded in substrate integrated waveguide (SIW) structures. A lumped element equivalent circuit is utilized to thoroughly discuss the iris behavior in a wide frequency band. This structure has one more degree of freedom and design parameter compared with the conventional iris structures; therefore, it enables design flexibility with enhanced performance. The LSI is utilized to realize a two-pole evanescent-mode filter with an enhanced stopband and a dual-band filter combining evanescent and ordinary modes excitation. Moreover, a prescribed filtering function is demonstrated using the lumped element analysis not only including evanescent-mode pole, but also close-in transmission zero. The proposed LSI promises to substitute the conventional posts in (SIW) filter design.", "title": "" }, { "docid": "09c19ae7eea50f269ee767ac6e67827b", "text": "In the last years Python has gained more and more traction in the scientific community. Projects like NumPy, SciPy, and Matplotlib have created a strong foundation for scientific computing in Python and machine learning packages like scikit-learn or packages for data analysis like Pandas are building on top of it. In this paper we present Wyrm ( https://github.com/bbci/wyrm ), an open source BCI toolbox in Python. Wyrm is applicable to a broad range of neuroscientific problems. It can be used as a toolbox for analysis and visualization of neurophysiological data and in real-time settings, like an online BCI application. In order to prevent software defects, Wyrm makes extensive use of unit testing. We will explain the key aspects of Wyrm’s software architecture and design decisions for its data structure, and demonstrate and validate the use of our toolbox by presenting our approach to the classification tasks of two different data sets from the BCI Competition III. Furthermore, we will give a brief analysis of the data sets using our toolbox, and demonstrate how we implemented an online experiment using Wyrm. With Wyrm we add the final piece to our ongoing effort to provide a complete, free and open source BCI system in Python.", "title": "" }, { "docid": "a3b919ee9780c92668c0963f23983f82", "text": "A terrified woman called police because her ex-boyfriend was breaking into her home. Upon arrival, police heard screams coming from the basement. They stopped halfway down the stairs and found the ex-boyfriend pointing a rifle at the floor. Officers observed a strange look on the subject’s face as he slowly raised the rifle in their direction. 
Both officers fired their weapons, killing the suspect. The rifle was not loaded.", "title": "" }, { "docid": "b90ec3edc349a98c41d1106b3c6628ba", "text": "Conventional speech recognition system is constructed by unfolding the spectral-temporal input matrices into one-way vectors and using these vectors to estimate the affine parameters of neural network according to the vector-based error backpropagation algorithm. System performance is constrained because the contextual correlations in frequency and time horizons are disregarded and the spectral and temporal factors are excluded. This paper proposes a spectral-temporal factorized neural network (STFNN) to tackle this weakness. The spectral-temporal structure is preserved and factorized in hidden layers through two ways of factor matrices which are trained by using the factorized error backpropagation. Affine transformation in standard neural network is generalized to the spectro-temporal factorization in STFNN. The structural features or patterns are extracted and forwarded towards the softmax outputs. A deep neural factorization is built by cascading a number of factorization layers with fully-connected layers for speech recognition. An orthogonal constraint is imposed in factor matrices for redundancy reduction. Experimental results show the merit of integrating the factorized features in deep feedforward and recurrent neural networks for speech recognition.", "title": "" }, { "docid": "2802d66dfa1956bf83649614b76d470e", "text": "Given a classification task, what is the best way to teach the resulting boundary to a human? While machine learning techniques can provide excellent methods for finding the boundary, including the selection of examples in an online setting, they tell us little about how we would teach a human the same task. We propose to investigate the problem of example selection and presentation in the context of teaching humans, and explore a variety of mechanisms in the interests of finding what may work best. In particular, we begin with the baseline of random presentation and then examine combinations of several mechanisms: the indication of an example’s relative difficulty, the use of the shaping heuristic from the cognitive science literature (moving from easier examples to harder ones), and a novel kernel-based “coverage model” of the subject’s mastery of the task. From our experiments on 54 human subjects learning and performing a pair of synthetic classification tasks via our teaching system, we found that we can achieve the greatest gains with a combination of shaping and the coverage model.", "title": "" }, { "docid": "26bc2aa9b371e183500e9c979c1fff65", "text": "Complex regional pain syndrome (CRPS) is clinically characterized by pain, abnormal regulation of blood flow and sweating, edema of skin and subcutaneous tissues, trophic changes of skin, appendages of skin and subcutaneous tissues, and active and passive movement disorders. It is classified into type I (previously reflex sympathetic dystrophy) and type II (previously causalgia). Based on multiple evidence from clinical observations, experimentation on humans, and experimentation on animals, the hypothesis has been put forward that CRPS is primarily a disease of the central nervous system. CRPS patients exhibit changes which occur in somatosensory systems processing noxious, tactile and thermal information, in sympathetic systems innervating skin (blood vessels, sweat glands), and in the somatomotor system. 
This indicates that the central representations of these systems are changed and data show that CRPS, in particular type I, is a systemic disease involving these neuronal systems. This way of looking at CRPS shifts the attention away from interpreting the syndrome conceptually in a narrow manner and to reduce it to one system or to one mechanism only, e. g., to sympathetic-afferent coupling. It will further our understanding why CRPS type I may develop after a trivial trauma, after a trauma being remote from the affected extremity exhibiting CRPS, and possibly after immobilization of an extremity. It will explain why, in CRPS patients with sympathetically maintained pain, a few temporary blocks of the sympathetic innervation of the affected extremity sometimes lead to long-lasting (even permanent) pain relief and to resolution of the other changes observed in CRPS. This changed view will bring about a diagnostic reclassification and redefinition of CRPS and will have bearings on the therapeutic approaches. Finally it will shift the focus of research efforts.", "title": "" }, { "docid": "4c39ff8119ddc75213251e7321c7e795", "text": "Building and debugging distributed software remains extremely difficult. We conjecture that by adopting a data-centric approach to system design and by employing declarative programming languages, a broad range of distributed software can be recast naturally in a data-parallel programming model. Our hope is that this model can significantly raise the level of abstraction for programmers, improving code simplicity, speed of development, ease of software evolution, and program correctness.\n This paper presents our experience with an initial large-scale experiment in this direction. First, we used the Overlog language to implement a \"Big Data\" analytics stack that is API-compatible with Hadoop and HDFS and provides comparable performance. Second, we extended the system with complex distributed features not yet available in Hadoop, including high availability, scalability, and unique monitoring and debugging facilities. We present both quantitative and anecdotal results from our experience, providing some concrete evidence that both data-centric design and declarative languages can substantially simplify distributed systems programming.", "title": "" }, { "docid": "ccc70871f57f25da6141a7083bdf5174", "text": "This paper outlines and tests two agency models of dividends. According to the “outcome” model, dividends are the result of effective pressure by minority shareholders to force corporate insiders to disgorge cash. According to the “substitute” model, insiders interested in issuing equity in the future choose to pay dividends to establish a reputation for decent treatment of minority shareholders. The first model predicts that stronger minority shareholder rights should be associated with higher dividend payouts; the second model predicts the opposite. Tests on a cross-section of 4,000 companies from 33 countries with different levels of minority shareholder rights support the outcome agency model of dividends. The authors are from Harvard University, Harvard University, Harvard University and University of Chicago, respectively. They are grateful to Alexander Aganin for excellent research assistance, and to Lucian Bebchuk, Mihir Desai, Edward Glaeser, Denis Gromb, Oliver Hart, James Hines, Kose John, James Poterba, Roberta Romano, Raghu Rajan, Lemma Senbet, René Stulz, Daniel Wolfenzohn, Luigi Zingales, and two anonymous referees for helpful comments. 
The so-called dividend puzzle (Black 1976) has preoccupied the attention of financial economists at least since Modigliani and Miller’s (1958, 1961) seminal work. This work established that, in a frictionless world, when the investment policy of a firm is held constant, its dividend payout policy has no consequences for shareholder wealth. Higher dividend payouts lead to lower retained earnings and capital gains, and vice versa, leaving total wealth of the shareholders unchanged. Contrary to this prediction, however, corporations follow extremely deliberate dividend payout strategies (Lintner (1956)). This evidence raises a puzzle: how do firms choose their dividend policies? In the United States and other countries, the puzzle is even deeper since many shareholders are taxed more heavily on their dividend receipts than on capital gains. The actual magnitude of this tax burden is debated (see Poterba and Summers (1985) and Allen and Michaely (1997)), but taxes generally make it even harder to explain dividend policies of firms. Economists have proposed a number of explanations of the dividend puzzle. Of these, particularly popular is the idea that firms can signal future profitability by paying dividends (Bhattacharya (1979), John and Williams (1985), Miller and Rock (1985), Ambarish, John, and Williams (1987)). Empirically, this theory had considerable initial success, since firms that initiate (or raise) dividends experience share price increases, and the converse is true for firms that eliminate (or cut) dividends (Aharony and Swary (1980), Asquith and Mullins (1983)). Recent results are more mixed, since current dividend changes do not help predict firms’ future earnings growth (DeAngelo, DeAngelo, and Skinner (1996) and Benartzi, Michaely, and Thaler (1997)). Another idea, which has received only limited attention until recently (e.g., Easterbrook (1984), Jensen (1986), Fluck (1998a, 1998b), Myers (1998), Gomes (1998), Zwiebel (1996)), is that dividend policies address agency problems between corporate insiders and outside shareholders. According to these theories, unless profits are paid out to shareholders, they may be diverted by the insiders for personal use or committed to unprofitable projects that provide private benefits for the insiders. As a consequence, outside shareholders have a preference for dividends over retained earnings. Theories differ on how outside shareholders actually get firms to disgorge cash. The key point, however, is that failure to disgorge cash leads to its diversion or waste, which is detrimental to outside shareholders’ interest. The agency approach moves away from the assumptions of the Modigliani-Miller theorem by recognizing two points. First, the investment policy of the firm cannot be taken as independent of its dividend policy, and, in particular, paying out dividends may reduce the inefficiency of marginal investments. Second, and more subtly, the allocation of all the profits of the firm to shareholders on a pro-rata basis cannot be taken for granted, and in particular the insiders may get preferential treatment through asset diversion, transfer prices and theft, even holding the investment policy constant. In so far as dividends are paid on a pro-rata basis, they benefit outside shareholders relative to the alternative of expropriation of retained earnings.
In this paper, we attempt to identify some of the basic elements of the agency approach to dividends, to understand its key implications, and to evaluate them on a cross-section of over 4,000 firms from 33 countries around the world. The reason for looking around the world is that the severity of agency problems to which minority shareholders are exposed differs greatly across countries, in part because legal protection of these shareholders vary (La Porta et al. (1997, 1998)). Empirically, we find that dividend policies vary across legal regimes in ways consistent with a particular version of the agency theory of dividends. Specifically, firms in common law countries, where investor protection is typically better, make higher dividend payouts than firms in civil law countries do. Moreover, in common but not civil law countries, high growth firms make lower dividend payouts than low growth firms. These results support the version of the agency theory in which investors in good legal protection countries use their legal powers to extract dividends from firms, especially when reinvestment opportunities are poor. Section I of the paper summarizes some of the theoretical arguments. Section II describes the data. Section III presents our empirical findings. Section IV concludes. I. Theoretical Issues. A. Agency Problems and Legal Regimes Conflicts of interest between corporate insiders, such as managers and controlling shareholders, on the one hand, and outside investors, such as minority shareholders, on the other hand, are central to the analysis of the modern corporation (Berle and Means (1932), Jensen and Meckling (1976)). The insiders who control corporate assets can use these assets for a range of purposes that are detrimental to the interests of the outside investors. Most simply, they can divert corporate assets to themselves, through outright theft, dilution of outside investors through share issues to the insiders, excessive salaries, asset sales to themselves or other corporations they control at favorable prices, or transfer pricing with other entities they control (see Shleifer and Vishny (1997) for a discussion). Alternatively, insiders can use corporate assets to pursue investment strategies that yield them personal benefits of control, such as growth or diversification, without benefitting outside investors (e.g., Baumol (1959), Jensen (1986)). What is meant by insiders varies from country to country. In the United States, U.K., Canada, and Australia, where ownership in large corporations is relatively dispersed, most large corporations are to a significant extent controlled by their managers. In most other countries, large firms typically have shareholders that own a significant fraction of equity, such as the founding families (La Porta, Lopez-de-Silanes, and Shleifer (1999)). The controlling shareholders can effectively determine the decisions of the managers (indeed, managers typically come from the controlling family), and hence the problem of managerial control per se is not as severe as it is in the rich common law countries. On the other hand, the controlling shareholders can implement policies that benefit themselves at the expense of minority shareholders. Regardless of the identity of the insiders, the victims of insider control are minority shareholders. It is these minority shareholders that would typically have a taste for dividends. One of the principal remedies to agency problems is the law.
Corporate and other law gives outside investors, including shareholders, certain powers to protect their investment against expropriation by insiders. These powers in the case of shareholders range from the right to receive the same per share dividends as the insiders, to the right to vote on important corporate matters, including the election of directors, to the right to sue the company for damages. The very fact that this legal protection exists probably explains why becoming a minority shareholder is a viable investment strategy, as opposed to just being an outright giveaway of money to strangers who are under few if any obligations to give it back. As pointed out by La Porta et al. (1998), the extent of legal protection of outside investors differs enormously across countries. Legal protection consists of both the content of the laws and the quality of their enforcement. Some countries, including most notably the wealthy common law countries such as the U.S. and the U.K., provide effective protection of minority shareholders so that the outright expropriation of corporate assets by the insiders is rare. Agency problems manifest themselves primarily through non-value-maximizing investment choices. In many other countries, the condition of outside investors is a good deal more precarious, but even there some protection does exist. La Porta et al. (1998) show in particular that common law countries appear to have the best legal protection of minority shareholders, whereas civil law countries, and most conspicuously the French civil law countries, have the weakest protection. The quality of investor protection, viewed as a proxy for lower agency costs, has been shown to matter for a number of important issues in corporate finance. For example, corporate ownership is more concentrated in countries with inferior shareholder protection (La Porta et al. (1998), La Porta, Lopez-de-Silanes, and Shleifer (1999)). The valuation and breadth of cap", "title": "" }, { "docid": "a346607a5e2e6c48e07e3e34a2ec7b0d", "text": "The development and professionalization of a video game requires tools for analyzing the practice of the players and teams, their tactics and strategies. These games are very popular and by nature numerical, they provide many tracks that we analyzed in terms of team play. We studied Defense of the Ancients (DotA), a Multiplayer Online Battle Arena (MOBA), where two teams battle in a game very similar to rugby or American football. Through topological measures – area of polygon described by the players, inertia, diameter, distance to the base – that are independent of the exact nature of the game, we show that the outcome of the match can be relevantly predicted. Mining e-sport’s tracks is opening interest in further application of these tools for analyzing real time sport. © 2014. Published by Elsevier B.V. Selection and/or peer review under responsibility of American Applied Science Research Institute", "title": "" }, { "docid": "616b6db46d3a01730c3ea468b0a03fc5", "text": "We demonstrate the surprising strength of unimodal baselines in multimodal domains, and make concrete recommendations for best practices in future research. Where existing work often compares against random or majority class baselines, we argue that unimodal approaches better capture and reflect dataset biases and therefore provide an important comparison when assessing the performance of multimodal techniques.
We present unimodal ablations on three recent datasets in visual navigation and QA, seeing an up to 29% absolute gain in performance over published baselines.", "title": "" }, { "docid": "119c20c537f833731965e0d8aeba0964", "text": "The literature on Inverse Reinforcement Learning (IRL) typically assumes that humans take actions in order to minimize the expected value of a cost function, i.e., that humans are risk neutral. Yet, in practice, humans are often far from being risk neutral. To fill this gap, the objective of this paper is to devise a framework for risk-sensitive IRL in order to explicitly account for a human’s risk sensitivity. To this end, we propose a flexible class of models based on coherent risk measures, which allow us to capture an entire spectrum of risk preferences from risk-neutral to worst-case. We propose efficient non-parametric algorithms based on linear programming and semi-parametric algorithms based on maximum likelihood for inferring a human’s underlying risk measure and cost function for a rich class of static and dynamic decision-making settings. The resulting approach is demonstrated on a simulated driving game with ten human participants. Our method is able to infer and mimic a wide range of qualitatively different driving styles from highly risk-averse to risk-neutral in a data-efficient manner. Moreover, comparisons of the Risk-Sensitive (RS) IRL approach with a risk-neutral model show that the RS-IRL framework more accurately captures observed participant behavior both qualitatively and quantitatively, especially in scenarios where catastrophic outcomes such as collisions can occur.", "title": "" }, { "docid": "bb815929889d93e19c6581c3f9a0b491", "text": "This paper presents an HMM-MLP hybrid system to recognize complex date images written on Brazilian bank cheques. The system first segments implicitly a date image into sub-fields through the recognition process based on an HMM-based approach. Afterwards, the three obligatory date sub-fields are processed by the system (day, month and year). A neural approach has been adopted to work with strings of digits and a Markovian strategy to recognize and verify words. We also introduce the concept of meta-classes of digits, which is used to reduce the lexicon size of the day and year and improve the precision of their segmentation and recognition. Experiments show interesting results on date recognition.", "title": "" }, { "docid": "2f2e5d62475918dc9cfd54522f480a11", "text": "In this paper, we propose a very simple and elegant patch-based, machine learning technique for image denoising using the higher order singular value decomposition (HOSVD). The technique simply groups together similar patches from a noisy image (with similarity defined by a statistically motivated criterion) into a 3D stack, computes the HOSVD coefficients of this stack, manipulates these coefficients by hard thresholding, and inverts the HOSVD transform to produce the final filtered image. Our technique chooses all required parameters in a principled way, relating them to the noise model. We also discuss our motivation for adopting the HOSVD as an appropriate transform for image denoising. We experimentally demonstrate the excellent performance of the technique on grayscale as well as color images. On color images, our method produces state-of-the-art results, outperforming other color image denoising algorithms at moderately high noise levels. 
A criterion for optimal patch-size selection and noise variance estimation from the residual images (after denoising) is also presented.", "title": "" }, { "docid": "b84d8b711738bbd889a3a88ba82f45c0", "text": "Transmission over wireless channel is challenging. As such, different application required different signal processing approach of radio system. So, a highly reconfigurable radio system is on great demand as the traditional fixed and embedded radio system are not viable to cater the needs for frequently change requirements of wireless communication. A software defined radio or better known as an SDR, is a software-based radio platform that offers flexibility to deliver the highly reconfigurable system requirements. This approach allows a different type of communication system requirements such as standard, protocol, or signal processing method, to be deployed by using the same set of hardware and software such as USRP and GNU Radio respectively. For researchers, this approach has opened the door to extend their studies in simulation domain into experimental domain. However, the realization of SDR concept is inherently limited by the analog components of the hardware being used. Despite that, the implementation of SDR is still new yet progressing, thus, this paper intends to provide an insight about its viability as a high re-configurable platform for communication system. This paper presents the SDR-based transceiver of common digital modulation system by means of GNU Radio and USRP.", "title": "" }, { "docid": "60a655d6b6d79f55151e871d2f0d4d34", "text": "The clinical characteristics of drug hypersensitivity reactions are very heterogeneous as drugs can actually elicit all types of immune reactions. The majority of allergic reactions involve either drug-specific IgE or T cells. Their stimulation leads to quite distinct immune responses, which are classified according to Gell and Coombs. Here, an extension of this subclassification, which considers the distinct T-cell functions and immunopathologies, is presented. These subclassifications are clinically useful, as they require different treatment and diagnostic steps. Copyright © 2007 S. Karger AG, Basel", "title": "" }, { "docid": "d80d52806cbbdd6148e3db094eabeed7", "text": "We decided to test a surprisingly simple hypothesis; namely, that the relationship between an image of a scene and the chromaticity of scene illumination could be learned by a neural network. The thought was that if this relationship could be extracted by a neural network, then the trained network would be able to determine a scene's illumination from its image, which would then allow correction of the image colors to those relative to a standard illuminant, thereby providing color constancy. Using a database of surface reflectances and illuminants, along with the spectral sensitivity functions of our camera, we generated thousands of images of randomly selected illuminants lighting `scenes' of 1 to 60 randomly selected reflectances. During the learning phase the network is provided the image data along with the chromaticity of its illuminant. After training, the network outputs (very quickly) the chromaticity of the illumination given only the image data. 
We obtained surprisingly good estimates of the ambient illumination lighting from the network even when applied to scenes in our lab that were completely unrelated to the training data.", "title": "" }, { "docid": "3e0f74c880165b5147864dfaa6a75c11", "text": "Traditional hollow metallic waveguide manufacturing techniques are readily capable of producing components with high-precision geometric tolerances, yet generally lack the ability to customize individual parts on demand or to deliver finished components with low lead times. This paper proposes a Rapid-Prototyping (RP) method for relatively low-loss millimeter-wave hollow waveguides produced using consumer-grade stereolithographic (SLA) Additive Manufacturing (AM) technology, in conjunction with an electroless metallization process optimized for acrylate-based photopolymer substrates. To demonstrate the capabilities of this particular AM process, waveguide prototypes are fabricated for the W- and D-bands. The measured insertion loss at W-band is between 0.12 dB/in to 0.25 dB/in, corresponding to a mean value of 0.16 dB/in. To our knowledge, this is the lowest insertion loss figure presented to date, when compared to other W-Band AM waveguide designs reported in the literature. Printed D-band waveguide prototypes exhibit a transducer loss of 0.26 dB/in to 1.01 dB/in, with a corresponding mean value of 0.65 dB/in, which is similar performance to a commercial metal waveguide.", "title": "" } ]
scidocsrr
d2c5e7e28483513056efb2c69fc35df9
SQL-IDS: a specification-based approach for SQL-injection detection
[ { "docid": "d1f771fd1b0f8e5d91bbf65bc19aeb54", "text": "Web-based systems are often a composition of infrastructure components, such as web servers and databases, and of applicationspecific code, such as HTML-embedded scripts and server-side applications. While the infrastructure components are usually developed by experienced programmers with solid security skills, the application-specific code is often developed under strict time constraints by programmers with little security training. As a result, vulnerable web-applications are deployed and made available to the Internet at large, creating easilyexploitable entry points for the compromise of entire networks. Web-based applications often rely on back-end database servers to manage application-specific persistent state. The data is usually extracted by performing queries that are assembled using input provided by the users of the applications. If user input is not sanitized correctly, it is possible to mount a variety of attacks that leverage web-based applications to compromise the security of back-end databases. Unfortunately, it is not always possible to identify these attacks using signature-based intrusion detection systems, because of the ad hoc nature of many web-based applications. Signatures are rarely written for this class of applications due to the substantial investment of time and expertise this would require. We have developed an anomaly-based system that learns the profiles of the normal database access performed by web-based applications using a number of different models. These models allow for the detection of unknown attacks with reduced false positives and limited overhead. In addition, our solution represents an improvement with respect to previous approaches because it reduces the possibility of executing SQL-based mimicry attacks.", "title": "" }, { "docid": "5025766e66589289ccc31e60ca363842", "text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.", "title": "" } ]
[ { "docid": "e58036f93195603cb7dc7265b9adeb25", "text": "Pseudomonas aeruginosa thrives in many aqueous environments and is an opportunistic pathogen that can cause both acute and chronic infections. Environmental conditions and host defenses cause differing stresses on the bacteria, and to survive in vastly different environments, P. aeruginosa must be able to adapt to its surroundings. One strategy for bacterial adaptation is to self-encapsulate with matrix material, primarily composed of secreted extracellular polysaccharides. P. aeruginosa has the genetic capacity to produce at least three secreted polysaccharides; alginate, Psl, and Pel. These polysaccharides differ in chemical structure and in their biosynthetic mechanisms. Since alginate is often associated with chronic pulmonary infections, its biosynthetic pathway is the best characterized. However, alginate is only produced by a subset of P. aeruginosa strains. Most environmental and other clinical isolates secrete either Pel or Psl. Little information is available on the biosynthesis of these polysaccharides. Here, we review the literature on the alginate biosynthetic pathway, with emphasis on recent findings describing the structure of alginate biosynthetic proteins. This information combined with the characterization of the domain architecture of proteins encoded on the Psl and Pel operons allowed us to make predictive models for the biosynthesis of these two polysaccharides. The results indicate that alginate and Pel share certain features, including some biosynthetic proteins with structurally or functionally similar properties. In contrast, Psl biosynthesis resembles the EPS/CPS capsular biosynthesis pathway of Escherichia coli, where the Psl pentameric subunits are assembled in association with an isoprenoid lipid carrier. These models and the environmental cues that cause the cells to produce predominantly one polysaccharide over the others are subjects of current investigation.", "title": "" }, { "docid": "188ab32548b91fd1bf1edf34ff3d39d9", "text": "With the marvelous development of wireless techniques and ubiquitous deployment of wireless systems indoors, myriad indoor location-based services (ILBSs) have permeated into numerous aspects of modern life. The most fundamental functionality is to pinpoint the location of the target via wireless devices. According to how wireless devices interact with the target, wireless indoor localization schemes roughly fall into two categories: device based and device free. In device-based localization, a wireless device (e.g., a smartphone) is attached to the target and computes its location through cooperation with other deployed wireless devices. In device-free localization, the target carries no wireless devices, while the wireless infrastructure deployed in the environment determines the target’s location by analyzing its impact on wireless signals.\n This article is intended to offer a comprehensive state-of-the-art survey on wireless indoor localization from the device perspective. In this survey, we review the recent advances in both modes by elaborating on the underlying wireless modalities, basic localization principles, and data fusion techniques, with special emphasis on emerging trends in (1) leveraging smartphones to integrate wireless and sensor capabilities and extend to the social context for device-based localization, and (2) extracting specific wireless features to trigger novel human-centric device-free localization. 
We comprehensively compare each scheme in terms of accuracy, cost, scalability, and energy efficiency. Furthermore, we take a first look at intrinsic technical challenges in both categories and identify several open research issues associated with these new challenges.", "title": "" }, { "docid": "bbd64fe2f05e53ca14ad1623fe51cd1c", "text": "Virtual assistants are the cutting edge of end user interaction, thanks to endless set of capabilities across multiple services. The natural language techniques thus need to be evolved to match the level of power and sophistication that users expect from virtual assistants. In this report we investigate an existing deep learning model for semantic parsing, and we apply it to the problem of converting natural language to trigger-action programs for the Almond virtual assistant. We implement a one layer seq2seq model with attention layer, and experiment with grammar constraints and different RNN cells. We take advantage of its existing dataset and we experiment with different ways to extend the training set. Our parser shows mixed results on the different Almond test sets, performing better than the state of the art on synthetic benchmarks by about 10% but poorer on realistic user data by about 15%. Furthermore, our parser is shown to be extensible to generalization, as well as or better than the current system employed by Almond.", "title": "" }, { "docid": "38935c773fb3163a1841fcec62b3e15a", "text": "We investigate how neural networks can learn and process languages with hierarchical, compositional semantics. To this end, we define the artificial task of processing nested arithmetic expressions, and study whether different types of neural networks can learn to compute their meaning. We find that recursive neural networks can implement a generalising solution to this problem, and we visualise this solution by breaking it up in three steps: project, sum and squash. As a next step, we investigate recurrent neural networks, and show that a gated recurrent unit, that processes its input incrementally, also performs very well on this task: the network learns to predict the outcome of the arithmetic expressions with high accuracy, although performance deteriorates somewhat with increasing length. To develop an understanding of what the recurrent network encodes, visualisation techniques alone do not suffice. Therefore, we develop an approach where we formulate and test multiple hypotheses on the information encoded and processed by the network. For each hypothesis, we derive predictions about features of the hidden state representations at each time step, and train ‘diagnostic classifiers’ to test those predictions. Our results indicate that the networks follow a strategy similar to our hypothesised ‘cumulative strategy’, which explains the high accuracy of the network on novel expressions, the generalisation to longer expressions than seen in training, and the mild deterioration with increasing length. This is turn shows that diagnostic classifiers can be a useful technique for opening up the black box of neural networks. 
We argue that diagnostic classification, unlike most visualisation techniques, does scale up from small networks in a toy domain, to larger and deeper recurrent networks dealing with real-life data, and may therefore contribute to a better understanding of the internal dynamics of current state-of-the-art models in natural language processing.", "title": "" }, { "docid": "bd8b0a2b060594d8513f43fbfe488443", "text": "Part 1 of the paper presents the detection and sizing capability based on image display of sectorial scan. Examples are given for different types of weld defects: toe cracks, internal porosity, side-wall lack of fusion, underbead crack, inner-surface breaking cracks, slag inclusions, incomplete root penetration and internal cracks. Based on combination of S-scan and B-scan plotted into 3-D isometric part, the defect features could be reconstructed and measured into a draft package. Comparison between plotted data and actual defect sizes are also presented.", "title": "" }, { "docid": "a0a73cc2b884828eb97ff8045bfe50a6", "text": "A variety of antennas have been engineered with metamaterials (MTMs) and metamaterial-inspired constructs to improve their performance characteristics. Examples include electrically small, near-field resonant parasitic (NFRP) antennas that require no matching network and have high radiation efficiencies. Experimental verification of their predicted behaviors has been obtained. Recent developments with this NFRP electrically small paradigm will be reviewed. They include considerations of increased bandwidths, as well as multiband and multifunctional extensions.", "title": "" }, { "docid": "64a345ae00db3b84fb254725bf14edb7", "text": "The research interest in unmanned aerial vehicles (UAV) has grown rapidly over the past decade. UAV applications range from purely scientific over civil to military. Technical advances in sensor and signal processing technologies enable the design of light weight and economic airborne platforms. This paper presents a complete mechatronic design process of a quadrotor UAV, including mechanical design, modeling of quadrotor and actuator dynamics and attitude stabilization control. Robust attitude estimation is achieved by fusion of low-cost MEMS accelerometer and gyroscope signals with a Kalman filter. Experiments with a gimbal mounted quadrotor testbed allow a quantitative analysis and comparision of the PID and Integral-Backstepping (IB) controller design for attitude stabilization with respect to reference signal tracking, disturbance rejection and robustness.", "title": "" }, { "docid": "6097315ac2e4475e8afd8919d390babf", "text": "This paper presents an origami-inspired technique which allows the application of 2-D fabrication methods to build 3-D robotic systems. The ability to design robots as origami structures introduces a fast and low-cost fabrication method to modern, real-world robotic applications. We employ laser-machined origami patterns to build a new class of robotic systems for mobility and manipulation. Origami robots use only a flat sheet as the base structure for building complicated bodies. An arbitrarily complex folding pattern can be used to yield an array of functionalities, in the form of actuated hinges or active spring elements. For actuation, we use compact NiTi coil actuators placed on the body to move parts of the structure on-demand. 
We demonstrate, as a proof-of-concept case study, the end-to-end fabrication and assembly of a simple mobile robot that can undergo worm-like peristaltic locomotion.", "title": "" }, { "docid": "dc71729ebd3c2a66c73b16685c8d12af", "text": "A list of related materials, with annotations to guide further exploration of the article's ideas and applications 11 Further Reading A company's bid to rally an industry ecosystem around a new competitive view is an uncertain gambit. But the right strategic approaches and the availability of modern digital infrastructures improve the odds for success.", "title": "" }, { "docid": "6384a691d3b50e252ab76a61e28f012e", "text": "We study the algorithmics of information structure design --- a.k.a. persuasion or signaling --- in a fundamental special case introduced by Arieli and Babichenko: multiple agents, binary actions, and no inter-agent externalities. Unlike prior work on this model, we allow many states of nature. We assume that the principal's objective is a monotone set function, and study the problem both in the public signal and private signal models, drawing a sharp contrast between the two in terms of both efficacy and computational complexity.\n When private signals are allowed, our results are largely positive and quite general. First, we use linear programming duality and the equivalence of separation and optimization to show polynomial-time equivalence between (exactly) optimal signaling and the problem of maximizing the objective function plus an additive function. This yields an efficient implementation of the optimal scheme when the objective is supermodular or anonymous. Second, we exhibit a (1-1/e)-approximation of the optimal private signaling scheme, modulo an additive loss of ε, when the objective function is submodular. These two results simplify, unify, and generalize results of [Arieli and Babichenko, 2016] and [Babichenko and Barman, 2016], extending them from a binary state of nature to many states (modulo the additive loss in the latter result). Third, we consider the binary-state case with a submodular objective, and simplify and slightly strengthen the result of [Babichenko and Barman, 2016] to obtain a (1-1/e)-approximation via a scheme which (i) signals independently to each receiver and (ii) is \"oblivious\" in that it does not depend on the objective function so long as it is monotone submodular.\n When only a public signal is allowed, our results are negative. First, we show that it is NP-hard to approximate the optimal public scheme, within any constant factor, even when the objective is additive. Second, we show that the optimal private scheme can outperform the optimal public scheme, in terms of maximizing the sender's objective, by a polynomial factor.", "title": "" }, { "docid": "104c9ef558234250d56ef941f09d6a7c", "text": "The first of these questions is in the province of sensory physiology, and is the only one for which appreciable understanding has been achieved. This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory. With regard to the second question, two alternative positions have been maintained. 
The first suggests that storage of sensory information is in the form of coded representations or images, with some sort of one-to-one mapping between the sensory stimulus", "title": "" }, { "docid": "ca94b1bb1f4102ed6b4506441b2431fc", "text": "It is often a difficult task to accurately segment images with intensity inhomogeneity, because most of representative algorithms are region-based that depend on intensity homogeneity of the interested object. In this paper, we present a novel level set method for image segmentation in the presence of intensity inhomogeneity. The inhomogeneous objects are modeled as Gaussian distributions of different means and variances in which a sliding window is used to map the original image into another domain, where the intensity distribution of each object is still Gaussian but better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying a bias field with the original signal within the window. A maximum likelihood energy functional is then defined on the whole image region, which combines the bias field, the level set function, and the piecewise constant function approximating the true image signal. The proposed level set method can be directly applied to simultaneous segmentation and bias correction for 3 and 7T magnetic resonance images. Extensive evaluation on synthetic and real-images demonstrate the superiority of the proposed method over other representative algorithms.", "title": "" }, { "docid": "322f6321bc34750344064d474206fddb", "text": "BACKGROUND AND PURPOSE\nThis study was undertaken to elucidate whether and how age influences stroke outcome.\n\n\nMETHODS\nThis prospective and community-based study comprised 515 consecutive acute stroke patients. Computed tomographic scan was performed in 79% of patients. Activities of daily living (ADL) and neurological status were assessed weekly during hospital stay using the Barthel Index (BI) and the Scandinavian Stroke Scale (SSS), respectively. Information regarding social condition and comorbidity before stroke was also registered. A multiple regression model was used to analyze the independent influence of age on stroke outcome.\n\n\nRESULTS\nAge was not related to the type of stroke lesion or infarct size. However, age independently influenced initial BI (-4 points per 10 years, P < .01), initial SSS (-2 points per 10 years, P = .01), and discharge BI (-3 points per 10 years, P < .01). No independent influence of age was found regarding mortality within 3 months, discharge SSS, length of hospital stay, and discharge placement. ADL improvement was influenced independently by age (-3 points per 10 years, P < .01), whereas age had no influence on neurological improvement or on speed of recovery.\n\n\nCONCLUSIONS\nAge independently influences stroke outcome selectively in ADL-related aspects (BI) but not in neurological aspects (SSS), suggesting a poorer compensatory ability in elderly stroke patients. Therefore, rehabilitation of elderly stroke patients should be focused more on ADL and compensation rather than on the recovery of neurological status, and age itself should not be a selection criterion for rehabilitation.", "title": "" }, { "docid": "7448b45dd5809618c3b6bb667cb1004f", "text": "We first provide criteria for assessing informed consent online. Then we examine how cookie technology and Web browser designs have responded to concerns about informed consent. 
Specifically, we document relevant design changes in Netscape Navigator and Internet Explorer over a 5-year period, starting in 1995. Our retrospective analyses leads us to conclude that while cookie technology has improved over time regarding informed consent, some startling problems remain. We specify six of these problems and offer design remedies. This work fits within the emerging field of Value-Sensitive Design.", "title": "" }, { "docid": "4e8c39eaa7444158a79573481b80a77f", "text": "Image patch classification is an important task in many different medical imaging applications. In this work, we have designed a customized Convolutional Neural Networks (CNN) with shallow convolution layer to classify lung image patches with interstitial lung disease (ILD). While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Our customized CNN framework can, on the other hand, automatically and efficiently learn the intrinsic image features from lung image patches that are most suitable for the classification purpose. The same architecture can be generalized to perform other medical image or texture classification tasks.", "title": "" }, { "docid": "5fd2d67291f7957eee20495c5baeb1ef", "text": "Many interesting real-world textures are inhomogeneous and/or anisotropic. An inhomogeneous texture is one where various visual properties exhibit significant changes across the texture’s spatial domain. Examples include perceptible changes in surface color, lighting, local texture pattern and/or its apparent scale, and weathering effects, which may vary abruptly, or in a continuous fashion. An anisotropic texture is one where the local patterns exhibit a preferred orientation, which also may vary across the spatial domain. While many example-based texture synthesis methods can be highly effective when synthesizing uniform (stationary) isotropic textures, synthesizing highly non-uniform textures, or ones with spatially varying orientation, is a considerably more challenging task, which so far has remained underexplored. In this paper, we propose a new method for automatic analysis and controlled synthesis of such textures. Given an input texture exemplar, our method generates a source guidance map comprising: (i) a scalar progression channel that attempts to capture the low frequency spatial changes in color, lighting, and local pattern combined, and (ii) a direction field that captures the local dominant orientation of the texture. Having augmented the texture exemplar with this guidance map, users can exercise better control over the synthesized result by providing easily specified target guidance maps, which are used to constrain the synthesis process.", "title": "" }, { "docid": "763372dc4ebc2cd972a5b851be014bba", "text": "Parametric piecewise-cubic functions are used throughout the computer graphics industry to represent curved shapes. For many applications, it would be useful to be able to reliably derive this representation from a closely spaced set of points that approximate the desired curve, such as the input from a digitizing tablet or a scanner. This paper presents a solution to the problem of automatically generating efficient piecewise parametric cubic polynomial approximations to shapes from sampled data. 
We have developed an algorithm that takes a set of sample points, plus optional endpoint and tangent vector specifications, and iteratively derives a single parametric cubic polynomial that lies close to the data points as defined by an error metric based on least-squares. Combining this algorithm with dynamic programming techniques to determine the knot placement gives good results over a range of shapes and applications.", "title": "" }, { "docid": "221541e0ef8cf6cd493843fd53257a62", "text": "Content-based shape retrieval techniques can facilitate 3D model resource reuse, 3D model modeling, object recognition, and 3D content classification. Recently more and more researchers have attempted to solve the problems of partial retrieval in the domain of computer graphics, vision, CAD, and multimedia. Unfortunately, in the literature, there is little comprehensive discussion on the state-of-the-art methods of partial shape retrieval. In this article we focus on reviewing the partial shape retrieval methods over the last decade, and help novices to grasp latest developments in this field. We first give the definition of partial retrieval and discuss its desirable capabilities. Secondly, we classify the existing methods on partial shape retrieval into three classes by several criteria, describe the main ideas and techniques for each class, and detailedly compare their advantages and limits. We also present several relevant 3D datasets and corresponding evaluation metrics, which are necessary for evaluating partial retrieval performance. Finally, we discuss possible research directions to address partial shape retrieval.", "title": "" }, { "docid": "bf14f996f9013351aca1e9935157c0e3", "text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. In this paper, we propose, Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.", "title": "" }, { "docid": "f37d9a57fd9100323c70876cf7a1d7ad", "text": "Neural networks encounter serious catastrophic forgetting when information is learned sequentially, which is unacceptable for both a model of human memory and practical engineering applications. 
In this study, we propose a novel biologically inspired dual-network memory model that can significantly reduce catastrophic forgetting. The proposed model consists of two distinct neural networks: hippocampal and neocortical networks. Information is first stored in the hippocampal network, and thereafter, it is transferred to the neocortical network. In the hippocampal network, chaotic behavior of neurons in the CA3 region of the hippocampus and neuronal turnover in the dentate gyrus region are introduced. Chaotic recall by CA3 enables retrieval of stored information in the hippocampal network. Thereafter, information retrieved from the hippocampal network is interleaved with previously stored information and consolidated by using pseudopatterns in the neocortical network. The computer simulation results show the effectiveness of the proposed dual-network memory model. © 2014 Elsevier B.V. All rights reserved.", "title": "" } ]
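The dual-network passage above consolidates old knowledge by interleaving new data with pseudopatterns generated from the previously trained network. The sketch below is only a generic illustration of that pseudopattern (pseudorehearsal) idea on a toy linear model, not the paper's hippocampal-neocortical architecture; the tasks, dimensions, and learning rate are invented for the example.

```python
# Minimal pseudorehearsal sketch on a linear "neocortical" model (assumed toy setup).
import numpy as np

rng = np.random.default_rng(0)

def train(W, X, Y, lr=0.1, epochs=200):
    # Plain batch gradient descent on squared error.
    for _ in range(epochs):
        W -= lr * (X.T @ (X @ W - Y)) / len(X)
    return W

d_in, d_out = 8, 3
W_true_A = rng.normal(size=(d_in, d_out))   # task A mapping
W_true_B = rng.normal(size=(d_in, d_out))   # task B mapping
X_A = rng.normal(size=(200, d_in)); Y_A = X_A @ W_true_A
X_B = rng.normal(size=(200, d_in)); Y_B = X_B @ W_true_B

# Learn task A first.
W = train(np.zeros((d_in, d_out)), X_A, Y_A)

# Naive sequential learning of task B -> catastrophic forgetting of A.
W_naive = train(W.copy(), X_B, Y_B)

# Pseudopatterns: random probe inputs labelled by the *old* network's outputs,
# interleaved with the new task's data before retraining.
X_pseudo = rng.normal(size=(200, d_in))
Y_pseudo = X_pseudo @ W
W_pseudo = train(W.copy(), np.vstack([X_B, X_pseudo]), np.vstack([Y_B, Y_pseudo]))

def err(W, X, Y):
    return float(np.mean((X @ W - Y) ** 2))

print("task A error, naive sequential   :", err(W_naive, X_A, Y_A))
print("task A error, with pseudopatterns:", err(W_pseudo, X_A, Y_A))
```

Running this shows a markedly lower task-A error when pseudopatterns are interleaved, which is the qualitative effect the passage describes.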
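The piecewise-cubic fitting passage earlier in this list reduces curve approximation to least-squares estimation of parametric cubics over sampled points. As a rough, hypothetical illustration of that inner step only (chord-length parameterisation plus a per-coordinate least-squares cubic, with no knot placement or segment splitting), one could write:

```python
# Fit a single parametric cubic x(t), y(t) to ordered sample points by least squares.
import numpy as np

def fit_parametric_cubic(points):
    pts = np.asarray(points, dtype=float)
    # Chord-length parameterisation mapped to [0, 1].
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = d / d[-1]
    coeff_x = np.polyfit(t, pts[:, 0], 3)   # cubic for the x coordinate
    coeff_y = np.polyfit(t, pts[:, 1], 3)   # cubic for the y coordinate
    return coeff_x, coeff_y, t

def max_error(points, coeff_x, coeff_y, t):
    pts = np.asarray(points, dtype=float)
    fitted = np.c_[np.polyval(coeff_x, t), np.polyval(coeff_y, t)]
    return float(np.max(np.linalg.norm(fitted - pts, axis=1)))

# Noisy samples from a quarter circle as test data.
rng = np.random.default_rng(1)
theta = np.linspace(0, np.pi / 2, 40)
samples = np.c_[np.cos(theta), np.sin(theta)] + 0.001 * rng.normal(size=(40, 2))
cx, cy, t = fit_parametric_cubic(samples)
print("max deviation from samples:", max_error(samples, cx, cy, t))
```

In the full method, the maximum deviation would drive the decision to split the data and fit additional segments; that outer loop is omitted here.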
scidocsrr
f649e6aff9c45d19a82cf43afa2a6cb6
Joint virtual machine and bandwidth allocation in software defined network (SDN) and cloud computing environments
[ { "docid": "7544daa81ddd9001772d48846e3097c3", "text": "In cloud computing, cloud providers can offer cloud consumers two provisioning plans for computing resources, namely reservation and on-demand plans. In general, cost of utilizing computing resources provisioned by reservation plan is cheaper than that provisioned by on-demand plan, since cloud consumer has to pay to provider in advance. With the reservation plan, the consumer can reduce the total resource provisioning cost. However, the best advance reservation of resources is difficult to be achieved due to uncertainty of consumer's future demand and providers' resource prices. To address this problem, an optimal cloud resource provisioning (OCRP) algorithm is proposed by formulating a stochastic programming model. The OCRP algorithm can provision computing resources for being used in multiple provisioning stages as well as a long-term plan, e.g., four stages in a quarter plan and twelve stages in a yearly plan. The demand and price uncertainty is considered in OCRP. In this paper, different approaches to obtain the solution of the OCRP algorithm are considered including deterministic equivalent formulation, sample-average approximation, and Benders decomposition. Numerical studies are extensively performed in which the results clearly show that with the OCRP algorithm, cloud consumer can successfully minimize total cost of resource provisioning in cloud computing environments.", "title": "" } ]
[ { "docid": "8cfa2086e1c73bae6945d1a19d52be26", "text": "We present a unified dynamics framework for real-time visual effects. Using particles connected by constraints as our fundamental building block allows us to treat contact and collisions in a unified manner, and we show how this representation is flexible enough to model gases, liquids, deformable solids, rigid bodies and cloth with two-way interactions. We address some common problems with traditional particle-based methods and describe a parallel constraint solver based on position-based dynamics that is efficient enough for real-time applications.", "title": "" }, { "docid": "5e7d5a86a007efd5d31e386c862fef5c", "text": "This systematic review examined the published scientific research on the psychosocial impact of cleft lip and palate (CLP) among children and adults. The primary objective of the review was to determine whether having CLP places an individual at greater risk of psychosocial problems. Studies that examined the psychosocial functioning of children and adults with repaired non-syndromal CLP were suitable for inclusion. The following sources were searched: Medline (January 1966-December 2003), CINAHL (January 1982-December 2003), Web of Science (January 1981-December 2003), PsycINFO (January 1887-December 2003), the reference section of relevant articles, and hand searches of relevant journals. There were 652 abstracts initially identified through database and other searches. On closer examination of these, only 117 appeared to meet the inclusion criteria. The full text of these papers was examined, with only 64 articles finally identified as suitable for inclusion in the review. Thirty of the 64 studies included a control group. The studies were longitudinal, cross-sectional, or retrospective in nature.Overall, the majority of children and adults with CLP do not appear to experience major psychosocial problems, although some specific problems may arise. For example, difficulties have been reported in relation to behavioural problems, satisfaction with facial appearance, depression, and anxiety. A few differences between cleft types have been found in relation to self-concept, satisfaction with facial appearance, depression, attachment, learning problems, and interpersonal relationships. With a few exceptions, the age of the individual with CLP does not appear to influence the occurrence or severity of psychosocial problems. However, the studies lack the uniformity and consistency required to adequately summarize the psychosocial problems resulting from CLP.", "title": "" }, { "docid": "6720ae7a531d24018bdd1d3d1c7eb28b", "text": "This study investigated the effects of mobile phone text-messaging method (predictive and multi-press) and experience (in texters and non-texters) on children’s textism use and understanding. It also examined popular claims that the use of text-message abbreviations, or textese spelling, is associated with poor literacy skills. A sample of 86 children aged 10 to 12 years read and wrote text messages in conventional English and in textese, and completed tests of spelling, reading, and non-word reading. Children took significantly longer, and made more errors, when reading messages written in textese than in conventional English. Further, they were no faster at writing messages in textese than in conventional English, regardless of texting method or experience. 
Predictive texters were faster at reading and writing messages than multi-press texters, and texting experience increased writing, but not reading, speed. General spelling and reading scores did not differ significantly with usual texting method. However, better literacy skills were associated with greater textese reading speed and accuracy. These findings add to the growing evidence for a positive relationship between texting proficiency and traditional literacy skills. Children’s text-messaging and literacy skills 3 The advent of mobile phones, and of text-messaging in particular, has changed the way that people communicate, and adolescents and children seem especially drawn to such technology. Australian surveys have revealed that 19% of 8to 11-year-olds and 76% of 12to 14-year-olds have their own mobile phone (Cupitt, 2008), and that 69% of mobile phone users aged 14 years and over use text-messaging (Australian Government, 2008), with 90% of children in Grades 7-12 sending a reported average of 11 texts per week (ABS, 2008). Text-messaging has also been the catalyst for a new writing style: textese. Described as a hybrid of spoken and written English (Plester & Wood, 2009), textese is a largely soundbased, or phonological, form of spelling that can reduce the time and cost of texting (Leung, 2007). Common abbreviations, or textisms, include letter and number homophones (c for see, 2 for to), contractions (txt for text), and non-conventional spellings (skool for school) (Plester, Wood, & Joshi, 2009; Thurlow, 2003). Estimates of the proportion of textisms that children use in their messages range from 21-47% (increasing with age) in naturalistic messages (Wood, Plester, & Bowyer, 2009), to 34% for messages elicited by a given scenario (Plester et al., 2009), to 50-58% for written messages that children ‘translated’ to and from textese (Plester, Wood, & Bell, 2008). One aim of the current study was to examine the efficiency of using textese for both the message writer and the reader, in order to understand the reasons behind (Australian) children’s use of textisms. The spread of textese has been attributed to texters’ desire to overcome the confines of the alphanumeric mobile phone keypad (Crystal, 2008). Since several letters are assigned to each number, the multi-press style of texting requires the somewhat laborious pressing of the same button one to four times to type each letter (Taylor & Vincent, 2005). The use of textese thus has obvious savings for multi-press texters, of both time and screen-space (as message character count cannot exceed 160). However, there is evidence, discussed below, that reading textese can be relatively slow and difficult for the message recipient, compared to Children’s text-messaging and literacy skills 4 reading conventional English. Since the use of textese is now widespread, it is important to examine the potential advantages and disadvantages that this form of writing may have for message senders and recipients, especially children, whose knowledge of conventional English spelling is still developing. To test the potential advantages of using textese for multi-press texters, Neville (2003) examined the speed and accuracy of textese versus conventional English in writing and reading text messages. British girls aged 11-16 years were dictated two short passages to type into a mobile phone: one using conventional English spelling, and the other “as if writing to a friend”. 
They also read two messages aloud from the mobile phone, one in conventional English, and the other in textese. The proportion of textisms produced is not reported, but no differences in textese use were observed between texters and non-texters. Writing time was significantly faster for textese than conventional English messages, with greater use of textisms significantly correlated with faster message typing times. However, participants were significantly faster at reading messages written in conventional English than in textese, regardless of their usual texting frequency. Kemp (2010) largely followed Neville’s (2003) design, but with 61 Australian undergraduates (mean age 22 years), all regular texters. These adults, too, were significantly faster at writing, but slower at reading, messages written in textese than in conventional English, regardless of their usual messaging frequency. Further, adults also made significantly more reading errors for messages written in textese than conventional English. These findings converge on the important conclusion that while the use of textisms makes writing more efficient for the message sender, it costs the receiver more time to read it. However, both Neville (2003) and Kemp (2010) examined only multi-press method texting, and not the predictive texting method now also available. Predictive texting requires only a single key-press per letter, and a dictionary-based system suggests one or more likely words Children’s text-messaging and literacy skills 5 based on the combinations entered (Taylor & Vincent, 2005). Textese may be used less by predictive texters than multi-press texters for two reasons. Firstly, predictive texting requires fewer key-presses than multi-press texting, which reduces the need to save time by taking linguistic short-cuts. Secondly, the dictionary-based predictive system makes it more difficult to type textisms that are not pre-programmed into the dictionary. Predictive texting is becoming increasingly popular, with recent studies reporting that 88% of Australian adults (Kemp, in press), 79% of Australian 13to 15-year-olds (De Jonge & Kemp, in press) and 55% of British 10to 12-year-olds (Plester et al., 2009) now use this method. Another aim of this study was thus to compare the reading and writing of textese and conventional English messages in children using their typical input method: predictive or multi-press texting, as well as in children who do not normally text. Finally, this study sought to investigate the popular assumption that exposure to unconventional word spellings might compromise children’s conventional literacy skills (e.g., Huang, 2008; Sutherland, 2002), with media articles revealing widespread disapproval of this communication style (Thurlow, 2006). In contrast, some authors have suggested that the use of textisms might actually improve children’s literacy skills (e.g., Crystal, 2008). Many textisms commonly used by children rely on the ability to distinguish, blend, and/or delete letter sounds (Plester et al., 2008, 2009). Practice at reading and creating textisms may therefore lead to improved phonological awareness (Crystal, 2008), which consistently predicts both reading and spelling prowess (e.g., Bradley & Bryant, 1983; Lundberg, Frost, & Petersen, 1988). Alternatively, children who use more textisms may do so because they have better phonological awareness, or poorer spellers may be drawn to using textisms to mask weak spelling ability (e.g., Sutherland, 2002). 
Thus, studying children’s textism use can provide further information on the links between the component skills that constitute both conventional and alternative, including textism-based, literacy. Children’s text-messaging and literacy skills 6 There is evidence for a positive link between the use of textisms and literacy skills in preteen children. Plester et al. (2008) asked 10to 12-year-old British children to translate messages from standard English to textese, and vice versa, with pen and paper. They found a significant positive correlation between textese use and verbal reasoning scores (Study 1) and spelling scores (Study 2). Plester et al. (2009) elicited text messages from a similar group of children by asking them to write messages in response to a given scenario. Again, textism use was significantly positively associated with word reading ability and phonological awareness scores (although not with spelling scores). Neville (2003) found that the number of textisms written, and the number read accurately, as well as the speed with which both conventional and textese messages were read and written, all correlated significantly with general spelling skill in 11to 16-year-old girls. The cross-sectional nature of these studies, and of the current study, means that causal relationships cannot be firmly established. However, Wood et al. (2009) report on a longitudinal study in which 8to 12-year-old children’s use of textese at the beginning of the school year predicted their skills in reading ability and phonological awareness at the end of the year, even after controlling for verbal IQ. These results provide the first support for the idea that textism use is driving the development of literacy skills, and thus that this use of technology can improve learning in the area of language and literacy. Taken together, these findings also provide important evidence against popular media claims that the use of textese is harming children’s traditional literacy skills. No similar research has yet been published with children outside the UK. The aim of the current study was thus to examine the speed and proficiency of textese use in Australian 10to 12-year-olds and, for the first time, to compare the r", "title": "" }, { "docid": "764d6f45cd9dc08963a0e4d21b23d470", "text": "Implementing and fleshing out a number of psychological and neuroscience theories of cognition, the LIDA conceptual model aims at being a cognitive “theory of everything.” With modules or processes for perception, working memory, episodic memories, “consciousness,” procedural memory, action selection, perceptual learning, episodic learning, deliberation, volition, and non-routine problem solving, the LIDA model is ideally suited to provide a working ontology that would allow for the discussion, design, and comparison of AGI systems. The LIDA architecture is based on the LIDA cognitive cycle, a sort of “cognitive atom.” The more elementary cognitive modules and processes play a role in each cognitive cycle. Higher-level processes are performed over multiple cycles. In addition to giving a quick overview of the LIDA conceptual model, and its underlying computational technology, we argue for the LIDA architecture’s role as a foundational architecture for an AGI. 
Finally, lessons For AGI researchers drawn from the model and its architecture are discussed.", "title": "" }, { "docid": "47e06f5c195d2e1ecb6199b99ef1ee2d", "text": "We study weakly-supervised video object grounding: given a video segment and a corresponding descriptive sentence, the goal is to localize objects that are mentioned from the sentence in the video. During training, no object bounding boxes are available, but the set of possible objects to be grounded is known beforehand. Existing approaches in the image domain use Multiple Instance Learning (MIL) to ground objects by enforcing matches between visual and semantic features. A naive extension of this approach to the video domain is to treat the entire segment as a bag of spatial object proposals. However, an object existing sparsely across multiple frames might not be detected completely since successfully spotting it from one single frame would trigger a satisfactory match. To this end, we propagate the weak supervisory signal from the segment level to frames that likely contain the target object. For frames that are unlikely to contain the target objects, we use an alternative penalty loss. We also leverage the interactions among objects as a textual guide for the grounding. We evaluate our model on the newlycollected benchmark YouCook2-BoundingBox and show improvements over competitive baselines.", "title": "" }, { "docid": "b1d534c6df789c45f636e69480517183", "text": "Virtual switches are a crucial component of SDN-based cloud systems, enabling the interconnection of virtual machines in a flexible and “software-defined” manner. This paper raises the alarm on the security implications of virtual switches. In particular, we show that virtual switches not only increase the attack surface of the cloud, but virtual switch vulnerabilities can also lead to attacks of much higher impact compared to traditional switches. We present a systematic security analysis and identify four design decisions which introduce vulnerabilities. Our findings motivate us to revisit existing threat models for SDN-based cloud setups, and introduce a new attacker model for SDN-based cloud systems using virtual switches. We demonstrate the practical relevance of our analysis using a case study with Open vSwitch and OpenStack. Employing a fuzzing methodology, we find several exploitable vulnerabilities in Open vSwitch. Using just one vulnerability we were able to create a worm that can compromise hundreds of servers in a matter of minutes. Our findings are applicable beyond virtual switches: NFV and high-performance fast path implementations face similar issues. This paper also studies various mitigation techniques and discusses how to redesign virtual switches for their integration. ∗Also with, Internet Network Architectures, TU Berlin. †Also with, Dept. of Computer Science, Aalborg University. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. 
SOSR’18, March 28-29, 2018, Los Angeles, CA, USA © 2018 Copyright held by the owner/author(s). Publication rights licensed to the Association for Computing Machinery. ACM ISBN .... . . $15.00 https://doi.org/...", "title": "" }, { "docid": "fc2a45aa3ec8e4d27b9fc1a86d24b86d", "text": "Information and Communication Technologies (ICT) rapidly migrate towards the Future Internet (FI) era, which is characterized, among others, by powerful and complex network infrastructures and innovative applications, services and content. An application area that attracts immense research interest is transportation. In particular, traffic congestions, emergencies and accidents reveal inefficiencies in transportation infrastructures, which can be overcome through the exploitation of ICT findings, in designing systems that are targeted at traffic / emergency management, namely Intelligent Transportation Systems (ITS). This paper considers the potential connection of vehicles to form vehicular networks that communicate with each other at an IP-based level, exchange information either directly or indirectly (e.g. through social networking applications and web communities) and contribute to a more efficient and green future world of transportation. In particular, the paper presents the basic research areas that are associated with the concept of Internet of Vehicles (IoV) and outlines the fundamental research challenges that arise there from.", "title": "" }, { "docid": "c62dfcc83ca24450ea1a7e12a17ac93e", "text": "Lymphedema and lipedema are chronic progressive disorders for which no causal therapy exists so far. Many general practitioners will rarely see these disorders with the consequence that diagnosis is often delayed. The pathophysiological basis is edematization of the tissues. Lymphedema involves an impairment of lymph drainage with resultant fluid build-up. Lipedema arises from an orthostatic predisposition to edema in pathologically increased subcutaneous tissue. Treatment includes complex physical decongestion by manual lymph drainage and absolutely uncompromising compression therapy whether it is by bandage in the intensive phase to reduce edema or with a flat knit compression stocking to maintain volume.", "title": "" }, { "docid": "17d927926f34efbdcb542c15fcf4e442", "text": "Automated Guided Vehicles (AGVs) are now becoming popular in automated materials handling systems, flexible manufacturing systems and even containers handling applications at seaports. In the past two decades, much research and many papers have been devoted to various aspects of the AGV technology and rapid progress has been witnessed. As one of the enabling technologies, scheduling and routing of AGVs have attracted considerable attention; many algorithms about scheduling and routing of AGVs have been proposed. However, most of the existing results are applicable to systems with small number of AGVs, offering low degree of concurrency. With drastically increased number of AGVs in recent applications (e.g. in the order of a hundred in a container terminal), efficient scheduling and routing algorithms are needed to resolve the increased contention of resources (e.g. path, loading and unloading buffers) among AGVs. Because they often employ regular route topologies, the new applications also demand innovative strategies to increase system performance. This survey paper first gives an account of the emergence of the problems of AGV scheduling and routing. 
It then differentiates them from several related problems, and surveys and classifies major existing algorithms for the problems. Noting the similarities with known problems in parallel and distributed systems, it suggests applying analogous ideas in routing and scheduling AGVs. It concludes by pointing out fertile areas for future study.", "title": "" }, { "docid": "4b5ac4095cb2695a1e5282e1afca80a4", "text": "Three experiments document that 14-month-old infants’ construal of objects (e.g., purple animals) is influenced by naming, that they can distinguish between the grammatical form noun and adjective, and that they treat this distinction as relevant to meaning. In each experiment, infants extended novel nouns (e.g., “This one is a blicket”) specifically to object categories (e.g., animal), and not to object properties (e.g., purple things). This robust noun–category link is related to grammatical form and not to surface differences in the presentation of novel words (Experiment 3). Infants’ extensions of novel adjectives (e.g., “This one is blickish”) were more fragile: They extended adjectives specifically to object properties when the property was color (Experiment 1), but revealed a less precise mapping when the property was texture (Experiment 2). These results reveal that by 14 months, infants distinguish between grammatical forms and utilize these distinctions in determining the meaning of novel words.", "title": "" }, { "docid": "146387ae8853279d21f0b4c2f9b3e400", "text": "We address a class of manipulation problems where the robot perceives the scene with a depth sensor and can move its end effector in a space with six degrees of freedom – 3D position and orientation. Our approach is to formulate the problem as a Markov decision process (MDP) with abstract yet generally applicable state and action representations. Finding a good solution to the MDP requires adding constraints on the allowed actions. We develop a specific set of constraints called hierarchical SE(3) sampling (HSE3S) which causes the robot to learn a sequence of gazes to focus attention on the task-relevant parts of the scene. We demonstrate the effectiveness of our approach on three challenging pick-place tasks (with novel objects in clutter and nontrivial places) both in simulation and on a real robot, even though all training is done in simulation.", "title": "" }, { "docid": "3c631c249254a24d9343a971a05af74e", "text": "The selection of the new requirements which should be included in the development of the release of a software product is an important issue for software companies. This problem is known in the literature as the Next Release Problem (NRP). It is an NP-hard problem which simultaneously addresses two apparently contradictory objectives: the total cost of including the selected requirements in the next release of the software package, and the overall satisfaction of a set of customers who have different opinions about the priorities which should be given to the requirements, and also have different levels of importance within the company. Moreover, in the case of managing real instances of the problem, the proposed solutions have to satisfy certain interaction constraints which arise among some requirements. In this paper, the NRP is formulated as a multiobjective optimization problem with two objectives (cost and satisfaction) and three constraints (types of interactions). A multiobjective swarm intelligence metaheuristic is proposed to solve two real instances generated from data provided by experts. 
Analysis of the results showed that the proposed algorithm can efficiently generate high quality solutions. These were evaluated by comparing them with different proposals (in terms of multiobjective metrics). The results generated by the present approach surpass those generated in other relevant work in the literature (e.g. our technique can obtain a HV of over 60% for the most complex dataset managed, while the other approaches published cannot obtain an HV of more than 40% for the same dataset). 2015 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "ce2a19f9f3ee13978845f1ede238e5b2", "text": "Optimised allocation of system architectures is a well researched area as it can greatly reduce the developmental cost of systems and increase performance and reliability in their respective applications. In conjunction with the recent shift from federated to integrated architectures in automotive, and the increasing complexity of computer systems, both in terms of software and hardware, the applications of design space exploration and optimised allocation of system architectures are of great interest. This thesis proposes a method to derive architectures and their allocations for systems with real-time constraints. The method implements integer linear programming to solve for an optimised allocation of system architectures according to a set of linear constraints while taking resource requirements, communication dependencies, and manual design choices into account. Additionally, this thesis describes and evaluates an industrial use case using the method wherein the timing characteristics of a system were evaluated, and, the method applied to simultaneously derive a system architecture, and, an optimised allocation of the system architecture. This thesis presents evidence and validations that suggest the viability of the method and its use case in an industrial setting. The work in this thesis sets precedence for future research and development, as well as future applications of the method in both industry and academia.", "title": "" }, { "docid": "7998670588bee1965fd5a18be9ccb0d9", "text": "In this letter, a hybrid visual servoing with a hierarchical task-composition control framework is described for aerial manipulation, i.e., for the control of an aerial vehicle endowed with a robot arm. The proposed approach suitably combines into a unique hybrid-control framework the main benefits of both image-based and position-based control schemes. Moreover, the underactuation of the aerial vehicle has been explicitly taken into account in a general formulation, together with a dynamic smooth activation mechanism. Both simulation case studies and experiments are presented to demonstrate the performance of the proposed technique.", "title": "" }, { "docid": "099a2ee305b703a765ff3579f0e0c1c3", "text": "To enhance the security of mobile cloud users, a few proposals have been presented recently. However we argue that most of them are not suitable for mobile cloud where mobile users might join or leave the mobile networks arbitrarily. In this paper, we design a secure mobile user-based data service mechanism (SDSM) to provide confidentiality and fine-grained access control for data stored in the cloud. This mechanism enables the mobile users to enjoy a secure outsourced data services at a minimized security management overhead. The core idea of SDSM is that SDSM outsources not only the data but also the security management to the mobile cloud in a trust way. 
Our analysis shows that the proposed mechanism has many advantages over the existing traditional methods such as lower overhead and convenient update, which could better cater the requirements in mobile cloud computing scenarios.", "title": "" }, { "docid": "84dbdf4c145fc8213424f6d51550faa9", "text": "Because acute cholangitis sometimes rapidly progresses to a severe form accompanied by organ dysfunction, caused by the systemic inflammatory response syndrome (SIRS) and/or sepsis, prompt diagnosis and severity assessment are necessary for appropriate management, including intensive care with organ support and urgent biliary drainage in addition to medical treatment. However, because there have been no standard criteria for the diagnosis and severity assessment of acute cholangitis, practical clinical guidelines have never been established. The aim of this part of the Tokyo Guidelines is to propose new criteria for the diagnosis and severity assessment of acute cholangitis based on a systematic review of the literature and the consensus of experts reached at the International Consensus Meeting held in Tokyo 2006. Acute cholangitis can be diagnosed if the clinical manifestations of Charcot's triad, i.e., fever and/or chills, abdominal pain (right upper quadrant or epigastric), and jaundice are present. When not all of the components of the triad are present, then a definite diagnosis can be made if laboratory data and imaging findings supporting the evidence of inflammation and biliary obstruction are obtained. The severity of acute cholangitis can be classified into three grades, mild (grade I), moderate (grade II), and severe (grade III), on the basis of two clinical factors, the onset of organ dysfunction and the response to the initial medical treatment. \"Severe (grade III)\" acute cholangitis is defined as acute cholangitis accompanied by at least one new-onset organ dysfunction. \"Moderate (grade II)\" acute cholangitis is defined as acute cholangitis that is unaccompanied by organ dysfunction, but that does not respond to the initial medical treatment, with the clinical manifestations and/or laboratory data not improved. \"Mild (grade I)\" acute cholangitis is defined as acute cholangitis that responds to the initial medical treatment, with the clinical findings improved.", "title": "" }, { "docid": "5f54125c0114f4fadc055e721093a49e", "text": "In this study, a fuzzy logic based autonomous vehicle control system is designed and tested in The Open Racing Car Simulator (TORCS) environment. The aim of this study is that vehicle complete the race without to get any damage and to get out of the way. In this context, an intelligent control system composed of fuzzy logic and conventional control structures has been developed such that the racing car is able to compete the race autonomously. In this proposed structure, once the vehicle's gearshifts have been automated, a fuzzy logic based throttle/brake control system has been designed such that the racing car is capable to accelerate/decelerate in a realistic manner as well as to drive at desired velocity. The steering control problem is also handled to end up with a racing car that is capable to travel on the road even in the presence of sharp curves. In this context, we have designed a fuzzy logic based positioning system that uses the knowledge of the curvature ahead to determine an appropriate position. 
The game performance of the developed fuzzy logic systems can be observed from https://youtu.be/qOvEz3-PzRo.", "title": "" }, { "docid": "319ba1d449d2b65c5c58b5cc0fdbed67", "text": "This paper introduces a new technology and tools from the field of text-based information retrieval. The authors have developed – a fingerprint-based method for a highly efficient near similarity search, and – an application of this method to identify plagiarized passages in large document collections. The contribution of our work is twofold. Firstly, it is a search technology that enables a new quality for the comparative analysis of complex and large scientific texts. Secondly, this technology gives rise to a new class of tools for plagiarism analysis, since the comparison of entire books becomes computationally feasible. The paper is organized as follows. Section 1 gives an introduction to plagiarism delicts and related detection methods, Section 2 outlines the method of fuzzy-fingerprints as a means for near similarity search, and Section 3 shows our methods in action: It gives examples for near similarity search as well as plagiarism detection and discusses results from a comprehensive performance analyses. 1 Plagiarism Analysis Plagiarism is the act of claiming to be the author of material that someone else actually wrote (Encyclopædia Britannica 2005), and, with the ubiquitousness", "title": "" }, { "docid": "98911eead8eb90ca295425917f5cd522", "text": "We provide strong evidence from multiple tests that credit lines (CLs) play special roles in syndicated loan packages. We find that CLs are associated with lower interest rate spreads on institutional term loans (ITLs) in the same loan packages. CLs also help improve secondary market liquidity of ITLs. These effects are robust to within-firm-year analysis. Using Lehman Brothers bankruptcy as a quasi-natural experiment further confirms our conclusions. These findings support the Bank Specialness Hypothesis that banks play valuable roles in alleviating information problems and that CLs are one conduit for this specialness.", "title": "" } ]
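The TORCS passage above uses fuzzy logic for throttle/brake control. The snippet below is a minimal, generic Mamdani-style sketch of that kind of controller (triangular membership functions, a three-rule base, centroid defuzzification); the membership ranges and rules are invented and are not the controllers from that study.

```python
# Map the speed error (desired - actual, m/s) to one command in [-1, 1]:
# -1 = full brake, +1 = full throttle.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_throttle_brake(speed_error):
    # Fuzzify the error: negative -> too fast, positive -> too slow.
    too_fast = tri(speed_error, -30, -15, 0)
    ok       = tri(speed_error, -10, 0, 10)
    too_slow = tri(speed_error, 0, 15, 30)
    # Rule base: too fast -> brake, ok -> coast, too slow -> throttle.
    u = np.linspace(-1, 1, 201)
    brake_set    = np.minimum(too_fast, tri(u, -1.0, -0.8, -0.2))
    coast_set    = np.minimum(ok,       tri(u, -0.2,  0.0,  0.2))
    throttle_set = np.minimum(too_slow, tri(u,  0.2,  0.8,  1.0))
    agg = np.maximum.reduce([brake_set, coast_set, throttle_set])
    # Centroid defuzzification of the aggregated output set.
    return float(np.sum(u * agg) / (np.sum(agg) + 1e-9))

for err in (-20, -5, 0, 5, 20):
    print(f"speed error {err:+d} m/s -> command {fuzzy_throttle_brake(err):+.3f}")
```

A real racing controller would add more input variables (curvature ahead, lateral position) and a richer rule base, but the fuzzify/infer/defuzzify pipeline is the same.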
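The Next Release Problem passage earlier in this list casts requirement selection as a bi-objective problem (total cost versus weighted customer satisfaction) with interaction constraints. For a toy instance the Pareto front can be enumerated by brute force, which also makes clear why metaheuristics are needed for realistic sizes; all costs, values, weights, and the single precedence constraint below are invented:

```python
# Brute-force Pareto front for a tiny Next Release Problem instance.
from itertools import combinations

cost = [4, 2, 5, 3, 6]                 # cost of each requirement
value = [[3, 1, 4, 0, 2],              # satisfaction per customer x requirement
         [1, 2, 0, 3, 4]]
weight = [0.6, 0.4]                    # customer importance
precedes = [(0, 2)]                    # requirement 0 must be included if 2 is

def objectives(subset):
    c = sum(cost[r] for r in subset)
    s = sum(w * sum(v[r] for r in subset) for w, v in zip(weight, value))
    return c, s

def feasible(subset):
    # Interaction constraint: b in release implies a in release.
    return all(a in subset or b not in subset for a, b in precedes)

candidates = [frozenset(c) for n in range(len(cost) + 1)
              for c in combinations(range(len(cost)), n)]
candidates = [s for s in candidates if feasible(s)]
scored = [(s, *objectives(s)) for s in candidates]
# Keep subsets not dominated in (minimise cost, maximise satisfaction).
pareto = [(s, c, v) for s, c, v in scored
          if not any(c2 <= c and v2 >= v and (c2 < c or v2 > v)
                     for _, c2, v2 in scored)]
for s, c, v in sorted(pareto, key=lambda t: t[1]):
    print(sorted(s), "cost =", c, "satisfaction =", round(v, 2))
```

With five requirements there are only 32 subsets; real instances with hundreds of requirements and many customers make this enumeration infeasible, which is the motivation for swarm-based search.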
scidocsrr
8beddac83b8e402fea1171c9f2825d94
TransmiR: a transcription factor–microRNA regulation database
[ { "docid": "b324860905b6d8c4b4a8429d53f2543d", "text": "MicroRNAs (miRNAs) are endogenous approximately 22 nt RNAs that can play important regulatory roles in animals and plants by targeting mRNAs for cleavage or translational repression. Although they escaped notice until relatively recently, miRNAs comprise one of the more abundant classes of gene regulatory molecules in multicellular organisms and likely influence the output of many protein-coding genes.", "title": "" } ]
[ { "docid": "ddb01f456d904151238ecf695483a2f4", "text": "If there were only one truth, you couldn't paint a hundred canvases on the same theme.", "title": "" }, { "docid": "ae59ef9772ea8f8277a2d91030bd6050", "text": "Modelling and exploiting teammates’ policies in cooperative multi-agent systems have long been an interest and also a big challenge for the reinforcement learning (RL) community. The interest lies in the fact that if the agent knows the teammates’ policies, it can adjust its own policy accordingly to arrive at proper cooperations; while the challenge is that the agents’ policies are changing continuously due to they are learning concurrently, which imposes difficulty to model the dynamic policies of teammates accurately. In this paper, we present ATTention Multi-Agent Deep Deterministic Policy Gradient (ATT-MADDPG) to address this challenge. ATT-MADDPG extends DDPG, a single-agent actor-critic RL method, with two special designs. First, in order to model the teammates’ policies, the agent should get access to the observations and actions of teammates. ATT-MADDPG adopts a centralized critic to collect such information. Second, to model the teammates’ policies using the collected information in an effective way, ATT-MADDPG enhances the centralized critic with an attention mechanism. This attention mechanism introduces a special structure to explicitly model the dynamic joint policy of teammates, making sure that the collected information can be processed efficiently. We evaluate ATT-MADDPG on both benchmark tasks and the real-world packet routing tasks. Experimental results show that it not only outperforms the state-of-the-art RL-based methods and rule-based methods by a large margin, but also achieves better performance in terms of scalability and robustness.", "title": "" }, { "docid": "8a09944155d35b4d1229b0778baf58a4", "text": "The recent Omnidirectional MediA Format (OMAF) standard specifies delivery of 360° video content. OMAF supports only equirectangular (ERP) and cubemap projections and their region-wise packing with a limitation on video decoding capability to the maximum resolution of 4K (e.g., 4096x2048). Streaming of 4K ERP content allows only a limited viewport resolution, which is lower than the resolution of many current head-mounted displays (HMDs). In order to take the full advantage of those HMDs, this work proposes a specific mixed-resolution packing of 6K (6144x3072) ERP content and its realization in tile-based streaming, while complying with the 4K-decoding constraint and the High Efficiency Video Coding (HEVC) standard. Experimental results indicate that, using Zonal-PSNR test methodology, the proposed layout decreases the streaming bitrate up to 32% in terms of BD-rate, when compared to mixed-quality viewport-adaptive streaming of 4K ERP as an alternative solution.", "title": "" }, { "docid": "0343f1a0be08ff53e148ef2eb22aaf14", "text": "Tables are a ubiquitous form of communication. While everyone seems to know what a table is, a precise, analytical definition of “tabularity” remains elusive because some bureaucratic forms, multicolumn text layouts, and schematic drawings share many characteristics of tables. There are significant differences between typeset tables, electronic files designed for display of tables, and tables in symbolic form intended for information retrieval. 
Most past research has addressed the extraction of low-level geometric information from raster images of tables scanned from printed documents, although there is growing interest in the processing of tables in electronic form as well. Recent research on table composition and table analysis has improved our understanding of the distinction between the logical and physical structures of tables, and has led to improved formalisms for modeling tables. This review, which is structured in terms of generalized paradigms for table processing, indicates that progress on half-a-dozen specific research issues would open the door to using existing paper and electronic tables for database update, tabular browsing, structured information retrieval through graphical and audio interfaces, multimedia table editing, and platform-independent display.", "title": "" }, { "docid": "c29a5acf052aed206d7d7a9078e66ff9", "text": "Argumentation mining aims to automatically detect, classify and structure argumentation in text. Therefore, argumentation mining is an important part of a complete argumentation analyisis, i.e. understanding the content of serial arguments, their linguistic structure, the relationship between the preceding and following arguments, recognizing the underlying conceptual beliefs, and understanding within the comprehensive coherence of the specific topic. We present different methods to aid argumentation mining, starting with plain argumentation detection and moving forward to a more structural analysis of the detected argumentation. Different state-of-the-art techniques on machine learning and context free grammars are applied to solve the challenges of argumentation mining. We also highlight fundamental questions found during our research and analyse different issues for future research on argumentation mining.", "title": "" }, { "docid": "2136c0e78cac259106d5424a2985e5d7", "text": "Stylistic composition is a creative musical activity, in which students as well as renowned composers write according to the style of another composer or period. We describe and evaluate two computational models of stylistic composition, called Racchman-Oct2010 and Racchmaninof-Oct2010. The former is a constrained Markov model and the latter embeds this model in an analogy-based design system. Racchmaninof-Oct2010 applies a pattern discovery algorithm called SIACT and a perceptually validated formula for rating pattern importance, to guide the generation of a new target design from an existing source design. A listening study is reported concerning human judgments of music excerpts that are, to varying degrees, in the style of mazurkas by Frédédric Chopin (1810-1849). The listening study acts as an evaluation of the two computational models and a third, benchmark system called Experiments in Musical Intelligence (EMI). Judges’ responses indicate that some aspects of musical style, such as phrasing and rhythm, are being modeled effectively by our algorithms. Judgments are also used to identify areas for future improvements. We discuss the broader implications of this work for the fields of engineering and design, where there is potential to make use of our models of hierarchical repetitive structure. Data and code to accompany this paper are available from www.tomcollinsresearch.net", "title": "" }, { "docid": "aec0c79ea90de753a010abfb43dc3f59", "text": "Style transfer methods have achieved significant success in recent years with the use of convolutional neural networks. 
However, many of these methods concentrate on artistic style transfer with few constraints on the output image appearance. We address the challenging problem of transferring face texture from a style face image to a content face image in a photorealistic manner without changing the identity of the original content image. Our framework for face texture transfer (FaceTex) augments the prior work of MRF-CNN with a novel facial semantic regularization that incorporates a face prior regularization smoothly suppressing the changes around facial meso-structures (e.g eyes, nose and mouth) and a facial structure loss function which implicitly preserves the facial structure so that face texture can be transferred without changing the original identity. We demonstrate results on face images and compare our approach with recent state-of-the-art methods. Our results demonstrate superior texture transfer because of the ability to maintain the identity of the original face image.", "title": "" }, { "docid": "b2a0755176f20cd8ee2ca19c091d022d", "text": "Models are among the most essential tools in robotics, such as kinematics and dynamics models of the robot’s own body and controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models that are based on information which is extracted from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control on a kinematic as well as dynamical level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we need to study the different possible model learning architectures for robotics. Second, we discuss what kind of problems these architecture and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions of real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.", "title": "" }, { "docid": "17d6bcff27325d7142d520fa87fb6a88", "text": "India is a vast country depicting wide social, cultural and sexual variations. Indian concept of sexuality has evolved over time and has been immensely influenced by various rulers and religions. Indian sexuality is manifested in our attire, behavior, recreation, literature, sculptures, scriptures, religion and sports. It has influenced the way we perceive our health, disease and device remedies for the same. In modern era, with rapid globalization the unique Indian sexuality is getting diffused. The time has come to rediscover ourselves in terms of sexuality to attain individual freedom and to reinvest our energy to social issues related to sexuality.", "title": "" }, { "docid": "fcca051539729b005271e4f96563538d", "text": "!is paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. !is approach is inspired by non-directive play therapy. !e experimenter participates in the experiments, but the child remains the main leader for play. 
Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under specific conditions in order to guide the child or ask her questions about reasoning or affect related to the robot. This approach has been tested in a long-term study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. The children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and Affect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. They also expressed some interest in the robot, including, on occasion, affect.", "title": "" }, { "docid": "d82553a7bf94647aaf60eb36748e567f", "text": "We propose a novel image-based rendering algorithm for handling complex scenes that may include reflective surfaces. Our key contribution lies in treating the problem in the gradient domain. We use a standard technique to estimate scene depth, but assign depths to image gradients rather than pixels. A novel view is obtained by rendering the horizontal and vertical gradients, from which the final result is reconstructed through Poisson integration using an approximate solution as a data term. Our algorithm is able to handle general scenes including reflections and similar effects without explicitly separating the scene into reflective and transmissive parts, as required by previous work. Our prototype renderer is fully implemented on the GPU and runs in real time on commodity hardware.", "title": "" }, { "docid": "9096a4dac61f8a87da4f5cbfca5899a8", "text": "OBJECTIVE\nTo evaluate the CT findings of ruptured corpus luteal cysts.\n\n\nMATERIALS AND METHODS\nSix patients with a surgically proven ruptured corpus luteal cyst were included in this series. The prospective CT findings were retrospectively analyzed in terms of the size and shape of the cyst, the thickness and enhancement pattern of its wall, the attenuation of its contents, and peritoneal fluid.\n\n\nRESULTS\nThe mean diameter of the cysts was 2.8 (range, 1.5-4.8) cm; three were round and three were oval. The mean thickness of the cyst wall was 4.7 (range, 1-10) mm; in all six cases it showed strong enhancement, and in three was discontinuous. In five of six cases, the cystic contents showed high attenuation. Peritoneal fluid was present in all cases, and its attenuation was higher, especially around the uterus and adnexa, than that of urine present in the bladder.\n\n\nCONCLUSION\nIn a woman in whom CT reveals the presence of an ovarian cyst with an enhancing rim and highly attenuated contents, as well as highly attenuated peritoneal fluid, a ruptured corpus luteal cyst should be suspected. Other possible evidence of this is focal interruption of the cyst wall and the presence of peritoneal fluid around the adnexa.", "title": "" }, { "docid": "ae57246e37060c8338ad9894a19f1b6b", "text": "This paper seeks to establish the conceptual and empirical basis for an innovative instrument of corporate knowledge management: the knowledge map. It begins by briefly outlining the rationale for knowledge mapping, i.e., providing a common context to access expertise and experience in large companies.
It then conceptualizes five types of knowledge maps that can be used in managing organizational knowledge. They are knowledge-sources, assets, -structures, -applications, and -development maps. In order to illustrate these five types of maps, a series of examples will be presented (from a multimedia agency, a consulting group, a market research firm, and a mediumsized services company) and the advantages and disadvantages of the knowledge mapping technique for knowledge management will be discussed. The paper concludes with a series of quality criteria for knowledge maps and proposes a five step procedure to implement knowledge maps in a corporate intranet.", "title": "" }, { "docid": "b151866647ad5e4cd50279bfdde4984a", "text": "Li-Fi stands for Light-Fidelity. Li-Fi innovation, which was suggested by Harald Haas, a German physicist, gives conduction of information over brightening through distribution of information via a LED light which changes in force quicker when compared to the vision of human beings which could take after. Wi-Fi is extraordinary for overall remote scope inside structures, while Li-Fi has been perfect for high thickness remote information scope in limited range besides for calming wireless impedance concerns. Smart meters are electronic devices which are used for recording consumption of electrical energy on a regular basis at an interval of an hour or less. In this paper, we motivate the need to learn and understand about the various new technologies like LiFi and its advantages. Further, we will understand the comparison between LiFi and Wi-Fi and learn about the advantages of using LiFi over WiFi. In addition to that we will also learn about the working of smart meters and its communication of the recorded information on a daily basis to the utility for monitoring and billing purposes.", "title": "" }, { "docid": "86a622185eeffc4a7ea96c307aed225a", "text": "Copyright © 2014 Massachusetts Medical Society. In light of the rapidly shifting landscape regarding the legalization of marijuana for medical and recreational purposes, patients may be more likely to ask physicians about its potential adverse and beneficial effects on health. The popular notion seems to be that marijuana is a harmless pleasure, access to which should not be regulated or considered illegal. Currently, marijuana is the most commonly used “illicit” drug in the United States, with about 12% of people 12 years of age or older reporting use in the past year and particularly high rates of use among young people.1 The most common route of administration is inhalation. The greenish-gray shredded leaves and flowers of the Cannabis sativa plant are smoked (along with stems and seeds) in cigarettes, cigars, pipes, water pipes, or “blunts” (marijuana rolled in the tobacco-leaf wrapper from a cigar). Hashish is a related product created from the resin of marijuana flowers and is usually smoked (by itself or in a mixture with tobacco) but can be ingested orally. Marijuana can also be used to brew tea, and its oil-based extract can be mixed into food products. The regular use of marijuana during adolescence is of particular concern, since use by this age group is associated with an increased likelihood of deleterious consequences2 (Table 1). Although multiple studies have reported detrimental effects, others have not, and the question of whether marijuana is harmful remains the subject of heated debate. 
Here we review the current state of the science related to the adverse health effects of the recreational use of marijuana, focusing on those areas for which the evidence is strongest.", "title": "" }, { "docid": "4ddd48db66a5951b82d5b7c2d9b8345a", "text": "In this paper we address the memory demands that come with the processing of 3-dimensional, high-resolution, multi-channeled medical images in deep learning. We exploit memory-efficient backpropagation techniques, to reduce the memory complexity of network training from being linear in the network’s depth, to being roughly constant – permitting us to elongate deep architectures with negligible memory increase. We evaluate our methodology in the paradigm of Image Quality Transfer, whilst noting its potential application to various tasks that use deep learning. We study the impact of depth on accuracy and show that deeper models have more predictive power, which may exploit larger training sets. We obtain substantially better results than the previous state-of-the-art model with a slight memory increase, reducing the rootmean-squared-error by 13%. Our code is publicly available.", "title": "" }, { "docid": "235899b940c658316693d0a481e2d954", "text": "BACKGROUND\nImmunohistochemical markers are often used to classify breast cancer into subtypes that are biologically distinct and behave differently. The aim of this study was to estimate mortality for patients with the major subtypes of breast cancer as classified using five immunohistochemical markers, to investigate patterns of mortality over time, and to test for heterogeneity by subtype.\n\n\nMETHODS AND FINDINGS\nWe pooled data from more than 10,000 cases of invasive breast cancer from 12 studies that had collected information on hormone receptor status, human epidermal growth factor receptor-2 (HER2) status, and at least one basal marker (cytokeratin [CK]5/6 or epidermal growth factor receptor [EGFR]) together with survival time data. Tumours were classified as luminal and nonluminal tumours according to hormone receptor expression. These two groups were further subdivided according to expression of HER2, and finally, the luminal and nonluminal HER2-negative tumours were categorised according to expression of basal markers. Changes in mortality rates over time differed by subtype. In women with luminal HER2-negative subtypes, mortality rates were constant over time, whereas mortality rates associated with the luminal HER2-positive and nonluminal subtypes tended to peak within 5 y of diagnosis and then decline over time. In the first 5 y after diagnosis the nonluminal tumours were associated with a poorer prognosis, but over longer follow-up times the prognosis was poorer in the luminal subtypes, with the worst prognosis at 15 y being in the luminal HER2-positive tumours. Basal marker expression distinguished the HER2-negative luminal and nonluminal tumours into different subtypes. These patterns were independent of any systemic adjuvant therapy.\n\n\nCONCLUSIONS\nThe six subtypes of breast cancer defined by expression of five markers show distinct behaviours with important differences in short term and long term prognosis. Application of these markers in the clinical setting could have the potential to improve the targeting of adjuvant chemotherapy to those most likely to benefit. 
The different patterns of mortality over time also suggest important biological differences between the subtypes that may result in differences in response to specific therapies, and that stratification of breast cancers by clinically relevant subtypes in clinical trials is urgently required.", "title": "" }, { "docid": "389a8e74f6573bd5e71b7c725ec3a4a7", "text": "Paucity of large curated hand-labeled training data forms a major bottleneck in the deployment of machine learning models in computer vision and other fields. Recent work (Data Programming) has shown how distant supervision signals in the form of labeling functions can be used to obtain labels for given data in near-constant time. In this work, we present Adversarial Data Programming (ADP), which presents an adversarial methodology to generate data as well as a curated aggregated label, given a set of weak labeling functions. We validated our method on the MNIST, Fashion MNIST, CIFAR 10 and SVHN datasets, and it outperformed many state-of-the-art models. We conducted extensive experiments to study its usefulness, as well as showed how the proposed ADP framework can be used for transfer learning as well as multi-task learning, where data from two domains are generated simultaneously using the framework along with the label information. Our future work will involve understanding the theoretical implications of this new framework from a game-theoretic perspective, as well as explore the performance of the method on more complex datasets.", "title": "" }, { "docid": "e28f51ea5a09081bd3037a26ca25aebd", "text": "Eye tracking specialists often need to understand and represent aggregate scanning strategies, but methods to identify similar scanpaths and aggregate multiple scanpaths have been elusive. A new method is proposed here to identify scanning strategies by aggregating groups of matching scanpaths automatically. A dataset of scanpaths is first converted to sequences of viewed area names, which are then represented in a dotplot. Matching sequences in the dotplot are found with linear regressions, and then used to cluster the scanpaths hierarchically. Aggregate scanning strategies are generated for each cluster and presented in an interactive dendrogram. While the clustering and aggregation method works in a bottom-up fashion, based on pair-wise matches, a top-down extension is also described, in which a scanning strategy is first input by cursor gesture, then matched against the dataset. The ability to discover both bottom-up and top-down strategy matches provides a powerful tool for scanpath analysis, and for understanding group scanning strategies.", "title": "" }, { "docid": "52faf4868f53008eec1f3ea4f39ed3f0", "text": "Hyaluronic acid (HA) soft-tissue fillers are the most popular degradable injectable products used for correcting skin depressions and restoring facial volume loss. From a rheological perspective, HA fillers are commonly characterised through their viscoelastic properties under shear-stress. However, despite the continuous mechanical pressure that the skin applies on the fillers, compression properties in static and dynamic modes are rarely considered. In this article, three different rheological tests (shear-stress test and compression tests in static and dynamic mode) were carried out on nine CE-marked cross-linked HA fillers. Corresponding shear-stress (G', tanδ) and compression (E', tanδc, normal force FN) parameters were measured. 
We show here that the tested products behave differently under shear-stress and under compression even though they are used for the same indications. G' showed the expected influence on the tissue volumising capacity, and the same influence was also observed for the compression parameters E'. In conclusion, HA soft-tissue fillers exhibit widely different biophysical characteristics and many variables contribute to their overall performance. The elastic modulus G' is not the only critical parameter to consider amongst the rheological properties: the compression parameters E' and FN also provide key information, which should be taken into account for a better prediction of clinical outcomes, especially for predicting the volumising capacity and probably the ability to stimulate collagen production by fibroblasts.", "title": "" } ]
scidocsrr
02a3b81a7117985ca5b91ab8868070a6
Towards Neural Theorem Proving at Scale Anonymous
[ { "docid": "4381ee2e578a640dda05e609ed7f6d53", "text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.", "title": "" }, { "docid": "98cc792a4fdc23819c877634489d7298", "text": "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.", "title": "" } ]
[ { "docid": "9a63a5db2a40df78a436e7be87f42ff7", "text": "A quantitative, coordinate-based meta-analysis combined data from 354 participants across 22 fMRI studies and one positron emission tomography (PET) study to identify the differences in neural correlates of figurative and literal language processing, and to investigate the role of the right hemisphere (RH) in figurative language processing. Studies that reported peak activations in standard space contrasting figurative vs. literal language processing at whole brain level in healthy adults were included. The left and right IFG, large parts of the left temporal lobe, the bilateral medial frontal gyri (medFG) and an area around the left amygdala emerged for figurative language processing across studies. Conditions requiring exclusively literal language processing did not activate any selective regions in most of the cases, but if so they activated the cuneus/precuneus, right MFG and the right IPL. No general RH advantage for metaphor processing could be found. On the contrary, significant clusters of activation for metaphor conditions were mostly lateralized to the left hemisphere (LH). Subgroup comparisons between experiments on metaphors, idioms, and irony/sarcasm revealed shared activations in left frontotemporal regions for idiom and metaphor processing. Irony/sarcasm processing was correlated with activations in midline structures such as the medFG, ACC and cuneus/precuneus. To test the graded salience hypothesis (GSH, Giora, 1997), novel metaphors were contrasted against conventional metaphors. In line with the GSH, RH involvement was found for novel metaphors only. Here we show that more analytic, semantic processes are involved in metaphor comprehension, whereas irony/sarcasm comprehension involves theory of mind processes.", "title": "" }, { "docid": "57c705e710f99accab3d9242fddc5ac8", "text": "Although much research has been conducted in the area of organizational commitment, few studies have explicitly examined how organizations facilitate commitment among members. Using a sample of 291 respondents from 45 firms, the results of this study show that rigorous recruitment and selection procedures and a strong, clear organizational value system are associated with higher levels of employee commitment based on internalization and identification. Strong organizational career and reward systems are related to higher levels of instrumental or compliance-based commitment.", "title": "" }, { "docid": "f013f58d995693a79cd986a028faff38", "text": "We present the design and implementation of a system for axiomatic programming, and its application to mathematical software construction. Key novelties include a direct support for user-defined axioms establishing local equalities between types, and overload resolution based on equational theories and user-defined local axioms. We illustrate uses of axioms, and their organization into concepts, in structured generic programming as practiced in computational mathematical systems.", "title": "" }, { "docid": "f97d81a177ca629da5fe0d707aec4b8a", "text": "This paper highlights the two machine learning approaches, viz. Rough Sets and Decision Trees (DT), for the prediction of Learning Disabilities (LD) in school-age children, with an emphasis on applications of data mining. Learning disability prediction is a very complicated task. By using these two approaches, we can easily and accurately predict LD in any child and also we can determine the best classification method. 
In this study, in rough sets the attribute reduction and classification are performed using Johnson’s reduction algorithm and Naive Bayes algorithm respectively for rule mining and in construction of decision trees, J48 algorithm is used. From this study, it is concluded that, the performance of decision trees are considerably poorer in several important aspects compared to rough sets. It is found that, for selection of attributes, rough sets is very useful especially in the case of inconsistent data and it also gives the information about the attribute correlation which is very important in the case of learning disability.", "title": "" }, { "docid": "5d154a62b22415cbedd165002853315b", "text": "Unaccompanied immigrant children are a highly vulnerable population, but research into their mental health and psychosocial context remains limited. This study elicited lawyers’ perceptions of the mental health needs of unaccompanied children in U.S. deportation proceedings and their mental health referral practices with this population. A convenience sample of 26 lawyers who work with unaccompanied children completed a semi-structured, online survey. Lawyers surveyed frequently had mental health concerns about their unaccompanied child clients, used clinical and lay terminology to describe symptoms, referred for both expert testimony and treatment purposes, frequently encountered barriers to accessing appropriate services, and expressed interest in mental health training. The results of this study suggest a complex intersection between the legal and mental health needs of unaccompanied children, and the need for further research and improved service provision in support of their wellbeing.", "title": "" }, { "docid": "d6586a261e22e9044425cb27462c3435", "text": "In this work, we develop a planner for high-speed navigation in unknown environments, for example reaching a goal in an unknown building in minimum time, or flying as fast as possible through a forest. This planning task is challenging because the distribution over possible maps, which is needed to estimate the feasibility and cost of trajectories, is unknown and extremely hard to model for real-world environments. At the same time, the worst-case assumptions that a receding-horizon planner might make about the unknown regions of the map may be overly conservative, and may limit performance. Therefore, robots must make accurate predictions about what will happen beyond the map frontiers to navigate as fast as possible. To reason about uncertainty in the map, we model this problem as a POMDP and discuss why it is so difficult given that we have no accurate probability distribution over real-world environments. We then present a novel method of predicting collision probabilities based on training data, which compensates for the missing environment distribution and provides an approximate solution to the POMDP. Extending our previous work, the principal result of this paper is that by using a Bayesian non-parametric learning algorithm that encodes formal safety constraints as a prior over collision probabilities, our planner seamlessly reverts to safe behavior when it encounters a novel environment for which it has no relevant training data. This strategy generalizes our method across all environment types, including those for which we have training data as well as those for which we do not. In familiar environment types with dense training data, we show an 80% speed improvement compared to a planner that is constrained to guarantee safety. 
In experiments, our planner has reached over 8 m/s in unknown cluttered indoor spaces. Video of our experimental demonstration is available at http://groups.csail.mit.edu/ rrg/bayesian_learning_high_speed_nav.", "title": "" }, { "docid": "7b385edcbb0e3fa5bfffca2e1a9ecf13", "text": "A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.", "title": "" }, { "docid": "5371c5b8e9db3334ed144be4354336cc", "text": "E-learning is related to virtualised distance learning by means of electronic communication mechanisms, using its functionality as a support in the process of teaching-learning. When the learning process becomes computerised, educational data mining employs the information generated from the electronic sources to enrich the learning model for academic purposes. To provide support to e-learning systems, cloud computing is set as a natural platform, as it can be dynamically adapted by presenting a scalable system for the changing necessities of the computer resources over time. It also eases the implementation of data mining techniques to work in a distributed scenario, regarding the large databases generated from e-learning. We give an overview of the current state of the structure of cloud computing, and we provide details of the most common infrastructures that have been developed for such a system. We also present some examples of e-learning approaches for cloud computing, and finally, we discuss the suitability of this environment for educational data mining, suggesting the migration of this approach to this computational scenario.", "title": "" }, { "docid": "768749e22e03aecb29385e39353dd445", "text": "Query logs are of great interest for scientists and companies for research, statistical and commercial purposes. However, the availability of query logs for secondary uses raises privacy issues since they allow the identification and/or revelation of sensitive information about individual users. Hence, query anonymization is crucial to avoid identity disclosure. To enable the publication of privacy-preserved -but still usefulquery logs, in this paper, we present an anonymization method based on semantic microaggregation. Our proposal aims at minimizing the disclosure risk of anonymized query logs while retaining their semantics as much as possible. First, a method to map queries to their formal semantics extracted from the structured categories of the Open Directory Project is presented. 
Then, a microaggregation method is adapted to perform a semantically-grounded anonymization of query logs. To do so, appropriate semantic similarity and semantic aggregation functions are proposed. Experiments performed using real AOL query logs show that our proposal better retains the utility of anonymized query logs than other related works, while also minimizing the disclosure risk.", "title": "" }, { "docid": "85605e6617a68dff216f242f31306eac", "text": "Steered molecular dynamics (SMD) permits efficient investigations of molecular processes by focusing on selected degrees of freedom. We explain how one can, in the framework of SMD, employ Jarzynski's equality (also known as the nonequilibrium work relation) to calculate potentials of mean force (PMF). We outline the theory that serves this purpose and connects nonequilibrium processes (such as SMD simulations) with equilibrium properties (such as the PMF). We review the derivation of Jarzynski's equality, generalize it to isobaric--isothermal processes, and discuss its implications in relation to the second law of thermodynamics and computer simulations. In the relevant regime of steering by means of stiff springs, we demonstrate that the work on the system is Gaussian-distributed regardless of the speed of the process simulated. In this case, the cumulant expansion of Jarzynski's equality can be safely terminated at second order. We illustrate the PMF calculation method for an exemplary simulation and demonstrate the Gaussian nature of the resulting work distribution.", "title": "" }, { "docid": "d509cb384ecddafa0c4f866882af2c77", "text": "On 9 January 1857, a large earthquake of magnitude 7.9 occurred on the San Andreas fault, with rupture initiating at Parkfield in central California and propagating in a southeasterly direction over a distance of more than 360 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. Indeed, newspaper reports of sloshing observed in the Los Angeles river point to long-duration (1–2 min) and long-period (2–8 sec) shaking. If such an earthquake were to happen today, it could impose significant seismic demand on present-day tall buildings. Using state-of-the-art computational tools in seismology and structural engineering, validated using data from the 17 January 1994, magnitude 6.7 Northridge earthquake, we determine the damage to an existing and a new 18story steel moment-frame building in southern California due to ground motion from two hypothetical magnitude 7.9 earthquakes on the San Andreas fault. Our study indicates that serious damage occurs in these buildings at many locations in the region in one of the two scenarios. For a north-to-south rupture scenario, the peak velocity is of the order of 1 m • sec 1 in the Los Angeles basin, including downtown Los Angeles, and 2 m • sec 1 in the San Fernando valley, while the peak displacements are of the order of 1 m and 2 m in the Los Angeles basin and San Fernando valley, respectively. For a south-to-north rupture scenario the peak velocities and displacements are reduced by a factor of roughly 2.", "title": "" }, { "docid": "d529b4f1992f438bb3ce4373090f8540", "text": "One conventional tool for interpolating surfaces over scattered data, the thin-plate spline, has an elegant algebra expressing the dependence of the physical bending energy of a thin metal plate on point constraints. 
For interpolation of a surface over a fixed set of nodes in the plane, the bending energy is a quadratic form in the heights assigned to the surface. The spline is the superposition of eigenvectors of the bending energy matrix, of successively larger physical scales, over a tilted flat plane having no bending energy at all. When these splines are paired, one representing the x-coordinate of another form and the other the y-coordinate, they aid greatly in the modeling of biological shape change as deformation. In this context, the pair becomes an interpolation map from RZ to R' relating two sets of landmark points. The spline maps decompose, in the same way as the spline surfaces, into a linear part (an affine transformation) together with the superposition of principal warps, which are geometrically independent, affine-free deformations of progressively smaller geometrical scales. The warps decompose an empirical deformation into orthogonal features more or less as a conventional orthogonal functional analysis decomposes the single scene. This paper demonstrates the decomposition of deformations by principal warps, extends the method to deal with curving edges between landmarks, relates this formalism to other applications of splines current in computer vision, and indicates how they might aid in the extraction of features for analysis, comparison, and diagnosis of biological and medical images.", "title": "" }, { "docid": "aeaee20b184e346cd469204dcf49d815", "text": "Naresh Kumari , Nitin Malik , A. N. Jha , Gaddam Mallesham #*4 # Department of Electrical, Electronics and Communication Engineering, The NorthCap University, Gurgaon, India 1 [email protected] 2 [email protected] * Ex-Professor, Electrical Engineering, Indian Institute of Technology, New Delhi, India 3 [email protected] #* Department of Electrical Engineering, Osmania University, Hyderabad, India 4 [email protected]", "title": "" }, { "docid": "6ebce4adb3693070cac01614078d68fc", "text": "The recent COCO object detection dataset presents several new challenges for object detection. In particular, it contains objects at a broad range of scales, less prototypical images, and requires more precise localization. To address these challenges, we test three modifications to the standard Fast R-CNN object detector: (1) skip connections that give the detector access to features at multiple network layers, (2) a foveal structure to exploit object context at multiple object resolutions, and (3) an integral loss function and corresponding network adjustment that improve localization. The result of these modifications is that information can flow along multiple paths in our network, including through features from multiple network layers and from multiple object views. We refer to our modified classifier as a ‘MultiPath’ network. We couple our MultiPath network with DeepMask object proposals, which are well suited for localization and small objects, and adapt our pipeline to predict segmentation masks in addition to bounding boxes. The combined system improves results over the baseline Fast R-CNN detector with Selective Search by 66% overall and by 4× on small objects. It placed second in both the COCO 2015 detection and segmentation challenges.", "title": "" }, { "docid": "28e8bc5b0d1fa9fa46b19c8c821a625c", "text": "This work aims to develop a smart LED lighting system, which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. 
The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of a system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is made through a self adaptive weighted data fusion algorithm. A low variation in data fusion together with a high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by command given on the human computer interface, and the reading on a multimeter can be displayed thereon via the server. This proposed smart LED lighting system can be remotely controlled and self learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.", "title": "" }, { "docid": "645f320514b0fa5a8b122c4635bc3df6", "text": "A critical decision problem for top management, and the focus of this study, is whether the CEO (chief executive officer) and CIO (chief information officer) should commit their time to formal planning with the expectation of producing an information technology (IT)-based competitive advantage. Using the perspective of the resource-based view, a model is presented that examines how strategic IT alignment can produce enhanced organizational strategies that yield competitive advantage. One hundred sixty-one CIOs provided data using a postal survey. Results supported seven of the eight hypotheses. They showed that information intensity is an important antecedent to strategic IT alignment, that strategic IT alignment is best explained by multiple constructs which operationalize both process and content measures, and that alignment between the IT plan and the business plan is significantly related to the use of IT for competitive advantage. Study results raise questions about the effect of CEO participation, which appears to be the weak link in the process, and also about the perception of the CIO on the importance of CEO involvement. The paper contributes to our understanding of how knowledge sharing in the alignment process contributes to the creation of superior organizational strategies, provides a framework of the alignment-performance relationship, and furnishes several new constructs. Subject Areas: Competitive Advantage, Information Systems Planning, Knowledge Sharing, Resource-Based View, Strategic Planning, and Structural Equation Modeling.", "title": "" }, { "docid": "a85511bfaa47701350f4d97ec94453fd", "text": "We propose a novel expression transfer method based on an analysis of the frequency of multi-expression facial images. We locate the facial features automatically and describe the shape deformations between a neutral expression and non-neutral expressions. The subtle expression changes are important visual clues to distinguish different expressions. These changes are more salient in the frequency domain than in the image domain. We extract the subtle local expression deformations for the source subject, coded in the wavelet decomposition. This information about expressions is transferred to a target subject. 
The resulting synthesized image preserves both the facial appearance of the target subject and the expression details of the source subject. This method is extended to dynamic expression transfer to allow a more precise interpretation of facial expressions. Experiments on Japanese Female Facial Expression (JAFFE), the extended Cohn-Kanade (CK+) and PIE facial expression databases show the superiority of our method over the state-of-the-art method.", "title": "" }, { "docid": "bb0dce17b5810ebd7173ea35545c3bf6", "text": "Five studies demonstrated that highly guilt-prone people may avoid forming interdependent partnerships with others whom they perceive to be more competent than themselves, as benefitting a partner less than the partner benefits one's self could trigger feelings of guilt. Highly guilt-prone people who lacked expertise in a domain were less willing than were those low in guilt proneness who lacked expertise in that domain to create outcome-interdependent relationships with people who possessed domain-specific expertise. These highly guilt-prone people were more likely than others both to opt to be paid on their performance alone (Studies 1, 3, 4, and 5) and to opt to be paid on the basis of the average of their performance and that of others whose competence was more similar to their own (Studies 2 and 5). Guilt proneness did not predict people's willingness to form outcome-interdependent relationships with potential partners who lacked domain-specific expertise (Studies 4 and 5). It also did not predict people's willingness to form relationships when poor individual performance would not negatively affect partner outcomes (Study 4). Guilt proneness therefore predicts whether, and with whom, people develop interdependent relationships. The findings also demonstrate that highly guilt-prone people sacrifice financial gain out of concern about how their actions would influence others' welfare. As such, the findings demonstrate a novel way in which guilt proneness limits free-riding and therefore reduces the incidence of potentially unethical behavior. Lastly, the findings demonstrate that people who lack competence may not always seek out competence in others when choosing partners.", "title": "" }, { "docid": "a9a8baf6dfb2526d75b0d7e49bb9b138", "text": "Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general purpose learning methods and are usually addressed in an ad-hoc fashion. We suggest a general approach – a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidates set. Some theoretical and computational properties of the model are discussed and we argue that these are important in NLP-like domains. The advantages of the model are illustrated in an experiment in partof-speech tagging.", "title": "" }, { "docid": "890236dc21eef6d0523ee1f5e91bf784", "text": "Perhaps the most amazing property of these word embeddings is that somehow these vector encodings effectively capture the semantic meanings of the words. The question one might ask is how or why? The answer is that because the vectors adhere surprisingly well to our intuition. For instance, words that we know to be synonyms tend to have similar vectors in terms of cosine similarity and antonyms tend to have dissimilar vectors. 
Even more surprisingly, word vectors tend to obey the laws of analogy. For example, consider the analogy ”Woman is to queen as man is to king”. It turns out that", "title": "" } ]
scidocsrr
43fda67994521863cf18d5b59f1c239d
Re-ranking Person Re-identification with k-Reciprocal Encoding
[ { "docid": "2bc30693be1c5855a9410fb453128054", "text": "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13, 164 images of 1, 360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.", "title": "" } ]
[ { "docid": "141c28bfbeb5e71dc68d20b6220794c3", "text": "The development of topical cosmetic anti-aging products is becoming increasingly sophisticated. This is demonstrated by the benefit agents selected and the scientific approaches used to identify them, treatment protocols that increasingly incorporate multi-product regimens, and the level of rigor in the clinical testing used to demonstrate efficacy. Consistent with these principles, a new cosmetic anti-aging regimen was recently developed. The key product ingredients were identified based on an understanding of the key mechanistic themes associated with aging at the genomic level coupled with appropriate in vitro testing. The products were designed to provide optimum benefits when used in combination in a regimen format. This cosmetic regimen was then tested for efficacy against the appearance of facial wrinkles in a 24-week clinical trial compared with 0.02% tretinoin, a recognized benchmark prescription treatment for facial wrinkling. The cosmetic regimen significantly improved wrinkle appearance after 8 weeks relative to tretinoin and was better tolerated. Wrinkle appearance benefits from the two treatments in cohorts of subjects who continued treatment through 24 weeks were also comparable.", "title": "" }, { "docid": "14ba02b92184c21cbbe2344313e09c23", "text": "Smart meters are at high risk to be an attack target or to be used as an attacking means of malicious users because they are placed at the closest location to users in the smart gridbased infrastructure. At present, Korea is proceeding with 'Smart Grid Advanced Metering Infrastructure (AMI) Construction Project', and has selected Device Language Message Specification/ COmpanion Specification for Energy Metering (DLMS/COSEM) protocol for the smart meter communication. However, the current situation is that the vulnerability analysis technique is still insufficient to be applied to DLMS/COSEM-based smart meters. Therefore, we propose a new fuzzing architecture for analyzing vulnerabilities which is applicable to actual DLMS/COSEM-based smart meter devices. In addition, this paper presents significant case studies for verifying proposed fuzzing architecture through conducting the vulnerability analysis of the experimental results from real DLMS/COSEM-based smart meter devices used in Korea SmartGrid Testbed.", "title": "" }, { "docid": "dc8d9a7da61aab907ee9def56dfbd795", "text": "The ability to detect change-points in a dynamic network or a time series of graphs is an increasingly important task in many applications of the emerging discipline of graph signal processing. This paper formulates change-point detection as a hypothesis testing problem in terms of a generative latent position model, focusing on the special case of the Stochastic Block Model time series. We analyze two classes of scan statistics, based on distinct underlying locality statistics presented in the literature. Our main contribution is the derivation of the limiting properties and power characteristics of the competing scan statistics. Performance is compared theoretically, on synthetic data, and empirically, on the Enron email corpus.", "title": "" }, { "docid": "446af0ad077943a77ac4a38fd84df900", "text": "We investigate the manufacturability of 20-nm double-gate and FinFET devices in integrated circuits by projecting process tolerances. Two important factors affecting the sensitivity of device electrical parameters to physical variations were quantitatively considered. 
The quantum effect was computed using the density gradient method and the sensitivity of threshold voltage to random dopant fluctuation was studied by Monte Carlo simulation. Our results show the 3σ value of VT variation caused by discrete impurity fluctuation can be greater than 100%. Thus, engineering the work function of gate materials and maintaining a nearly intrinsic channel is more desirable. Based on a design with an intrinsic channel and ideal gate work function, we analyzed the sensitivity of device electrical parameters to several important physical fluctuations such as the variations in gate length, body thickness, and gate dielectric thickness. We found that quantum effects have great impact on the performance of devices. As a result, the device electrical behavior is sensitive to small variations of body thickness. The effect dominates over the effects produced by other physical fluctuations. To achieve a relative variation of electrical parameters comparable to present practice in industry, we face a challenge of fin width control (less than 1 nm 3σ value of variation) for the 20-nm FinFET devices. The constraint of the gate length variation is about 10-15%. We estimate a tolerance of 1-2 Å 3σ value of oxide thickness variation and up to 30% front-back oxide thickness mismatch.", "title": "" }, { "docid": "41aa05455471ecd660599f4ec285ff29", "text": "The recent progress of human parsing techniques has been largely driven by the availability of rich data resources. In this work, we demonstrate some critical discrepancies between the current benchmark datasets and the real world human parsing scenarios. For instance, all the human parsing datasets only contain one person per image, while usually multiple persons appear simultaneously in a realistic scene. It is more practically demanded to simultaneously parse multiple persons, which presents a greater challenge to modern human parsing methods. Unfortunately, absence of relevant data resources severely impedes the development of multiple-human parsing methods. To facilitate future human parsing research, we introduce the Multiple-Human Parsing (MHP) dataset, which contains multiple persons in a real world scene per single image. The MHP dataset contains various numbers of persons (from 2 to 16) per image with 18 semantic classes for each parsing annotation. Persons appearing in the MHP images present sufficient variations in pose, occlusion and interaction. To tackle the multiple-human parsing problem, we also propose a novel Multiple-Human Parser (MH-Parser), which considers both the global context and local cues for each person in the parsing process. The model is demonstrated to outperform the naive “detect-and-parse” approach by a large margin, which will serve as a solid baseline and help drive the future research in real world human parsing.", "title": "" }, { "docid": "c215a497d39f4f95a9fc720debb14b05", "text": "Adding frequency reconfigurability to a compact metamaterial-inspired antenna is investigated. The antenna is a printed monopole with an incorporated slot and is fed by a coplanar waveguide (CPW) line. This antenna was originally inspired from the concept of negative-refractive-index metamaterial transmission lines and exhibits a dual-band behavior. By using a varactor diode, the lower band (narrowband) of the antenna, which is due to radiation from the incorporated slot, can be tuned over a broad frequency range, while the higher band (broadband) remains effectively constant.
A detailed equivalent circuit model is developed that predicts the frequency-tuning behavior for the lower band of the antenna. The circuit model shows the involvement of both CPW even and odd modes in the operation of the antenna. Experimental results show that, for a varactor diode capacitance approximately ranging from 0.1-0.7 pF, a tuning range of 1.6-2.23 GHz is achieved. The size of the antenna at the maximum frequency is 0.056 λ0 × 0.047 λ0 and the antenna is placed over a 0.237 λ0 × 0.111 λ0 CPW ground plane (λ0 being the wavelength in vacuum).", "title": "" }, { "docid": "d8d102c3d6ac7d937bb864c69b4d3cd9", "text": "Question Answering (QA) systems are becoming the inspiring model for the future of search engines. While recently, underlying datasets for QA systems have been promoted from unstructured datasets to structured datasets with highly semantic-enriched metadata, but still question answering systems involve serious challenges which cause to be far beyond desired expectations. In this paper, we raise the challenges for building a Question Answering (QA) system especially with the focus of employing structured data (i.e. knowledge graph). This paper provide an exhaustive insight of the known challenges, so far. Thus, it helps researchers to easily spot open rooms for the future research agenda.", "title": "" }, { "docid": "c3a6a72c9d738656f356d67cd5ce6c47", "text": "Most doors are controlled by persons with the use of keys, security cards, password or pattern to open the door. Theaim of this paper is to help users forimprovement of the door security of sensitive locations by using face detection and recognition. Face is a complex multidimensional structure and needs good computing techniques for detection and recognition. This paper is comprised mainly of three subsystems: namely face detection, face recognition and automatic door access control. Face detection is the process of detecting the region of face in an image. The face is detected by using the viola jones method and face recognition is implemented by using the Principal Component Analysis (PCA). Face Recognition based on PCA is generally referred to as the use of Eigenfaces.If a face is recognized, it is known, else it is unknown. The door will open automatically for the known person due to the command of the microcontroller. On the other hand, alarm will ring for the unknown person. Since PCA reduces the dimensions of face images without losing important features, facial images for many persons can be stored in the database. Although many training images are used, computational efficiency cannot be decreased significantly. Therefore, face recognition using PCA can be more useful for door security system than other face recognition schemes.", "title": "" }, { "docid": "78ce06926ea3b2012277755f0916fbb7", "text": "We present a review of the historical evolution of software engineering, intertwining it with the history of knowledge engineering because \"those who cannot remember the past are condemned to repeat it.\" This retrospective represents a further step forward to understanding the current state of both types of engineerings; history has also positive experiences; some of them we would like to remember and to repeat. Two types of engineerings had parallel and divergent evolutions but following a similar pattern. We also define a set of milestones that represent a convergence or divergence of the software development methodologies. 
These milestones do not appear at the same time in software engineering and knowledge engineering, so lessons learned in one discipline can help in the evolution of the other one.", "title": "" }, { "docid": "d8e60dc8378fe39f698eede2b6687a0f", "text": "Today's complex software systems are neither secure nor reliable. The rudimentary software protection primitives provided by current hardware forces systems to run many distrusting software components (e.g., procedures, libraries, plugins, modules) in the same protection domain, or otherwise suffer degraded performance from address space switches.\n We present CODOMs (COde-centric memory DOMains), a novel architecture that can provide finer-grained isolation between software components with effectively zero run-time overhead, all at a fraction of the complexity of other approaches. An implementation of CODOMs in a cycle-accurate full-system x86 simulator demonstrates that with the right hardware support, finer-grained protection and run-time performance can peacefully coexist.", "title": "" }, { "docid": "dd211105651b376b40205eb16efe1c25", "text": "WBAN based medical-health technologies have great potential for continuous monitoring in ambulatory settings, early detection of abnormal conditions, and supervised rehabilitation. They can provide patients with increased confidence and a better quality of life, and promote healthy behavior and health awareness. Continuous monitoring with early detection likely has the potential to provide patients with an increased level of confidence, which in turn may improve quality of life. In addition, ambulatory monitoring will allow patients to engage in normal activities of daily life, rather than staying at home or close to specialized medical services. Last but not least, inclusion of continuous monitoring data into medical databases will allow integrated analysis of all data to optimize individualized care and provide knowledge discovery through integrated data mining. Indeed, with the current technological trend toward integration of processors and wireless interfaces, we will soon have coin-sized intelligent sensors. They will be applied as skin patches, seamlessly integrated into a personal monitoring system, and worn for extended periods of time.", "title": "" }, { "docid": "8b7cb051224008ba3e1bf91bac5e9d21", "text": "The Internet of things aspires to connect anyone with anything at any point of time at any place. Internet of Thing is generally made up of three-layer architecture. Namely Perception, Network and Application layers. A lot of security principles should be enabled at each layer for proper and efficient working of these applications. This paper represents the overview of Security principles, Security Threats and Security challenges at the application layer and its countermeasures to overcome those challenges. The Application layer plays an important role in all of the Internet of Thing applications. The most widely used application layer protocol is MQTT. The security threats for Application Layer Protocol MQTT is particularly selected and evaluated. Comparison is done between different Application layer protocols and security measures for those protocols. 
Due to the lack of common standards for IoT protocols, a lot of issues are considered while choosing the particular protocol.", "title": "" }, { "docid": "79465d290ab299b9d75e9fa617d30513", "text": "In this paper we describe computational experience in solving unconstrained quadratic zero-one problems using a branch and bound algorithm. The algorithm incorporates dynamic preprocessing techniques for forcing variables and heuristics to obtain good starting points. Computational results and comparisons with previous studies on several hundred test problems with dimensions up to 200 demonstrate the efficiency of our algorithm. In dieser Arbeit beschreiben wir rechnerische Erfahrungen bei der Lösung von unbeschränkten quadratischen Null-Eins-Problemen mit einem “Branch and Bound”-Algorithmus. Der Algorithmus erlaubt dynamische Vorbereitungs-Techniken zur Erzwingung ausgewählter Variablen und Heuristiken zur Wahl von guten Startpunkten. Resultate von Berechnungen und Vergleiche mit früheren Arbeiten mit mehreren hundert Testproblemen mit Dimensionen bis 200 zeigen die Effizienz unseres Algorithmus.", "title": "" }, { "docid": "b27b164a7ff43b8f360167e5f886f18a", "text": "Segmentation and grouping of image elements is required to proceed with image recognition. Due to the fact that the images are two dimensional (2D) representations of the real three dimensional (3D) scenes, the information of the third dimension, like geometrical relations between the objects that are important for reasonable segmentation and grouping, are lost in 2D image representations. Computer stereo vision implies on understanding information stored in 3D-scene. Techniques for stereo computation are observed in this paper. The methods for solving the correspondence problem in stereo image matching are presented. The process of 3D-scene reconstruction from stereo image pairs and extraction of parameters important for image understanding are described. Occluded and surrounding areas in stereo image pairs are stressed out as important for image understanding.", "title": "" }, { "docid": "4cc71db87682a96ddee09e49a861142f", "text": "BACKGROUND\nReadiness is an integral and preliminary step in the successful implementation of telehealth services into existing health systems within rural communities.\n\n\nMETHODS AND MATERIALS\nThis paper details and critiques published international peer-reviewed studies that have focused on assessing telehealth readiness for rural and remote health. Background specific to readiness and change theories is provided, followed by a critique of identified telehealth readiness models, including a commentary on their readiness assessment tools.\n\n\nRESULTS\nFour current readiness models resulted from the search process. The four models varied across settings, such as rural outpatient practices, hospice programs, rural communities, as well as government agencies, national associations, and organizations. All models provided frameworks for readiness tools. Two specifically provided a mechanism by which communities could be categorized by their level of telehealth readiness.\n\n\nDISCUSSION\nCommon themes across models included: an appreciation of practice context, strong leadership, and a perceived need to improve practice. Broad dissemination of these telehealth readiness models and tools is necessary to promote awareness and assessment of readiness. 
This will significantly aid organizations to facilitate the implementation of telehealth.", "title": "" }, { "docid": "44fee78f33e4d5c6d9c8b0126b1d5830", "text": "This paper discusses an industrial case study in which data mining has been applied to solve a quality engineering problem in electronics assembly. During the assembly process, solder balls occur underneath some components of printed circuit boards. The goal is to identify the cause of solder defects in a circuit board using a data mining approach. Statistical process control and design of experiment approaches did not provide conclusive results. The paper discusses features considered in the study, data collected, and the data mining solution approach to identify causes of quality faults in an industrial application.", "title": "" }, { "docid": "9ba6a2042e99c3ace91f0fc017fa3fdd", "text": "This paper proposes a two-element multi-input multi-output (MIMO) open-slot antenna implemented on the display ground plane of a laptop computer for eight-band long-term evolution/wireless wide-area network operations. The metal surroundings of the antennas have been well integrated as a part of the radiation structure. In the single-element open-slot antenna, the nearby hinge slot (which is bounded by two ground planes and two hinges) is relatively large as compared with the open slot itself and acts as a good radiator. In the MIMO antenna consisting of two open-slot elements, a T slot is embedded in the display ground plane and is connected to the hinge slot. The T and hinge slots when connected behave as a radiator; whereas, the T slot itself functions as an isolation element. With the isolation element, simulated isolations between the two elements of the MIMO antenna are raised from 8.3–11.2 to 15–17.1 dB in 698–960 MHz and from 12.1–21 to 15.9–26.7 dB in 1710–2690 MHz. Measured isolations with the isolation element in the desired low- and high-frequency ranges are 17.6–18.8 and 15.2–23.5 dB, respectively. Measured and simulated efficiencies for the two-element MIMO antenna with either element excited are both larger than 50% in the desired operating frequency bands.", "title": "" }, { "docid": "ad3add7522b3a58359d36e624e9e65f7", "text": "In this paper, global and local prosodic features extracted from sentence, word and syllables are proposed for speech emotion or affect recognition. In this work, duration, pitch, and energy values are used to represent the prosodic information, for recognizing the emotions from speech. Global prosodic features represent the gross statistics such as mean, minimum, maximum, standard deviation, and slope of the prosodic contours. Local prosodic features represent the temporal dynamics in the prosody. In this work, global and local prosodic features are analyzed separately and in combination at different levels for the recognition of emotions. In this study, we have also explored the words and syllables at different positions (initial, middle, and final) separately, to analyze their contribution towards the recognition of emotions. In this paper, all the studies are carried out using simulated Telugu emotion speech corpus (IITKGP-SESC). These results are compared with the results of internationally known Berlin emotion speech corpus (Emo-DB). Support vector machines are used to develop the emotion recognition models. The results indicate that, the recognition performance using local prosodic features is better compared to the performance of global prosodic features. 
Words in the final position of the sentences and syllables in the final position of the words exhibit more emotion-discriminative information than the words and syllables present in the other positions.", "title": "" }, { "docid": "33ed6ab1eef74e6ba6649ff5a85ded6b", "text": "With the rapid increase of smartphones and their embedded sensing technologies, mobile crowd sensing (MCS) is becoming an emerging sensing paradigm for performing large-scale sensing tasks. One of the key challenges of large-scale mobile crowd sensing systems is how to effectively select the minimum set of participants from the huge user pool to perform the tasks and achieve a certain level of coverage. In this paper, we introduce a new MCS architecture which leverages the cached sensing data to fulfill partial sensing tasks in order to reduce the size of the selected participant set. We present a newly designed participant selection algorithm with caching and evaluate it via extensive simulations with a real-world mobile dataset.", "title": "" }, { "docid": "f13ffbb31eedcf46df1aaecfbdf61be9", "text": "Finding one's way in a large-scale environment may engage different cognitive processes than following a familiar route. The neural bases of these processes were investigated using functional MRI (fMRI). Subjects found their way in one virtual-reality town and followed a well-learned route in another. In a control condition, subjects followed a visible trail. Within subjects, accurate wayfinding activated the right posterior hippocampus. Between-subjects correlations with performance showed that good navigators (i.e., accurate wayfinders) activated the anterior hippocampus during wayfinding and head of caudate during route following. These results coincide with neurophysiological evidence for distinct response (caudate) and place (hippocampal) representations supporting navigation. We argue that the type of representation used influences both performance and concomitant fMRI activation patterns.", "title": "" } ]
scidocsrr
acba07b0f0738c55be978ceeccf1a993
Emotion Recognition Based on Joint Visual and Audio Cues
[ { "docid": "8877d6753d6b7cd39ba36c074ca56b00", "text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.", "title": "" }, { "docid": "d9ffb9e4bba1205892351b1328977f6c", "text": "Bayesian network models provide an attractive framework for multimodal sensor fusion. They combine an intuitive graphical representation with efficient algorithms for inference and learning. However, the unsupervised nature of standard parameter learning algorithms for Bayesian networks can lead to poor performance in classification tasks. We have developed a supervised learning framework for Bayesian networks, which is based on the Adaboost algorithm of Schapire and Freund. Our framework covers static and dynamic Bayesian networks with both discrete and continuous states. We have tested our framework in the context of a novel multimodal HCI application: a speech-based command and control interface for a Smart Kiosk. We provide experimental evidence for the utility of our boosted learning approach.", "title": "" }, { "docid": "c8e321ac8b32643ac9cbe151bb9e5f8f", "text": "The most expressive way humans display emotions is through facial expressions. In this work we report on several advances we have made in building a system for classification of facial expressions from continuous video input. We introduce and test different Bayesian network classifiers for classifying expressions from video, focusing on changes in distribution assumptions, and feature dependency structures. In particular we use Naive–Bayes classifiers and change the distribution from Gaussian to Cauchy, and use Gaussian Tree-Augmented Naive Bayes (TAN) classifiers to learn the dependencies among different facial motion features. We also introduce a facial expression recognition from live video input using temporal cues. We exploit the existing methods and propose a new architecture of hidden Markov models (HMMs) for automatically segmenting and recognizing human facial expression from video sequences. The architecture performs both segmentation and recognition of the facial expressions automatically using a multi-level architecture composed of an HMM layer and a Markov model layer. We explore both person-dependent and person-independent recognition of expressions and compare the different methods. 2003 Elsevier Inc. All rights reserved. * Corresponding author. E-mail addresses: [email protected] (I. Cohen), [email protected] (N. Sebe), ashutosh@ us.ibm.com (A. 
Garg), [email protected] (L. Chen), [email protected] (T.S. Huang).", "title": "" } ]
[ { "docid": "e0ee4f306bb7539d408f606d3c036ac5", "text": "Despite the growing popularity of mobile web browsing, the energy consumed by a phone browser while surfing the web is poorly understood. We present an infrastructure for measuring the precise energy used by a mobile browser to render web pages. We then measure the energy needed to render financial, e-commerce, email, blogging, news and social networking sites. Our tools are sufficiently precise to measure the energy needed to render individual web elements, such as cascade style sheets (CSS), Javascript, images, and plug-in objects. Our results show that for popular sites, downloading and parsing cascade style sheets and Javascript consumes a significant fraction of the total energy needed to render the page. Using the data we collected we make concrete recommendations on how to design web pages so as to minimize the energy needed to render the page. As an example, by modifying scripts on the Wikipedia mobile site we reduced by 30% the energy needed to download and render Wikipedia pages with no change to the user experience. We conclude by estimating the point at which offloading browser computations to a remote proxy can save energy on the phone.", "title": "" }, { "docid": "10994a99bb4da87a34d835720d005668", "text": "Wireless sensor networks (WSNs), consisting of a large number of nodes to detect ambient environment, are widely deployed in a predefined area to provide more sophisticated sensing, communication, and processing capabilities, especially concerning the maintenance when hundreds or thousands of nodes are required to be deployed over wide areas at the same time. Radio frequency identification (RFID) technology, by reading the low-cost passive tags installed on objects or people, has been widely adopted in the tracing and tracking industry and can support an accurate positioning within a limited distance. Joint utilization of WSN and RFID technologies is attracting increasing attention within the Internet of Things (IoT) community, due to the potential of providing pervasive context-aware applications with advantages from both fields. WSN-RFID convergence is considered especially promising in context-aware systems with indoor positioning capabilities, where data from deployed WSN and RFID systems can be opportunistically exploited to refine and enhance the collected data with position information. In this papera, we design and evaluate a hybrid system which combines WSN and RFID technologies to provide an indoor positioning service with the capability of feeding position information into a general-purpose IoT environment. Performance of the proposed system is evaluated by means of simulations and a small-scale experimental set-up. The performed analysis demonstrates that the joint use of heterogeneous technologies can increase the robustness and the accuracy of the indoor positioning systems.", "title": "" }, { "docid": "1c6bf44a2fea9e9b1ffc015759f8986f", "text": "Convolutional neural networks (CNNs) typically suffer from slow convergence rates in training, which limits their wider application. This paper presents a new CNN learning approach, based on second-order methods, aimed at improving: a) Convergence rates of existing gradient-based methods, and b) Robustness to the choice of learning hyper-parameters (e.g., learning rate). We derive an efficient back-propagation algorithm for simultaneously computing both gradients and second derivatives of the CNN's learning objective. 
These are then input to a Long Short Term Memory (LSTM) to predict optimal updates of CNN parameters in each learning iteration. Both meta-learning of the LSTM and learning of the CNN are conducted jointly. Evaluation on image classification demonstrates that our second-order backpropagation has faster convergences rates than standard gradient-based learning for the same CNN, and that it converges to better optima leading to better performance under a budgeted time for learning. We also show that an LSTM learned to learn a small CNN network can be readily used for learning a larger network.", "title": "" }, { "docid": "564045d00d2e347252fda301a332f30a", "text": "In this contribution, the control of a reverse osmosis desalination plant by using an optimal multi-loop approach is presented. Controllers are assumed to be players of a cooperative game, whose solution is obtained by multi-objective optimization (MOO). The MOO problem is solved by applying a genetic algorithm and the final solution is found from this Pareto set. For the reverse osmosis plant a control scheme consisting of two PI control loops are proposed. Simulation results show that in some cases, as for example this desalination plant, multi-loop control with several controllers, which have been obtained by join multi-objective optimization, perform as good as more complex controllers but with less implementation effort.", "title": "" }, { "docid": "848e56ec20ccab212567087178e36979", "text": "The technologies of mobile communications pervade our society and wireless networks sense the movement of people, generating large volumes of mobility data, such as mobile phone call records and Global Positioning System (GPS) tracks. In this work, we illustrate the striking analytical power of massive collections of trajectory data in unveiling the complexity of human mobility. We present the results of a large-scale experiment, based on the detailed trajectories of tens of thousands private cars with on-board GPS receivers, tracked during weeks of ordinary mobile activity. We illustrate the knowledge discovery process that, based on these data, addresses some fundamental questions of mobility analysts: what are the frequent patterns of people’s travels? How big attractors and extraordinary events influence mobility? How to predict areas of dense traffic in the near future? How to characterize traffic jams and congestions? We also describe M-Atlas, the querying and mining language and system that makes this analytical process possible, providing the mechanisms to master the complexity of transforming raw GPS tracks into mobility knowledge. M-Atlas is centered onto the concept of a trajectory, and the mobility knowledge discovery process can be specified by M-Atlas queries that realize data transformations, data-driven estimation of the parameters of the mining methods, the quality assessment of the obtained results, the quantitative and visual exploration of the discovered behavioral patterns and models, the composition of mined patterns, models and data with further analyses and mining, and the incremental mining strategies to address scalability.", "title": "" }, { "docid": "e8e658d677a3b1a23650b25edd32fc84", "text": "The aim of the study is to facilitate the suture on the sacral promontory for laparoscopic sacrocolpopexy. 
We hypothesised that a new method of sacral anchorage using a biosynthetic material, the polyether ether ketone (PEEK) harpoon, might be adequate because of its tensile strength, might reduce complications owing to its well-known biocompatibility, and might shorten the duration of surgery. We verified the feasibility of insertion and quantified the stress resistance of the harpoons placed in the promontory in nine fresh cadavers, using four stress tests in each case. Mean values were analysed and compared using the Wilcoxon and Fisher’s exact tests. The harpoon resists for at least 30 s against a pulling force of 1 N, 5 N and 10 N. Maximum tensile strength is 21 N for the harpoon and 32 N for the suture. Harpoons broke in 6 % and threads in 22 % of cases. Harpoons detached owing to ligament rupture in 64 % of the cases. Regarding failures of the whole complex, the failure involves the harpoon in 92 % of cases and the thread in 56 %. The four possible placements of the harpoon in the promontory were equally safe in terms of resistance to traction. The PEEK harpoon can be easily anchored in the promontory. Thread is more resistant to traction than the harpoon, but the latter makes the surgical technique easier. Any of the four locations tested is feasible for anchoring the device.", "title": "" }, { "docid": "4d383a53c180d5dc4473ab9d7795639a", "text": "With pervasive applications of medical imaging in health-care, biomedical image segmentation plays a central role in quantitative analysis, clinical diagnosis, and medical intervention. Since manual annotation suffers limited reproducibility, arduous efforts, and excessive time, automatic segmentation is desired to process increasingly larger scale histopathological data. Recently, deep neural networks (DNNs), particularly fully convolutional networks (FCNs), have been widely applied to biomedical image segmentation, attaining much improved performance. At the same time, quantization of DNNs has become an active research topic, which aims to represent weights with less memory (precision) to considerably reduce memory and computation requirements of DNNs while maintaining acceptable accuracy. In this paper, we apply quantization techniques to FCNs for accurate biomedical image segmentation. Unlike existing literatures on quantization which primarily targets memory and computation complexity reduction, we apply quantization as a method to reduce overfitting in FCNs for better accuracy. Specifically, we focus on a state-of-the-art segmentation framework, suggestive annotation [26], which judiciously extracts representative annotation samples from the original training dataset, obtaining an effective small-sized balanced training dataset. We develop two new quantization processes for this framework: (1) suggestive annotation with quantization for highly representative training samples, and (2) network training with quantization for high accuracy. Extensive experiments on the MICCAI Gland dataset show that both quantization processes can improve the segmentation performance, and our proposed method exceeds the current state-of-the-art performance by up to 1%. In addition, our method has a reduction of up to 6.4x on memory usage.", "title": "" }, { "docid": "71b31941082d639dfc6178ff74fba487", "text": "This paper describes ETH Zurich’s submission to the TREC 2016 Clinical Decision Support (CDS) track. 
In three successive stages, we apply query expansion based on literal as well as semantic term matches, rank documents in a negation-aware manner and, finally, re-rank them based on clinical intent types as well as semantic and conceptual affinity to the medical case in question. Empirical results show that the proposed method can distill patient representations from raw clinical notes that result in a retrieval performance superior to that of manually constructed case descriptions.", "title": "" }, { "docid": "3be0bd7f02c941f32903f6ad2379f45b", "text": "Spinal cord injury induces the disruption of blood-spinal cord barrier and triggers a complex array of tissue responses, including endoplasmic reticulum (ER) stress and autophagy. However, the roles of ER stress and autophagy in blood-spinal cord barrier disruption have not been discussed in acute spinal cord trauma. In the present study, we respectively detected the roles of ER stress and autophagy in blood-spinal cord barrier disruption after spinal cord injury. Besides, we also detected the cross-talking between autophagy and ER stress both in vivo and in vitro. ER stress inhibitor, 4-phenylbutyric acid, and autophagy inhibitor, chloroquine, were respectively or combinedly administrated in the model of acute spinal cord injury rats. At day 1 after spinal cord injury, blood-spinal cord barrier was disrupted and activation of ER stress and autophagy were involved in the rat model of trauma. Inhibition of ER stress by treating with 4-phenylbutyric acid decreased blood-spinal cord barrier permeability, prevented the loss of tight junction (TJ) proteins and reduced autophagy activation after spinal cord injury. On the contrary, inhibition of autophagy by treating with chloroquine exacerbated blood-spinal cord barrier permeability, promoted the loss of TJ proteins and enhanced ER stress after spinal cord injury. When 4-phenylbutyric acid and chloroquine were combinedly administrated in spinal cord injury rats, chloroquine abolished the blood-spinal cord barrier protective effect of 4-phenylbutyric acid by exacerbating ER stress after spinal cord injury, indicating that the cross-talking between autophagy and ER stress may play a central role on blood-spinal cord barrier integrity in acute spinal cord injury. The present study illustrates that ER stress induced by spinal cord injury plays a detrimental role on blood-spinal cord barrier integrity, on the contrary, autophagy induced by spinal cord injury plays a furthersome role in blood-spinal cord barrier integrity in acute spinal cord injury.", "title": "" }, { "docid": "c27ba892408391234da524ffab0e7418", "text": "Sunlight and skylight are rarely rendered correctly in computer graphics. A major reason for this is high computational expense. Another is that precise atmospheric data is rarely available. We present an inexpensive analytic model that approximates full spectrum daylight for various atmospheric conditions. These conditions are parameterized using terms that users can either measure or estimate. We also present an inexpensive analytic model that approximates the effects of atmosphere (aerial perspective). These models are fielded in a number of conditions and intermediate results verified against standard literature from atmospheric science. 
Our goal is to achieve as much accuracy as possible without sacrificing usability.", "title": "" }, { "docid": "be3640467394a0e0b5a5035749b442e9", "text": "Data pre-processing is an important and critical step in the data mining process and it has a huge impact on the success of a data mining project.[1](3) Data pre-processing is a step of the Knowledge discovery in databases (KDD) process that reduces the complexity of the data and offers better conditions to subsequent analysis. Through this the nature of the data is better understood and the data analysis is performed more accurately and efficiently. Data pre-processing is challenging as it involves extensive manual effort and time in developing the data operation scripts. There are a number of different tools and methods used for pre-processing, including: sampling, which selects a representative subset from a large population of data; transformation, which manipulates raw data to produce a single input; denoising, which removes noise from data; normalization, which organizes data for more efficient access; and feature extraction, which pulls out specified data that is significant in some particular context. Pre-processing technique is also useful for association rules algo. LikeAprior, Partitioned, Princer-search algo. and many more algos.", "title": "" }, { "docid": "566913d3a3d2e8fe24d6f5ff78440b94", "text": "We describe a Digital Advertising System Simulation (DASS) for modeling advertising and its impact on user behavior. DASS is both flexible and general, and can be applied to research on a wide range of topics, such as digital attribution, ad fatigue, campaign optimization, and marketing mix modeling. This paper introduces the basic DASS simulation framework and illustrates its application to digital attribution. We show that common position-based attribution models fail to capture the true causal effects of advertising across several simple scenarios. These results lay a groundwork for the evaluation of more complex attribution models, and the development of improved models.", "title": "" }, { "docid": "3aa36b86391a2596ea1fe1fe75470362", "text": "Experimental and computational studies of the hovering performance of microcoaxial shrouded rotors were carried out. The ATI Mini Multi-Axis Force/Torque Transducer system was used to measure all six components of the force and moment. Meanwhile, numerical simulation of flow field around rotor was carried out using sliding mesh method and multiple reference frame technique by ANASYS FLUENT. The computational results were well agreed with experimental data. Several important factors, such as blade pitch angle, rotor spacing and tip clearance, which influence the performance of shrouded coaxial rotor are studied in detail using CFD method in this paper. Results shows that, evaluated in terms of Figure of Merit, open coaxial rotor is suited for smaller pitch angle condition while shrouded coaxial rotor is suited for larger pitch angle condition. The negative pressure region around the shroud lip is the main source of the thrust generation. In order to have a better performance for shrouded coaxial rotor, the tip clearance must be smaller. The thrust sharing of upper- and lower-rotor is also discussed in this paper.", "title": "" }, { "docid": "785bd7171800d3f2f59f90838a84dc37", "text": "BACKGROUND\nCancer is considered to develop due to disruptions in the tissue microenvironment in addition to genetic disruptions in the tumor cells themselves. 
The two most important microenvironmental disruptions in cancer are arguably tissue hypoxia and disrupted circadian rhythmicity. Endothelial cells, which line the luminal side of all blood vessels transport oxygen or endocrine circadian regulators to the tissue and are therefore of key importance for circadian disruption and hypoxia in tumors.\n\n\nSCOPE OF REVIEW\nHere I review recent findings on the role of circadian rhythms and hypoxia in cancer and metastasis, with particular emphasis on how these pathways link tumor metastasis to pathological functions of blood vessels. The involvement of disrupted cell metabolism and redox homeostasis in this context and the use of novel zebrafish models for such studies will be discussed.\n\n\nMAJOR CONCLUSIONS\nCircadian rhythms and hypoxia are involved in tumor metastasis on all levels from pathological deregulation of the cell to the tissue and the whole organism. Pathological tumor blood vessels cause hypoxia and disruption in circadian rhythmicity which in turn drives tumor metastasis. Zebrafish models may be used to increase our understanding of the mechanisms behind hypoxia and circadian regulation of metastasis.\n\n\nGENERAL SIGNIFICANCE\nDisrupted blood flow in tumors is currently seen as a therapeutic goal in cancer treatment, but may drive invasion and metastasis via pathological hypoxia and circadian clock signaling. Understanding the molecular details behind such regulation is important to optimize treatment for patients with solid tumors in the future. This article is part of a Special Issue entitled Redox regulation of differentiation and de-differentiation.", "title": "" }, { "docid": "a398f3f5b670a9d2c9ae8ad84a4a3cb8", "text": "This project deals with online simultaneous localization and mapping (SLAM) problem without taking any assistance from Global Positioning System (GPS) and Inertial Measurement Unit (IMU). The main aim of this project is to perform online odometry and mapping in real time using a 2-axis lidar mounted on a robot. This involves use of two algorithms, the first of which runs at a higher frequency and uses the collected data to estimate velocity of the lidar which is fed to the second algorithm, a scan registration and mapping algorithm, to perform accurate matching of point cloud data.", "title": "" }, { "docid": "fada1434ec6e060eee9a2431688f82f3", "text": "Neural language models (NLMs) have been able to improve machine translation (MT) thanks to their ability to generalize well to long contexts. Despite recent successes of deep neural networks in speech and vision, the general practice in MT is to incorporate NLMs with only one or two hidden layers and there have not been clear results on whether having more layers helps. In this paper, we demonstrate that deep NLMs with three or four layers outperform those with fewer layers in terms of both the perplexity and the translation quality. We combine various techniques to successfully train deep NLMs that jointly condition on both the source and target contexts. When reranking nbest lists of a strong web-forum baseline, our deep models yield an average boost of 0.5 TER / 0.5 BLEU points compared to using a shallow NLM. 
Additionally, we adapt our models to a new sms-chat domain and obtain a similar gain of 1.0 TER / 0.5 BLEU points.", "title": "" }, { "docid": "3ca76a840ac35d94677fa45c767e61f1", "text": "A three dimensional (3-D) imaging system is implemented by employing 2-D range migration algorithm (RMA) for frequency modulated continuous wave synthetic aperture radar (FMCW-SAR). The backscattered data of a 1-D synthetic aperture at specific altitudes are coherently integrated to form 2-D images. These 2-D images at different altitudes are stitched vertically to form a 3-D image. Numerical simulation for near-field scenario are also presented to validate the proposed algorithm.", "title": "" }, { "docid": "e82681b5140f3a9b283bbd02870f18d5", "text": "Employee turnover has been identified as a key issue for organizations because of its adverse impact on work place productivity and long term growth strategies. To solve this problem, organizations use machine learning techniques to predict employee turnover. Accurate predictions enable organizations to take action for retention or succession planning of employees. However, the data for this modeling problem comes from HR Information Systems (HRIS); these are typically under-funded compared to the Information Systems of other domains in the organization which are directly related to its priorities. This leads to the prevalence of noise in the data that renders predictive models prone to over-fitting and hence inaccurate. This is the key challenge that is the focus of this paper, and one that has not been addressed historically. The novel contribution of this paper is to explore the application of Extreme Gradient Boosting (XGBoost) technique which is more robust because of its regularization formulation. Data from the HRIS of a global retailer is used to compare XGBoost against six historically used supervised classifiers and demonstrate its significantly higher accuracy for predicting employee turnover. Keywords—turnover prediction; machine learning; extreme gradient boosting; supervised classification; regularization", "title": "" }, { "docid": "ba573c3dd5206e7f71be11d030060484", "text": "The availability of camera phones provides people with a mobile platform for decoding bar codes, whereas conventional scanners lack mobility. However, using a normal camera phone in such applications is challenging due to the out-of-focus problem. In this paper, we present the research effort on the bar code reading algorithms using a VGA camera phone, NOKIA 7650. EAN-13, a widely used 1D bar code standard, is taken as an example to show the efficiency of the method. A wavelet-based bar code region location and knowledge-based bar code segmentation scheme is applied to extract bar code characters from poor-quality images. All the segmented bar code characters are input to the recognition engine, and based on the recognition distance, the bar code character string with the smallest total distance is output as the final recognition result of the bar code. In order to train an efficient recognition engine, the modified Generalized Learning Vector Quantization (GLVQ) method is designed for optimizing a feature extraction matrix and the class reference vectors. 19 584 samples segmented from more than 1000 bar code images captured by NOKIA 7650 are involved in the training process. Testing on 292 bar code images taken by the same phone, the correct recognition rate of the entire bar code set reaches 85.62%. 
We are confident that autofocus or macro modes on camera phones will bring the presented method into real-world mobile use.", "title": "" } ]
scidocsrr
c205d05981a16dc9ba2c9e74a009d8db
Neural Cryptanalysis of Classical Ciphers
[ { "docid": "ff10bbde3ed18eea73375540135f99f4", "text": "Recurrent neural networks (RNNs) represent the state of the art in translation, image captioning, and speech recognition. They are also capable of learning algorithmic tasks such as long addition, copying, and sorting from a set of training examples. We demonstrate that RNNs can learn decryption algorithms – the mappings from plaintext to ciphertext – for three polyalphabetic ciphers (Vigenere, Autokey, and Enigma). Most notably, we demonstrate that an RNN with a 3000-unit Long Short-Term Memory (LSTM) cell can learn the decryption function of the Enigma machine. We argue that our model learns efficient internal representations of these ciphers 1) by exploring activations of individual memory neurons and 2) by comparing memory usage across the three ciphers. To be clear, our work is not aimed at ’cracking’ the Enigma cipher. However, we do show that our model can perform elementary cryptanalysis by running known-plaintext attacks on the Vigenere and Autokey ciphers. Our results indicate that RNNs can learn algorithmic representations of black box polyalphabetic ciphers and that these representations are useful for cryptanalysis.", "title": "" }, { "docid": "f8f1e4f03c6416e9d9500472f5e00dbe", "text": "Template attack is the most common and powerful profiled side channel attack. It relies on a realistic assumption regarding the noise of the device under attack: the probability density function of the data is a multivariate Gaussian distribution. To relax this assumption, a recent line of research has investigated new profiling approaches mainly by applying machine learning techniques. The obtained results are commensurate, and in some particular cases better, compared to template attack. In this work, we propose to continue this recent line of research by applying more sophisticated profiling techniques based on deep learning. Our experimental results confirm the overwhelming advantages of the resulting new attacks when targeting both unprotected and protected cryptographic implementations.", "title": "" } ]
[ { "docid": "2679d251d413adf208cb8b764ce55468", "text": "We compare variations of string comparators based on the Jaro-Winkler comparator and edit distance comparator. We apply the comparators to Census data to see which are better classifiers for matches and nonmatches, first by comparing their classification abilities using a ROC curve based analysis, then by considering a direct comparison between two candidate comparators in record linkage results.", "title": "" }, { "docid": "e0ec22fcdc92abe141aeb3fa67e9e55a", "text": "A mobile wireless infrastructure-less network is a collection of wireless mobile nodes dynamically forming a temporary network without the use of any preexisting network infrastructure or centralized administration. However, the battery life of these nodes is very limited, if their battery power is depleted fully, then this result in network partition, so these nodes becomes a critical spot in the network. These critical nodes can deplete their battery power earlier because of excessive load and processing for data forwarding. These unbalanced loads turn to increase the chances of nodes failure, network partition and reduce the route lifetime and route reliability of the MANETs. Due to this, energy consumption issue becomes a vital research topic in wireless infrastructure -less networks. The energy efficient routing is a most important design criterion for MANETs. This paper focuses of the routing approaches are based on the minimization of energy consum ption of individual nodes and many other ways. This paper surveys and classifies numerous energy-efficient routing mechanisms proposed for wireless infrastructure-less networks. Also presents detailed comparative study of lager number of energy efficient/power aware routing protocol in MANETs. Aim of this paper to helps the new researchers and application developers to explore an innovative idea for designing more efficient routing protocols. Keywords— Ad hoc Network Routing, Load Distribution, Energy Eff icient, Power Aware, Protocol Stack", "title": "" }, { "docid": "1ee1adcfd73e9685eab4e2abd28183c7", "text": "We describe an algorithm for generating spherical mosaics from a collection of images acquired from a common optical center. The algorithm takes as input an arbitrary number of partially overlapping images, an adjacency map relating the images, initial estimates of the rotations relating each image to a specified base image, and approximate internal calibration information for the camera. The algorithm's output is a rotation relating each image to the base image, and revised estimates of the camera's internal parameters. Our algorithm is novel in the following respects. First, it requires no user input. (Our image capture instrumentation provides both an adjacency map for the mosaic, and an initial rotation estimate for each image.) Second, it optimizes an objective function based on a global correlation of overlapping image regions. Third, our representation of rotations significantly increases the accuracy of the optimization. Finally, our representation and use of adjacency information guarantees globally consistent rotation estimates. The algorithm has proved effective on a collection of nearly four thousand images acquired from more than eighty distinct optical centers. 
The experimental results demonstrate that the described global optimization strategy is superior to non-global aggregation of pair-wise correlation terms, and that it successfully generates high-quality mosaics despite significant error in initial rotation estimates.", "title": "" }, { "docid": "1e31afb6d28b0489e67bb63d4dd60204", "text": "An educational use of Pepper, a personal robot that was developed by SoftBank Robotics Corp. and Aldebaran Robotics SAS, is described. Applying the two concepts of care-receiving robot (CRR) and total physical response (TPR) into the design of an educational application using Pepper, we offer a scenario in which children learn together with Pepper at their home environments from a human teacher who gives a lesson from a remote classroom. This paper is a case report that explains the developmental process of the application that contains three educational programs that children can select in interacting with Pepper. Feedbacks and knowledge obtained from test trials are also described.", "title": "" }, { "docid": "a112a01246256e38b563f616baf02cef", "text": "This is the second of two papers describing a procedure for the three dimensional nonlinear timehistory analysis of steel framed buildings. An overview of the procedure and the theory for the panel zone element and the plastic hinge beam element are presented in Part I. In this paper, the theory for an efficient new element for modeling beams and columns in steel frames called the elastofiber element is presented, along with four illustrative examples. The elastofiber beam element is divided into three segments two end nonlinear segments and an interior elastic segment. The cross-sections of the end segments are subdivided into fibers. Associated with each fiber is a nonlinear hysteretic stress-strain law for axial stress and strain. This accounts for coupling of nonlinear material behavior between bending about the major and minor axes of the cross-section and axial deformation. Examples presented include large deflection of an elastic cantilever, cyclic loading of a cantilever beam, pushover analysis of a 20-story steel moment-frame building to collapse, and strong ground motion analysis of a 2-story unsymmetric steel moment-frame building. 1Post-Doctoral Scholar, Seismological Laboratory, MC 252-21, California Institute of Technology, Pasadena, CA91125. Email: [email protected] 2Professor, Civil Engineering and Applied Mechanics, MC 104-44, California Institute of Technology, Pasadena, CA-91125", "title": "" }, { "docid": "429c6591223007b40ef7bffc5d9ac4db", "text": "A compact dual-polarized double E-shaped patch antenna with high isolation for pico base station applications is presented in this communication. The proposed antenna employs a stacked configuration composed of two layers of substrate. Two modified E-shaped patches are printed orthogonally on both sides of the upper substrate. Two probes are used to excite the E-shaped patches, and each probe is connected to one patch separately. A circular patch is printed on the lower substrate to broaden the impedance bandwidth. Both simulated and measured results show that the proposed antenna has a port isolation higher than 30 dB over the frequency band of 2.5 GHz - 2.7 GHz, while the return loss is less than - 15 dB within the band. 
Moreover, stable radiation pattern with a peak gain of 6.8 dBi - 7.4 dBi is obtained within the band.", "title": "" }, { "docid": "7adf46bb0a4ba677e58aee9968d06293", "text": "BACKGROUND\nWork-family conflict is a type of interrole conflict that occurs as a result of incompatible role pressures from the work and family domains. Work role characteristics that are associated with work demands refer to pressures arising from excessive workload and time pressures. Literature suggests that work demands such as number of hours worked, workload, shift work are positively associated with work-family conflict, which, in turn is related to poor mental health and negative organizational attitudes. The role of social support has been an issue of debate in the literature. This study examined social support both as a moderator and a main effect in the relationship among work demands, work-to-family conflict, and satisfaction with job and life.\n\n\nOBJECTIVES\nThis study examined the extent to which work demands (i.e., work overload, irregular work schedules, long hours of work, and overtime work) were related to work-to-family conflict as well as life and job satisfaction of nurses in Turkey. The role of supervisory support in the relationship among work demands, work-to-family conflict, and satisfaction with job and life was also investigated.\n\n\nDESIGN AND METHODS\nThe sample was comprised of 243 participants: 106 academic nurses (43.6%) and 137 clinical nurses (56.4%). All of the respondents were female. The research instrument was a questionnaire comprising nine parts. The variables were measured under four categories: work demands, work support (i.e., supervisory support), work-to-family conflict and its outcomes (i.e., life and job satisfaction).\n\n\nRESULTS\nThe structural equation modeling results showed that work overload and irregular work schedules were the significant predictors of work-to-family conflict and that work-to-family conflict was associated with lower job and life satisfaction. Moderated multiple regression analyses showed that social support from the supervisor did not moderate the relationships among work demands, work-to-family conflict, and satisfaction with job and life. Exploratory analyses suggested that social support could be best conceptualized as the main effect directly influencing work-to-family conflict and job satisfaction.\n\n\nCONCLUSION\nNurses' psychological well-being and organizational attitudes could be enhanced by rearranging work conditions to reduce excessive workload and irregular work schedule. Also, leadership development programs should be implemented to increase the instrumental and emotional support of the supervisors.", "title": "" }, { "docid": "97f748ee5667ee8c2230e07881574c22", "text": "The most widely used signal in clinical practice is the ECG. ECG conveys information regarding the electrical function of the heart, by altering the shape of its constituent waves, namely the P, QRS, and T waves. Thus, the required tasks of ECG processing are the reliable recognition of these waves, and the accurate measurement of clinically important parameters measured from the temporal distribution of the ECG constituent waves. In this paper, we shall review some current trends on ECG pattern recognition. 
In particular, we shall review non-linear transformations of the ECG, the use of principal component analysis (linear and non-linear), ways to map the transformed data into n-dimensional spaces, and the use of neural networks (NN) based techniques for ECG pattern recognition and classification. The problems we shall deal with are the QRS/PVC recognition and classification, the recognition of ischemic beats and episodes, and the detection of atrial fibrillation. Finally, a generalised approach to the classification problems in n-dimensional spaces will be presented using among others NN, radial basis function networks (RBFN) and non-linear principal component analysis (NLPCA) techniques. The performance measures of the sensitivity and specificity of these algorithms will also be presented using as training and testing data sets from the MIT-BIH and the European ST-T databases.", "title": "" }, { "docid": "f9468884fd24ff36b81fc2016a519634", "text": "We study a new variant of Arikan's successive cancellation decoder (SCD) for polar codes. We first propose a new decoding algorithm on a new decoder graph, where the various stages of the graph are permuted. We then observe that, even though the usage of the permuted graph doesn't affect the encoder, it can significantly affect the decoding performance of a given polar code. The new permuted successive cancellation decoder (PSCD) typically exhibits a performance degradation, since the polar code is optimized for the standard SCD. We then present a new polar code construction rule matched to the PSCD and show their performance in simulations. For all rates we observe that the polar code matched to a given PSCD performs the same as the original polar code with the standard SCD. We also see that a PSCD with a reversal permutation can lead to a natural decoding order, avoiding the standard bit-reversal decoding order in SCD without any loss in performance.", "title": "" }, { "docid": "101af3fab1f8abb4e2b75a067031048a", "text": "Although research on trust in an organizational context has advanced considerably in recent years, the literature has yet to produce a set of generalizable propositions that inform our understanding of the organization and coordination of work. We propose that conceptualizing trust as an organizing principle is a powerful way of integrating the diverse trust literature and distilling generalizable implications for how trust affects organizing. We develop the notion of trust as an organizing principle by specifying structuring and mobilizing as two sets of causal pathways through which trust influences several important properties of organizations. We further describe specific mechanisms within structuring and mobilizing that influence interaction patterns and organizational processes. The principal aim of the framework is to advance the literature by connecting the psychological and sociological micro-foundations of trust with the macro-bases of organizing. The paper concludes by demonstrating how the framework can be applied to yield novel insights into traditional views of organizations and to stimulate original and innovative avenues of organizational research that consider both the benefits and downsides of trust. (Trust; Organizing Principle; Structuring; Mobilizing) Introduction In the introduction to this special issue we observed that empirical research on trust was not keeping pace with theoretical developments in the field. 
We viewed this as a significant limitation and surmised that a special issue devoted to empirical research on trust would serve as a valuable vehicle for advancing the literature. In addition to the lack of empirical research, we would also make the observation that theories and evidence accumulating on trust in organizations is not well integrated and that the literature as a whole lacks coherence. At a general level, extant research provides “accumulating evidence that trust has a number of important benefits for organizations and their members” (Kramer 1999, p. 569). More specifically, Dirks and Ferrin’s (2001) review of the literature points to two distinct means through which trust generates these benefits. The dominant approach emphasizes the direct effects that trust has on important organizational phenomena such as: communication, conflict management, negotiation processes, satisfaction, and performance (both individual and unit). A second, less well studied, perspective points to the enabling effects of trust, whereby trust creates or enhances the conditions, such as positive interpretations of another’s behavior, that are conducive to obtaining organizational outcomes like cooperation and higher performance. The identification of these two perspectives provides a useful way of organizing the literature and generating insight into the mechanisms through which trust influences organizational outcomes. However, we are still left with a set of findings that have yet to be integrated on a theoretical level in a way that yields a set of generalizable propositions about the effects of trust on organizing. We believe this is due to the fact that research has, for the most part, embedded trust into existing theories. As a result, trust has been studied in a variety of different ways to address a wide range of organizational questions. This has yielded a diverse and eclectic body of knowledge about the relationship between trust and various organizational outcomes. At the same time, this approach has resulted in a somewhat fragmented view of the role of trust in an organizational context as a whole. In the remainder of this paper we begin to address the challenge of integrating the fragmented trust literature. While it is not feasible to develop a comprehensive framework that synthesizes the vast and diverse trust literature in a single paper, we draw together several key strands that relate to the organizational context. In particular, our paper aims to advance the literature by connecting the psychological and sociological microfoundations of trust with the macro-bases of organizing. Specifically, we propose that reconceptualizing trust as an organizing principle is a fruitful way of viewing the role of trust and comprehending how research on trust advances our understanding of the organization and coordination of economic activity. While it is our goal to generate a framework that coalesces our thinking about the processes through which trust, as an organizing principle, affects organizational life, we are not Pollyannish: trust indubitably has a down side, which has been little researched. We begin by elaborating on the notion of an organizing principle and then move on to conceptualize trust from this perspective. Next, we describe a set of generalizable causal pathways through which trust affects organizing.
We then use that framework to identify some exemplars of possible research questions and to point to possible downsides of trust. Organizing Principles As Ouchi (1980) discusses, a fundamental purpose of organizations is to attain goals that require coordinated efforts. Interdependence and uncertainty make goal attainment more difficult and create the need for organizational solutions. The subdivision of work implies that actors must exchange information and rely on others to accomplish organizational goals without having complete control over, or being able to fully monitor, others’ behaviors. Coordinating actions is further complicated by the fact that actors cannot assume that their interests and goals are perfectly aligned. Consequently, relying on others is difficult when there is uncertainty about their intentions, motives, and competencies. Managing interdependence among individuals, units, and activities in the face of behavioral uncertainty constitutes a key organizational challenge. Organizing principles represent a way of solving the problem of interdependence and uncertainty. An organizing principle is the logic by which work is coordinated and information is gathered, disseminated, and processed within and between organizations (Zander and Kogut 1995). An organizing principle represents a heuristic for how actors interpret and represent information and how they select appropriate behaviors and routines for coordinating actions. Examples of organizing principles include: market, hierarchy, and clan (Ouchi 1980). Other have referred to these organizing principles as authority, price, and norms (Adler 2001, Bradach and Eccles 1989, Powell 1990). Each of these principles operates on the basis of distinct mechanisms that orient, enable, and constrain economic behavior. For instance, authority as an organizing principle solves the problem of coordinating action in the face of interdependence and uncertainty by reallocating decision-making rights (Simon 1957, Coleman 1990). Price-based organizing principles revolve around the idea of making coordination advantageous for each party involved by aligning incentives (Hayek 1948, Alchian and Demsetz 1972). Compliance to internalized norms and the resulting self-control of the clan form is another organizing principle that has been identified as a means of achieving coordinated action (Ouchi 1980). We propose that trust is also an organizing principle and that conceptualizing trust in this way provides a powerful means of integrating the disparate research on trust and distilling generalizable implications for how trust affects organizing. We view trust as most closely related to the clan organizing principle. By definition clans rely on trust (Ouchi 1980). However, trust can and does occur in organizational contexts outside of clans. For instance, there are a variety of organizational arrangements where cooperation in mixed-motive situations depends on trust, such as in repeated strategic alliances (Gulati 1995), buyer-supplier relationships (Dyer and Chu this issue), and temporary groups in organizations (Meyerson et al. 1996). More generally, we believe that trust frequently operates in conjunction with other organizing principles. For instance, Dirks (2000) found that while authority is important for behaviors that can be observed or controlled, trust is important when there exists performance ambiguity or behaviors that cannot be observed or controlled. 
Because most organizations have a combination of behaviors that can and cannot be observed or controlled, authority and trust co-occur. More generally, we believe that mixed or plural forms are the norm, consistent with Bradach and Eccles (1989). In some situations, however, trust may be the primary organizing principle, such as when monitoring and formal controls are difficult and costly to use. In these cases, trust represents an efficient choice. In other situations, trust may be relied upon due to social, rather than efficiency, considerations. For instance, achieving a sense of personal belonging within a collectivity (Podolny and Barron 1997) and the desire to develop and maintain rewarding social attachments (Granovetter 1985) may serve as the impetus for relying on trust as an organizing principle. Trust as an Organizing Principle At a general level trust is the willingness to accept vulnerability based on positive expectations about another’s intentions or behaviors (Mayer et al. 1995, Rousseau et al. 1998). Because trust represents a positive assumption BILL MCEVILY, VINCENZO PERRONE, AND AKBAR ZAHEER Trust as an Organizing Principle ORGANIZATION SCIENCE/Vol. 14, No. 1, January–February 2003 93 about the motives and intentions of another party, it allows people to economize on information processing and safeguarding behaviors. By representing an expectation that others will act in a way that serves, or at least is not inimical to, one’s interests (Gambetta 1988), trust as a heuristic is a frame of reference that al", "title": "" }, { "docid": "13897df01d4c03191dd015a04c3a5394", "text": "Medical or Health related search queries constitute a significant portion of the total number of queries searched everyday on the web. For health queries, the authenticity or authoritativeness of search results is of utmost importance besides relevance. So far, research in automatic detection of authoritative sources on the web has mainly focused on a) link structure based approaches and b) supervised approaches for predicting trustworthiness. However, the aforementioned approaches have some inherent limitations. For example, several content farm and low quality sites artificially boost their link-based authority rankings by forming a syndicate of highly interlinked domains and content which is algorithmically hard to detect. Moreover, the number of positively labeled training samples available for learning trustworthiness is also limited when compared to the size of the web. In this paper, we propose a novel unsupervised approach to detect and promote authoritative domains in health segment using click-through data. We argue that standard IR metrics such as NDCG are relevance-centric and hence are not suitable for evaluating authority. We propose a new authority-centric evaluation metric based on side-by-side judgment of results. Using real world search query sets, we evaluate our approach both quantitatively and qualitatively and show that it succeeds in significantly improving the authoritativeness of results when compared to a standard web ranking baseline. ∗Corresponding Author", "title": "" }, { "docid": "07570935aad8a481ea5e9d422c4f80ca", "text": "Continuous modification of the protein composition at synapses is a driving force for the plastic changes of synaptic strength, and provides the fundamental molecular mechanism of synaptic plasticity and information storage in the brain. 
Studying synaptic protein turnover is not only important for understanding learning and memory, but also has direct implication for understanding pathological conditions like aging, neurodegenerative diseases, and psychiatric disorders. Proteins involved in synaptic transmission and synaptic plasticity are typically concentrated at synapses of neurons and thus appear as puncta (clusters) in immunofluorescence microscopy images. Quantitative measurement of the changes in puncta density, intensity, and sizes of specific proteins provide valuable information on their function in synaptic transmission, circuit development, synaptic plasticity, and synaptopathy. Unfortunately, puncta quantification is very labor intensive and time consuming. In this article, we describe a software tool designed for the rapid semi-automatic detection and quantification of synaptic protein puncta from 2D immunofluorescence images generated by confocal laser scanning microscopy. The software, dubbed as SynPAnal (for Synaptic Puncta Analysis), streamlines data quantification for puncta density and average intensity, thereby increases data analysis throughput compared to a manual method. SynPAnal is stand-alone software written using the JAVA programming language, and thus is portable and platform-free.", "title": "" }, { "docid": "b4f82364c5c4900058f50325ccc9e4c4", "text": "OBJECTIVE\nThis study reports the psychometric properties of the 24-item version of the Diabetes Knowledge Questionnaire (DKQ).\n\n\nRESEARCH DESIGN AND METHODS\nThe original 60-item DKQ was administered to 502 adult Mexican-Americans with type 2 diabetes who are part of the Starr County Diabetes Education Study. The sample was composed of 252 participants and 250 support partners. The subjects were randomly assigned to the educational and social support intervention (n = 250) or to the wait-listed control group (n = 252). A shortened 24-item version of the DKQ was derived from the original instrument after data collection was completed. Reliability was assessed by means of Cronbach's coefficient alpha. To determine validity, differentiation between the experimental and control groups was conducted at baseline and after the educational portion of the intervention.\n\n\nRESULTS\nThe 24-item version of the DKQ (DKQ-24) attained a reliability coefficient of 0.78, indicating internal consistency, and showed sensitivity to the intervention, suggesting construct validation.\n\n\nCONCLUSIONS\nThe DKQ-24 is a reliable and valid measure of diabetes-related knowledge that is relatively easy to administer to either English or Spanish speakers.", "title": "" }, { "docid": "8b2b8eb2d16b28dac8ec8d4572b8db0e", "text": "Combining meaning, memory, and development, the perennially popular topic of intuition can be approached in a new way. Fuzzy-trace theory integrates these topics by distinguishing between meaning-based gist representations, which support fuzzy (yet advanced) intuition, and superficial verbatim representations of information, which support precise analysis. Here, I review the counterintuitive findings that led to the development of the theory and its most recent extensions to the neuroscience of risky decision making. 
These findings include memory interference (worse verbatim memory is associated with better reasoning); nonnumerical framing (framing effects increase when numbers are deleted from decision problems); developmental decreases in gray matter and increases in brain connectivity; developmental reversals in memory, judgment, and decision making (heuristics and biases based on gist increase from childhood to adulthood, challenging conceptions of rationality); and selective attention effects that provide critical tests comparing fuzzy-trace theory, expected utility theory, and its variants (e.g., prospect theory). Surprising implications for judgment and decision making in real life are also discussed, notably, that adaptive decision making relies mainly on gist-based intuition in law, medicine, and public health.", "title": "" }, { "docid": "fb58d6fe77092be4bce5dd0926c563de", "text": "We present the Mind the Gap Model (MGM), an approach for interpretable feature extraction and selection. By placing interpretability criteria directly into the model, we allow for the model to both optimize parameters related to interpretability and to directly report a global set of distinguishable dimensions to assist with further data exploration and hypothesis generation. MGM extracts distinguishing features on real-world datasets of animal features, recipes ingredients, and disease co-occurrence. It also maintains or improves performance when compared to related approaches. We perform a user study with domain experts to show the MGM’s ability to help with dataset exploration.", "title": "" }, { "docid": "6c221c4085c6868640c236b4dd72f777", "text": "Resilience has been most frequently defined as positive adaptation despite adversity. Over the past 40 years, resilience research has gone through several stages. From an initial focus on the invulnerable or invincible child, psychologists began to recognize that much of what seems to promote resilience originates outside of the individual. This led to a search for resilience factors at the individual, family, community - and, most recently, cultural - levels. In addition to the effects that community and culture have on resilience in individuals, there is growing interest in resilience as a feature of entire communities and cultural groups. Contemporary researchers have found that resilience factors vary in different risk contexts and this has contributed to the notion that resilience is a process. In order to characterize the resilience process in a particular context, it is necessary to identify and measure the risk involved and, in this regard, perceived discrimination and historical trauma are part of the context in many Aboriginal communities. Researchers also seek to understand how particular protective factors interact with risk factors and with other protective factors to support relative resistance. For this purpose they have developed resilience models of three main types: \"compensatory,\" \"protective,\" and \"challenge\" models. 
Two additional concepts are resilient reintegration, in which a confrontation with adversity leads individuals to a new level of growth, and the notion endorsed by some Aboriginal educators that resilience is an innate quality that needs only to be properly awakened.The review suggests five areas for future research with an emphasis on youth: 1) studies to improve understanding of what makes some Aboriginal youth respond positively to risk and adversity and others not; 2) case studies providing empirical confirmation of the theory of resilient reintegration among Aboriginal youth; 3) more comparative studies on the role of culture as a resource for resilience; 4) studies to improve understanding of how Aboriginal youth, especially urban youth, who do not live in self-governed communities with strong cultural continuity can be helped to become, or remain, resilient; and 5) greater involvement of Aboriginal researchers who can bring a nonlinear world view to resilience research.", "title": "" }, { "docid": "4c4bfcadd71890ccce9e58d88091f6b3", "text": "With the dramatic growth of the game industry over the past decade, its rapid inclusion in many sectors of today’s society, and the increased complexity of games, game development has reached a point where it is no longer humanly possible to use only manual techniques to create games. Large parts of games need to be designed, built, and tested automatically. In recent years, researchers have delved into artificial intelligence techniques to support, assist, and even drive game development. Such techniques include procedural content generation, automated narration, player modelling and adaptation, and automated game design. This research is still very young, but already the games industry is taking small steps to integrate some of these techniques in their approach to design. The goal of this seminar was to bring together researchers and industry representatives who work at the forefront of artificial intelligence (AI) and computational intelligence (CI) in games, to (1) explore and extend the possibilities of AI-driven game design, (2) to identify the most viable applications of AI-driven game design in the game industry, and (3) to investigate new approaches to AI-driven game design. To this end, the seminar included a wide range of researchers and developers, including specialists in AI/CI for abstract games, commercial video games, and serious games. Thus, it fostered a better understanding of and unified vision on AI-driven game design, using input from both scientists as well as AI specialists from industry. Seminar November 19–24, 2017 – http://www.dagstuhl.de/17471 1998 ACM Subject Classification I.2.1 Artificial Intelligence Games", "title": "" }, { "docid": "da61b8bd6c1951b109399629f47dad16", "text": "In this paper, we introduce an approach for distributed nonlinear control of multiple hovercraft-type underactuated vehicles with bounded and unidirectional inputs. First, a bounded nonlinear controller is given for stabilization and tracking of a single vehicle, using a cascade backstepping method. Then, this controller is combined with a distributed gradient-based control for multi-vehicle formation stabilization using formation potential functions previously constructed. The vehicles are used in the Caltech Multi-Vehicle Wireless Testbed (MVWT). 
We provide simulation and experimental results for stabilization and tracking of a single vehicle, and a simulation of stabilization of a six-vehicle formation, demonstrating that in all cases the control bounds and the control objective are satisfied.", "title": "" }, { "docid": "48b88774957a6d30ae9d0a97b9643647", "text": "The defect detection on manufactures is extremely important in the optimization of industrial processes; particularly, the visual inspection plays a fundamental role. The visual inspection is often carried out by a human expert. However, new technology features have made this inspection unreliable. For this reason, many researchers have been engaged to develop automatic analysis processes of manufactures and automatic optical inspections in the industrial production of printed circuit boards. Among the defects that could arise in this industrial process, those of the solder joints are very important, because they can lead to an incorrect functioning of the board; moreover, the amount of the solder paste can give some information on the quality of the industrial process. In this paper, a neural network-based automatic optical inspection system for the diagnosis of solder joint defects on printed circuit boards assembled in surface mounting technology is presented. The diagnosis is handled as a pattern recognition problem with a neural network approach. Five types of solder joints have been classified in respect to the amount of solder paste in order to perform the diagnosis with a high recognition rate and a detailed classification able to give information on the quality of the manufacturing process. The images of the boards under test are acquired and then preprocessed to extract the region of interest for the diagnosis. Three types of feature vectors are evaluated from each region of interest, which are the images of the solder joints under test, by exploiting the properties of the wavelet transform and the geometrical characteristics of the preprocessed images. The performances of three different classifiers which are a multilayer perceptron, a linear vector quantization, and a K-nearest neighbor classifier are compared. The n-fold cross-validation has been exploited to select the best architecture for the neural classifiers, while a number of experiments have been devoted to estimating the best value of K in the K-NN. The results have proved that the MLP network fed with the GW-features has the best recognition rate. This approach allows to carry out the diagnosis burden on image processing, feature extraction, and classification algorithms, reducing the cost and the complexity of the acquisition system. In fact, the experimental results suggest that the reason for the high recognition rate in the solder joint classification is due to the proper preprocessing steps followed as well as to the information contents of the features", "title": "" }, { "docid": "80a4de6098a4821e52ccc760db2aae18", "text": "This article presents P-Sense, a participatory sensing application for air pollution monitoring and control. The paper describes in detail the system architecture and individual components of a successfully implemented application. In addition, the paper points out several other research-oriented problems that need to be addressed before these applications can be effectively implemented in practice, in a large-scale deployment. Security, privacy, data visualization and validation, and incentives are part of our work-in-progress activities", "title": "" } ]
scidocsrr
419e64d3afee302db4f7fabe52be4e3b
Offline signature verification using classifier combination of HOG and LBP features
[ { "docid": "7489989ecaa16bc699949608f9ffc8a1", "text": "A method for conducting off-line handwritten signature verification is described. It works at the global image level and measures the grey level variations in the image using statistical texture features. The co-occurrence matrix and local binary pattern are analysed and used as features. This method begins with a proposed background removal. A histogram is also processed to reduce the influence of different writing ink pens used by signers. Genuine samples and random forgeries have been used to train an SVM model and random and skilled forgeries have been used for testing it. Results are reasonable according to the state-of-the-art and approaches that use the same two databases: MCYT-75 and GPDS100 Corpuses. The combination of the proposed features and those proposed by other authors, based on geometric information, also promises improvements in performance.", "title": "" } ]
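The texture pipeline outlined in the positive passage above (grey-level co-occurrence statistics plus local binary patterns, fed to an SVM trained on genuine signatures and random forgeries) can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: the LBP radius, GLCM offsets, grey-level quantisation, SVM settings, and the helper names `texture_features` and `train_writer_model` are all assumptions, with scikit-image (0.19+ naming, `graycomatrix`/`graycoprops`) and scikit-learn standing in for whatever toolchain the original work used.

```python
# Minimal sketch of a GLCM + LBP texture pipeline for off-line signature
# verification; parameter choices are illustrative, not the paper's.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(img_gray):
    """img_gray: 2-D uint8 array of a background-removed signature image."""
    # Uniform LBP histogram (radius 1, 8 neighbours -> 10 pattern bins).
    lbp = local_binary_pattern(img_gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)

    # Grey-level co-occurrence statistics on a coarsely quantised image.
    quantised = (img_gray // 32).astype(np.uint8)        # 8 grey levels
    glcm = graycomatrix(quantised, distances=[1, 2],
                        angles=[0, np.pi / 2], levels=8,
                        symmetric=True, normed=True)
    glcm_stats = np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity",
                                      "energy", "correlation")])
    return np.hstack([lbp_hist, glcm_stats])

def train_writer_model(genuine_imgs, random_forgery_imgs):
    """One SVM per writer: genuine (label 1) vs. random forgeries (label 0)."""
    X = np.vstack([texture_features(im)
                   for im in list(genuine_imgs) + list(random_forgery_imgs)])
    y = np.hstack([np.ones(len(genuine_imgs)),
                   np.zeros(len(random_forgery_imgs))])
    return SVC(kernel="rbf", gamma="scale").fit(X, y)
```

As in the passage, skilled forgeries would be held out for testing only, and steps such as background removal and ink-histogram normalisation would precede feature extraction; a questioned signature is then scored by the per-writer SVM's decision value.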
[ { "docid": "7e1f0cd43cdc9685474e19b7fd65791b", "text": "Understanding human actions is a key problem in computer vision. However, recognizing actions is only the first step of understanding what a person is doing. In this paper, we introduce the problem of predicting why a person has performed an action in images. This problem has many applications in human activity understanding, such as anticipating or explaining an action. To study this problem, we introduce a new dataset of people performing actions annotated with likely motivations. However, the information in an image alone may not be sufficient to automatically solve this task. Since humans can rely on their lifetime of experiences to infer motivation, we propose to give computer vision systems access to some of these experiences by using recently developed natural language models to mine knowledge stored in massive amounts of text. While we are still far away from fully understanding motivation, our results suggest that transferring knowledge from language into vision can help machines understand why people in images might be performing an action.", "title": "" }, { "docid": "dc2770a8318dd4aa1142efebe5547039", "text": "The purpose of this study was to describe how reaching onset affects the way infants explore objects and their own bodies. We followed typically developing infants longitudinally from 2 through 5 months of age. At each visit we coded the behaviors infants performed with their hand when an object was attached to it versus when the hand was bare. We found increases in the performance of most exploratory behaviors after the emergence of reaching. These increases occurred both with objects and with bare hands. However, when interacting with objects, infants performed the same behaviors they performed on their bare hands but they performed them more often and in unique combinations. The results support the tenets that: (1) the development of object exploration begins in the first months of life as infants learn to selectively perform exploratory behaviors on their bodies and objects, (2) the onset of reaching is accompanied by significant increases in exploration of both objects and one's own body, (3) infants adapt their self-exploratory behaviors by amplifying their performance and combining them in unique ways to interact with objects.", "title": "" }, { "docid": "f2707d7fcd5d8d9200d4cc8de8ff1042", "text": "This paper describes recent work on the “Crosswatch” project, which is a computer vision-based smartphone system developed for providing guidance to blind and visually impaired travelers at traffic intersections. A key function of Crosswatch is self-localization - the estimation of the user's location relative to the crosswalks in the current traffic intersection. Such information may be vital to users with low or no vision to ensure that they know which crosswalk they are about to enter, and are properly aligned and positioned relative to the crosswalk. However, while computer vision-based methods have been used for finding crosswalks and helping blind travelers align themselves to them, these methods assume that the entire crosswalk pattern can be imaged in a single frame of video, which poses a significant challenge for a user who lacks enough vision to know where to point the camera so as to properly frame the crosswalk. 
In this paper we describe work in progress that tackles the problem of crosswalk detection and self-localization, building on recent work describing techniques enabling blind and visually impaired users to acquire 360° image panoramas while turning in place on a sidewalk. The image panorama is converted to an aerial (overhead) view of the nearby intersection, centered on the location that the user is standing at, so as to facilitate matching with a template of the intersection obtained from Google Maps satellite imagery. The matching process allows crosswalk features to be detected and permits the estimation of the user's precise location relative to the crosswalk of interest. We demonstrate our approach on intersection imagery acquired by blind users, thereby establishing the feasibility of the approach.", "title": "" }, { "docid": "f9876540ce148d7b27bab53839f1bf19", "text": "Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images. In this work, we take one step further to explore the possibility of exploiting a feed-forward network to perform style transfer for videos and simultaneously maintain temporal consistency among stylized video frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames. To calculate the temporal loss during the training stage, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually pleasant. In contrast to the prior video style transfer method which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.", "title": "" }, { "docid": "eb6572344dbaf8e209388f888fba1c10", "text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.", "title": "" }, { "docid": "c39836282acc36e77c95e732f4f1c1bc", "text": "In this paper, a new dataset, HazeRD, is proposed for benchmarking dehazing algorithms under more realistic haze conditions. 
HazeRD contains fifteen real outdoor scenes, for each of which five different weather conditions are simulated. As opposed to prior datasets that made use of synthetically generated images or indoor images with unrealistic parameters for haze simulation, our outdoor dataset allows for more realistic simulation of haze with parameters that are physically realistic and justified by scattering theory. All images are of high resolution, typically six to eight megapixels. We test the performance of several state-of-the-art dehazing techniques on HazeRD. The results exhibit a significant difference among algorithms across the different datasets, reiterating the need for more realistic datasets such as ours and for more careful benchmarking of the methods.", "title": "" }, { "docid": "49680e94843e070a5ed0179798f66f33", "text": "Routing protocols for Wireless Sensor Networks (WSN) are designed to select parent nodes so that data packets can reach their destination in a timely and efficient manner. Typically neighboring nodes with strongest connectivity are more selected as parents. This Greedy Routing approach can lead to unbalanced routing loads in the network. Consequently, the network experiences the early death of overloaded nodes causing permanent network partition. Herein, we propose a framework for load balancing of routing in WSN. In-network path tagging is used to monitor network traffic load of nodes. Based on this, nodes are identified as being relatively overloaded, balanced or underloaded. A mitigation algorithm finds suitable new parents for switching from overloaded nodes. The routing engine of the child of the overloaded node is then instructed to switch parent. A key future of the proposed framework is that it is primarily implemented at the Sink and so requires few changes to existing routing protocols. The framework was implemented in TinyOS on TelosB motes and its performance was assessed in a testbed network and in TOSSIM simulation. The algorithm increased the lifetime of the network by 41 % as recorded in the testbed experiment. The Packet Delivery Ratio was also improved from 85.97 to 99.47 %. Finally a comparative study was performed using the proposed framework with various existing routing protocols.", "title": "" }, { "docid": "c9c44cc22c71d580f4b2a24cd91ac274", "text": "One of the first steps in the utterance interpretation pipeline of many task-oriented conversational AI systems is to identify user intents and the corresponding slots. Neural sequence labeling models have achieved very high accuracy on these tasks when trained on large amounts of training data. However, collecting this data is very time-consuming and therefore it is unfeasible to collect large amounts of data for many languages. For this reason, it is desirable to make use of existing data in a high-resource language to train models in low-resource languages. In this paper, we investigate the performance of three different methods for cross-lingual transfer learning, namely (1) translating the training data, (2) using cross-lingual pre-trained embeddings, and (3) a novel method of using a multilingual machine translation encoder as contextual word representations. We find that given several hundred training examples in the the target language, the latter two methods outperform translating the training data. Further, in very low-resource settings, we find that multilingual contextual word representations give better results than using crosslingual static embeddings. 
We release a dataset of around 57k annotated utterances in English (43k), Spanish (8.6k) and Thai (5k) for three task oriented domains at https://fb.me/multilingual_task_oriented_data.", "title": "" }, { "docid": "1969bf5a07349cc5a9b498e0437e41fe", "text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.", "title": "" }, { "docid": "49d6b3f314b61ace11afc5eea7b652e3", "text": "Euler diagrams visually represent containment, intersection and exclusion using closed curves. They first appeared several hundred years ago, however, there has been a resurgence in Euler diagram research in the twenty-first century. This was initially driven by their use in visual languages, where they can be used to represent logical expressions diagrammatically. This work lead to the requirement to automatically generate Euler diagrams from an abstract description. The ability to generate diagrams has accelerated their use in information visualization, both in the standard case where multiple grouping of data items inside curves is required and in the area-proportional case where the area of curve intersections is important. As a result, examining the usability of Euler diagrams has become an important aspect of this research. Usability has been investigated by empirical studies, but much research has concentrated on wellformedness, which concerns how curves and other features of the diagram interrelate. This work has revealed the drawability of Euler diagrams under various wellformedness properties and has developed embedding methods that meet these properties. Euler diagram research surveyed in this paper includes theoretical results, generation techniques, transformation methods and the development of automated reasoning systems for Euler diagrams. It also overviews application areas and the ways in which Euler diagrams have been extended.", "title": "" }, { "docid": "db1cdc2a4e3fe26146a1f9c8b0926f9e", "text": "Sememes are defined as the minimum semantic units of human languages. People have manually annotated lexical sememes for words and form linguistic knowledge bases. However, manual construction is time-consuming and labor-intensive, with significant annotation inconsistency and noise. In this paper, we for the first time explore to automatically predict lexical sememes based on semantic meanings of words encoded by word embeddings. Moreover, we apply matrix factorization to learn semantic relations between sememes and words. 
In experiments, we take a real-world sememe knowledge base HowNet for training and evaluation, and the results reveal the effectiveness of our method for lexical sememe prediction. Our method will be of great use for annotation verification of existing noisy sememe knowledge bases and annotation suggestion of new words and phrases.", "title": "" }, { "docid": "681641e2593cad85fb1633d1027a9a4f", "text": "Overview Aggressive driving is a major concern of the American public, ranking at or near the top of traffic safety issues in national surveys of motorists. However, the concept of aggressive driving is not well defined, and its overall impact on traffic safety has not been well quantified due to inadequacies and limitation of available data. This paper reviews published scientific literature on aggressive driving; discusses various definitions of aggressive driving; cites several specific behaviors that are typically associated with aggressive driving; and summarizes past research on the individuals or groups most likely to behave aggressively. Since adequate data to precisely quantify the percentage of fatal crashes that involve aggressive driving do not exist, in this review, we have quantified the number of fatal crashes in which one or more driver actions typically associated with aggressive driving were reported. We found these actions were reported in 56 percent of fatal crashes from 2003 through 2007, with excessive speed being the number one factor. Ideally, an estimate of the prevalence of aggressive driving would include only instances in which such actions were performed intentionally; however, available data on motor vehicle crashes do not contain such information, thus it is important to recognize that this 56 percent may to some degree overestimate the contribution of aggressive driving to fatal crashes. On the other hand, it is likely that aggressive driving contributes to at least some crashes in which it is not reported due to lack of evidence. Despite the clear limitations associated with our attempt to estimate the contribution of potentially-aggressive driver actions to fatal crashes, it is clear that aggressive driving poses a serious traffic safety threat. In addition, our review further indicated that the \" Do as I say, not as I do \" culture, previously reported in the Foundation's Traffic Safety Culture Index, very much applies to aggressive driving.", "title": "" }, { "docid": "237437eae6a6154fb3b32c4c6c1fed07", "text": "Ontology is playing an increasingly important role in knowledge management and the Semantic Web. This study presents a novel episode-based ontology construction mechanism to extract domain ontology from unstructured text documents. Additionally, fuzzy numbers for conceptual similarity computing are presented for concept clustering and taxonomic relation definitions. Moreover, concept attributes and operations can be extracted from episodes to construct a domain ontology, while non-taxonomic relations can be generated from episodes. The fuzzy inference mechanism is also applied to obtain new instances for ontology learning. Experimental results show that the proposed approach can effectively construct a Chinese domain ontology from unstructured text documents. 2006 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "462afb864b255f94deefb661174a598b", "text": "Due to the heterogeneous and resource-constrained characters of Internet of Things (IoT), how to guarantee ubiquitous network connectivity is challenging. 
Although LTE cellular technology is the most promising solution to provide network connectivity in IoTs, information diffusion by cellular network not only occupies its saturating bandwidth, but also costs additional fees. Recently, NarrowBand-IoT (NB-IoT), introduced by 3GPP, is designed for low-power massive devices, which intends to refarm wireless spectrum and increase network coverage. For the sake of providing high link connectivity and capacity, we stimulate effective cooperations among user equipments (UEs), and propose a social-aware group formation framework to allocate resource blocks (RBs) effectively following an in-band NB-IoT solution. Specifically, we first introduce a social-aware multihop device-to-device (D2D) communication scheme to upload information toward the eNodeB within an LTE, so that a logical cooperative D2D topology can be established. Then, we formulate the D2D group formation as a scheduling optimization problem for RB allocation, which selects the feasible partition for the UEs by jointly considering relay method selection and spectrum reuse for NB-IoTs. Since the formulated optimization problem has a high computational complexity, we design a novel heuristic with a comprehensive consideration of power control and relay selection. Performance evaluations based on synthetic and real trace simulations manifest that the presented method can significantly increase link connectivity, link capacity, network throughput, and energy efficiency comparing with the existing solutions.", "title": "" }, { "docid": "3440de9ea0f76ba39949edcb5e2a9b54", "text": "This document is not intended to create, does not create, and may not be relied upon to create any rights, substantive or procedural, enforceable by law by any party in any matter civil or criminal. Findings and conclusions of the research reported here are those of the authors and do not necessarily reflect the official position or policies of the U.S. Department of Justice. The products, manufacturers, and organizations discussed in this document are presented for informational purposes only and do not constitute product approval or endorsement by the Much of crime mapping is devoted to detecting high-crime-density areas known as hot spots. Hot spot analysis helps police identify high-crime areas, types of crime being committed, and the best way to respond. This report discusses hot spot analysis techniques and software and identifies when to use each one. The visual display of a crime pattern on a map should be consistent with the type of hot spot and possible police action. For example, when hot spots are at specific addresses, a dot map is more appropriate than an area map, which would be too imprecise. In this report, chapters progress in sophis­ tication. Chapter 1 is for novices to crime mapping. Chapter 2 is more advanced, and chapter 3 is for highly experienced analysts. The report can be used as a com­ panion to another crime mapping report ■ Identifying hot spots requires multiple techniques; no single method is suffi­ cient to analyze all types of crime. ■ Current mapping technologies have sig­ nificantly improved the ability of crime analysts and researchers to understand crime patterns and victimization. 
■ Crime hot spot maps can most effective­ ly guide police action when production of the maps is guided by crime theories (place, victim, street, or neighborhood).", "title": "" }, { "docid": "e4e97569f53ddde763f4f28559c96ba6", "text": "With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.", "title": "" }, { "docid": "5f4e761af11ace5a4d6819431893a605", "text": "The high power density converter is required due to the strict demands of volume and weight in more electric aircraft, which makes SiC extremely attractive for this application. In this work, a prototype of 50 kW SiC high power density converter with the topology of two-level three-phase voltage source inverter is demonstrated. This converter is driven at high switching speed based on the optimization in switching characterization. It operates at a switching frequency up to 100 kHz and a low dead time of 250 ns. And the converter efficiency is measured to be 99% at 40 kHz and 97.8% at 100 kHz.", "title": "" }, { "docid": "6cf4315ecce8a06d9354ca2f2684113c", "text": "BACKGROUND\nNutritional supplementation may be used to treat muscle loss with aging (sarcopenia). However, if physical activity does not increase, the elderly tend to compensate for the increased energy delivered by the supplements with reduced food intake, which results in a calorie substitution rather than supplementation. Thus, an effective supplement should stimulate muscle anabolism more efficiently than food or common protein supplements. We have shown that balanced amino acids stimulate muscle protein anabolism in the elderly, but it is unknown whether all amino acids are necessary to achieve this effect.\n\n\nOBJECTIVE\nWe assessed whether nonessential amino acids are required in a nutritional supplement to stimulate muscle protein anabolism in the elderly.\n\n\nDESIGN\nWe compared the response of muscle protein metabolism to either 18 g essential amino acids (EAA group: n = 6, age 69 +/- 2 y; +/- SD) or 40 g balanced amino acids (18 g essential amino acids + 22 g nonessential amino acids, BAA group; n = 8, age 71 +/- 2 y) given orally in small boluses every 10 min for 3 h to healthy elderly volunteers. Muscle protein metabolism was measured in the basal state and during amino acid administration via L-[ring-(2)H(5)]phenylalanine infusion, femoral arterial and venous catheterization, and muscle biopsies.\n\n\nRESULTS\nPhenylalanine net balance (in nmol x min(-1). 100 mL leg volume(-1)) increased from the basal state (P < 0.01), with no differences between groups (BAA: from -16 +/- 5 to 16 +/- 4; EAA: from -18 +/- 5 to 14 +/- 13) because of an increase (P < 0.01) in muscle protein synthesis and no change in breakdown.\n\n\nCONCLUSION\nEssential amino acids are primarily responsible for the amino acid-induced stimulation of muscle protein anabolism in the elderly.", "title": "" }, { "docid": "09168164e47fd781e4abeca45fb76c35", "text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. 
With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. 3 IMA: Integrated Modular Avionics, see [RTCA11]. 4 SEooC: Safety Element out of Context, see [ISO11].", "title": "" } ]
scidocsrr
c857af66e1ebadea18b3b07de5b0400a
A Parallel Method for Earth Mover's Distance
[ { "docid": "872a79a47e6a4d83e7440ea5e7126dee", "text": "We propose simple and extremely efficient methods for solving the Basis Pursuit problem min{‖u‖_1 : Au = f, u ∈ R^n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈R^n} μ‖u‖_1 + (1/2)‖Au − f^k‖_2^2, for given matrix A and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps, and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A^T can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is solely based on such operations for solving the above unconstrained sub-problem, we were able to solve huge instances of compressed sensing problems quickly on a standard PC.", "title": "" } ]
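The Bregman iteration described in the positive passage above alternates between an ℓ1-regularised least-squares subproblem and an "add back the residual" update of the data vector. The toy sketch below follows that outer loop under stated assumptions: a plain ISTA/soft-thresholding loop stands in for the paper's fast fixed-point continuation solver, and the step size, μ, iteration counts, and problem sizes are illustrative only.

```python
# Toy sketch of Bregman iterative regularisation for Basis Pursuit:
#     min ||u||_1   s.t.   A u = f
# Outer loop:  f_{k+1} = f_k + (f - A u_k), then
#              u_{k+1} = argmin_u  mu*||u||_1 + 0.5*||A u - f_{k+1}||_2^2,
# with the subproblem solved here by plain ISTA (a stand-in for the
# paper's fixed-point continuation solver).
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, mu, n_inner=500):
    """Approximately solve min_u mu*||u||_1 + 0.5*||A u - b||_2^2."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L with L = ||A||_2^2
    u = np.zeros(A.shape[1])
    for _ in range(n_inner):
        u = soft_threshold(u - step * (A.T @ (A @ u - b)), step * mu)
    return u

def bregman_basis_pursuit(A, f, mu=0.1, n_outer=6):
    f_k = np.zeros_like(f)
    u = np.zeros(A.shape[1])
    for _ in range(n_outer):
        f_k = f_k + (f - A @ u)                  # add back the residual
        u = ista(A, f_k, mu)
    return u

# Small demo: recover a sparse vector from Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
u_true = np.zeros(200)
support = rng.choice(200, size=8, replace=False)
u_true[support] = rng.standard_normal(8)
f = A @ u_true
u_hat = bregman_basis_pursuit(A, f)
print("relative error:", np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true))
```

When the subproblems are solved accurately, the outer loop reaches Au = f after finitely many steps, which is consistent with the passage's report that two to six Bregman iterations typically suffice.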
[ { "docid": "ed530d8481bbfd81da4bdf5d611ad4a4", "text": "Traumatic coma was produced in 45 monkeys by accelerating the head without impact in one of three directions. The duration of coma, degree of neurological impairment, and amount of diffuse axonal injury (DAI) in the brain were directly related to the amount of coronal head motion used. Coma of less than 15 minutes (concussion) occurred in 11 of 13 animals subjected to sagittal head motion, in 2 of 6 animals with oblique head motion, and in 2 of 26 animals with full lateral head motion. All 15 concussioned animals had good recovery, and none had DAI. Conversely, coma lasting more than 6 hours occurred in one of the sagittal or oblique injury groups but was present in 20 of the laterally injured animals, all of which were severely disabled afterward. All laterally injured animals had a degree of DAI similar to that found in severe human head injury. Coma lasting 16 minutes to 6 hours occurred in 2 of 13 of the sagittal group, 4 of 6 in the oblique group, and 4 of 26 in the lateral group, these animals had less neurological disability and less DAI than when coma lasted longer than 6 hours. These experimental findings duplicate the spectrum of traumatic coma seen in human beings and include axonal damage identical to that seen in sever head injury in humans. Since the amount of DAI was directly proportional to the severity of injury (duration of coma and quality of outcome), we conclude that axonal damage produced by coronal head acceleration is a major cause of prolonged traumatic coma and its sequelae.", "title": "" }, { "docid": "84af7a01dc5486c800f1cf94832ac5a8", "text": "A technique intended to increase the diversity order of bit-interleaved coded modulations (BICM) over non Gaussian channels is presented. It introduces simple modifications to the mapper and to the corresponding demapper. They consist of a constellation rotation coupled with signal space component interleaving. Iterative processing at the receiver side can provide additional improvement to the BICM performance. This method has been shown to perform well over fading channels with or without erasures. It has been adopted for the 4-, 16-, 64- and 256-QAM constellations considered in the DVB-T2 standard. Resulting gains can vary from 0.2 dB to several dBs depending on the order of the constellation, the coding rate and the channel model.", "title": "" }, { "docid": "9d45323cd4550075d4c2569065ae583c", "text": "Research on Offline Handwritten Signature Verification explored a large variety of handcrafted feature extractors, ranging from graphology, texture descriptors to interest points. In spite of advancements in the last decades, performance of such systems is still far from optimal when we test the systems against skilled forgeries - signature forgeries that target a particular individual. In previous research, we proposed a formulation of the problem to learn features from data (signature images) in a Writer-Independent format, using Deep Convolutional Neural Networks (CNNs), seeking to improve performance on the task. In this research, we push further the performance of such method, exploring a range of architectures, and obtaining a large improvement in state-of-the-art performance on the GPDS dataset, the largest publicly available dataset on the task. In the GPDS-160 dataset, we obtained an Equal Error Rate of 2.74%, compared to 6.97% in the best result published in literature (that used a combination of multiple classifiers). 
We also present a visual analysis of the feature space learned by the model, and an analysis of the errors made by the classifier. Our analysis shows that the model is very effective in separating signatures that have a different global appearance, while being particularly vulnerable to forgeries that very closely resemble genuine signatures, even if their line quality is bad, which is the case of slowly-traced forgeries.", "title": "" }, { "docid": "17ba29c670e744d6e4f9e93ceb109410", "text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-though, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.", "title": "" }, { "docid": "e96c9bdd3f5e9710f7264cbbe02738a7", "text": "25 years ago, Lenstra, Lenstra and Lovász presented their c el brated LLL lattice reduction algorithm. Among the various applicatio ns of the LLL algorithm is a method due to Coppersmith for finding small roots of polyn mial equations. We give a survey of the applications of this root finding metho d t the problem of inverting the RSA function and the factorization problem. A s we will see, most of the results are of a dual nature, they can either be interpret ed as cryptanalytic results or as hardness/security results.", "title": "" }, { "docid": "640f9ca0bec934786b49f7217e65780b", "text": "Social Networking has become today’s lifestyle and anyone can easily receive information about everyone in the world. It is very useful if a personal identity can be obtained from the mobile device and also connected to social networking. Therefore, we proposed a face recognition system on mobile devices by combining cloud computing services. Our system is designed in the form of an application developed on Android mobile devices which utilized the Face.com API as an image data processor for cloud computing services. We also applied the Augmented Reality as an information viewer to the users. The result of testing shows that the system is able to recognize face samples with the average percentage of 85% with the total computation time for the face recognition system reached 7.45 seconds, and the average augmented reality translation time is 1.03 seconds to get someone’s information.", "title": "" }, { "docid": "934bdd758626ec37241cffba8e2cbeb9", "text": "The combination of GPS/INS provides an ideal navigation system of full capability of continuously outputting position, velocity, and attitude of the host platform. However, the accuracy of INS degrades with time when GPS signals are blocked in environments such as tunnels, dense urban canyons and indoors. 
To dampen down the error growth, the INS sensor errors should be properly estimated and compensated before the inertial data are involved in the navigation computation. Therefore appropriate modelling of the INS sensor errors is a necessity. Allan Variance (AV) is a simple and efficient method for verifying and modelling these errors by representing the root mean square (RMS) random drift error as a function of averaging time. The AV can be used to determine the characteristics of different random processes. This paper applies the AV to analyse and model different types of random errors residing in the measurements of MEMS inertial sensors. The derived error model will be further applied to a low-cost GPS/MEMS-INS system once the correctness of the model is verified. The paper gives the detail of the AV analysis as well as presents the test results.", "title": "" }, { "docid": "f670bd1ad43f256d5f02039ab200e1e8", "text": "This article addresses the performance of distributed database systems. Specifically, we present an algorithm for dynamic replication of an object in distributed systems. The algorithm is adaptive in the sence that it changes the replication scheme of the object i.e., the set of processors at which the object inreplicated) as changes occur in the read-write patern of the object (i.e., the number of reads and writes issued by each processor). The algorithm continuously moves the replication scheme towards an optimal one. We show that the algorithm can be combined with the concurrency control and recovery mechanisms of ta distributed database management system. The performance of the algorithm is analyzed theoretically and experimentally. On the way we provide a lower bound on the performance of any dynamic replication algorith.", "title": "" }, { "docid": "45b90a55678a022f6c3f128d0dc7d1bf", "text": "Finding community structures in online social networks is an important methodology for understanding the internal organization of users and actions. Most previous studies have focused on structural properties to detect communities. They do not analyze the information gathered from the posting activities of members of social networks, nor do they consider overlapping communities. To tackle these two drawbacks, a new overlapping community detection method involving social activities and semantic analysis is proposed. This work applies a fuzzy membership to detect overlapping communities with different extent and run semantic analysis to include information contained in posts. The available resource description format contributes to research in social networks. Based on this new understanding of social networks, this approach can be adopted for large online social networks and for social portals, such as forums, that are not based on network topology. The efficiency and feasibility of this method is verified by the available experimental analysis. The results obtained by the tests on real networks indicate that the proposed approach can be effective in discovering labelled and overlapping communities with a high amount of modularity. This approach is fast enough to process very large and dense social networks. 6", "title": "" }, { "docid": "b7521521277f944a9532dc4435a2bda7", "text": "The NDN project investigates Jacobson's proposed evolution from today's host-centric network architecture (IP) to a data-centric network architecture (NDN). This conceptually simple shift has far-reaching implications in how we design, develop, deploy and use networks and applications. 
The NDN design and development has attracted significant attention from the networking community. To facilitate broader participation in addressing NDN research and development challenges, this tutorial will describe the vision of this new architecture and its basic components and operations.", "title": "" }, { "docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "e7686824a9449bf793554fcf78b66c0e", "text": "In this paper, tension propagation analysis of a newly designed multi-DOF robotic platform for single-port access surgery (SPS) is presented. The analysis is based on instantaneous kinematics of the proposed 6-DOF surgical instrument, and provides the decision criteria for estimating the payload of a surgical instrument according to its pose changes and specifications of a driving-wire. Also, the wire-tension and the number of reduction ratio to manage such a payload can be estimated, quantitatively. The analysis begins with derivation of the power transmission efficiency through wire-interfaces from each instrument joint to an actuator. Based on the energy conservation law and the capstan equation, we modeled the degradation of power transmission efficiency due to 1) the reducer called wire-reduction mechanism, 2) bending of proximal instrument joints, and 3) bending of hyper-redundant guide tube. Based on the analysis, the tension of driving-wires was computed according to various manipulation poses and loading conditions. In our experiment, a newly designed surgical instrument successfully managed the external load of 1kgf, which was applied to the end effector of a surgical manipulator.", "title": "" }, { "docid": "c78ebe9d42163142379557068b652a9c", "text": "A tumor is a mass of tissue that's formed by an accumulation of abnormal cells. Normally, the cells in your body age, die, and are replaced by new cells. With cancer and other tumors, something disrupts this cycle. Tumor cells grow, even though the body does not need them, and unlike normal old cells, they don't die. As this process goes on, the tumor continues to grow as more and more cells are added to the mass. Image processing is an active research area in which medical image processing is a highly challenging field. Brain tumor analysis is done by doctors but its grading gives different conclusions which may vary from one doctor to another. In this project, it provides a foundation of segmentation and edge detection, as the first step towards brain tumor grading. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. There are dissimilar types of algorithm were developed for brain tumor detection. Comparing to the other algorithms the performance of fuzzy c-means plays a major role. The patient's stage is determined by this process, whether it can be cured with medicine or not. 
Also we study difficulty to detect Mild traumatic brain injury (mTBI) the current tools are qualitative, which can lead to poor diagnosis and treatment and to overcome these difficulties, an algorithm is proposed that takes advantage of subject information and texture information from MR images. A contextual model is developed to simulate the progression of the disease using multiple inputs, such as the time post injury and the location of injury. Textural features are used along with feature selection for a single MR modality.", "title": "" }, { "docid": "9530749d15f1f3493f920b84e6e8cebd", "text": "The view that humans comprise only two types of beings, women and men, a framework that is sometimes referred to as the \"gender binary,\" played a profound role in shaping the history of psychological science. In recent years, serious challenges to the gender binary have arisen from both academic research and social activism. This review describes 5 sets of empirical findings, spanning multiple disciplines, that fundamentally undermine the gender binary. These sources of evidence include neuroscience findings that refute sexual dimorphism of the human brain; behavioral neuroendocrinology findings that challenge the notion of genetically fixed, nonoverlapping, sexually dimorphic hormonal systems; psychological findings that highlight the similarities between men and women; psychological research on transgender and nonbinary individuals' identities and experiences; and developmental research suggesting that the tendency to view gender/sex as a meaningful, binary category is culturally determined and malleable. Costs associated with reliance on the gender binary and recommendations for future research, as well as clinical practice, are outlined. (PsycINFO Database Record", "title": "" }, { "docid": "8c679f94e31dc89787ccff8e79e624b5", "text": "This paper presents a radar sensor package specifically developed for wide-coverage sounding and imaging of polar ice sheets from a variety of aircraft. Our instruments address the need for a reliable remote sensing solution well-suited for extensive surveys at low and high altitudes and capable of making measurements with fine spatial and temporal resolution. The sensor package that we are presenting consists of four primary instruments and ancillary systems with all the associated antennas integrated into the aircraft to maintain aerodynamic performance. The instruments operate simultaneously over different frequency bands within the 160 MHz-18 GHz range. The sensor package has allowed us to sound the most challenging areas of the polar ice sheets, ice sheet margins, and outlet glaciers; to map near-surface internal layers with fine resolution; and to detect the snow-air and snow-ice interfaces of snow cover over sea ice to generate estimates of snow thickness. In this paper, we provide a succinct description of each radar and associated antenna structures and present sample results to document their performance. We also give a brief overview of our field measurement programs and demonstrate the unique capability of the sensor package to perform multifrequency coincidental measurements from a single airborne platform. Finally, we illustrate the relevance of using multispectral radar data as a tool to characterize the entire ice column and to reveal important subglacial features.", "title": "" }, { "docid": "99cb4f69fb7b6ff16c9bffacd7a42f4d", "text": "Single cell segmentation is critical and challenging in live cell imaging data analysis. 
Traditional image processing methods and tools require time-consuming and labor-intensive efforts of manually fine-tuning parameters. Slight variations of image setting may lead to poor segmentation results. Recent development of deep convolutional neural networks(CNN) provides a potentially efficient, general and robust method for segmentation. Most existing CNN-based methods treat segmentation as a pixel-wise classification problem. However, three unique problems of cell images adversely affect segmentation accuracy: lack of established training dataset, few pixels on cell boundaries, and ubiquitous blurry features. The problem becomes especially severe with densely packed cells, where a pixel-wise classification method tends to identify two neighboring cells with blurry shared boundary as one cell, leading to poor cell count accuracy and affecting subsequent analysis. Here we developed a different learning strategy that combines strengths of CNN and watershed algorithm. The method first trains a CNN to learn Euclidean distance transform of binary masks corresponding to the input images. Then another CNN is trained to detect individual cells in the Euclidean distance transform. In the third step, the watershed algorithm takes the outputs from the previous steps as inputs and performs the segmentation. We tested the combined method and various forms of the pixel-wise classification algorithm on segmenting fluorescence and transmitted light images. The new method achieves similar pixel accuracy but significant higher cell count accuracy than pixel-wise classification methods do, and the advantage is most obvious when applying on noisy images of densely packed cells.", "title": "" }, { "docid": "ef9650746ac9ab803b2a3bbdd5493fee", "text": "This paper addresses the problem of establishing correspondences between two sets of visual features using higher order constraints instead of the unary or pairwise ones used in classical methods. Concretely, the corresponding hypergraph matching problem is formulated as the maximization of a multilinear objective function over all permutations of the features. This function is defined by a tensor representing the affinity between feature tuples. It is maximized using a generalization of spectral techniques where a relaxed problem is first solved by a multidimensional power method and the solution is then projected onto the closest assignment matrix. The proposed approach has been implemented, and it is compared to state-of-the-art algorithms on both synthetic and real data.", "title": "" }, { "docid": "ab572c22a75656c19e50b311eb4985ec", "text": "With the increasingly complex electromagnetic environment of communication, as well as the gradually increased radar signal types, how to effectively identify the types of radar signals at low SNR becomes a hot topic. A radar signal recognition algorithm based on entropy features, which describes the distribution characteristics for different types of radar signals by extracting Shannon entropy, Singular spectrum Shannon entropy and Singular spectrum index entropy features, was proposed to achieve the purpose of signal identification. 
Simulation results show that, the algorithm based on entropies has good anti-noise performance, and it can still describe the characteristics of signals well even at low SNR, which can achieve the purpose of identification and classification for different radar signals.", "title": "" }, { "docid": "1de46f2eee8db2fad444faa6fbba4d1c", "text": "Hyunsook Yoon Dongguk University, Korea This paper reports on a qualitative study that investigated the changes in students’ writing process associated with corpus use over an extended period of time. The primary purpose of this study was to examine how corpus technology affects students’ development of competence as second language (L2) writers. The research was mainly based on case studies with six L2 writers in an English for Academic Purposes writing course. The findings revealed that corpus use not only had an immediate effect by helping the students solve immediate writing/language problems, but also promoted their perceptions of lexicogrammar and language awareness. Once the corpus approach was introduced to the writing process, the students assumed more responsibility for their writing and became more independent writers, and their confidence in writing increased. This study identified a wide variety of individual experiences and learning contexts that were involved in deciding the levels of the students’ willingness and success in using corpora. This paper also discusses the distinctive contributions of general corpora to English for Academic Purposes and the importance of lexical and grammatical aspects in L2 writing pedagogy.", "title": "" }, { "docid": "cb2f5ac9292df37860b02313293d2f04", "text": "How can web services that depend on user generated content discern fake social engagement activities by spammers from legitimate ones? In this paper, we focus on the social site of YouTube and the problem of identifying bad actors posting inorganic contents and inflating the count of social engagement metrics. We propose an effective method, Leas (Local Expansion at Scale), and show how the fake engagement activities on YouTube can be tracked over time by analyzing the temporal graph based on the engagement behavior pattern between users and YouTube videos. With the domain knowledge of spammer seeds, we formulate and tackle the problem in a semi-supervised manner — with the objective of searching for individuals that have similar pattern of behavior as the known seeds — based on a graph diffusion process via local spectral subspace. We offer a fast, scalable MapReduce deployment adapted from the localized spectral clustering algorithm. We demonstrate the effectiveness of our deployment at Google by achieving a manual review accuracy of 98% on YouTube Comments graph in practice. Comparing with the state-of-the-art algorithm CopyCatch, Leas achieves 10 times faster running time on average. Leas is now actively in use at Google, searching for daily deceptive practices on YouTube’s engagement graph spanning over a", "title": "" } ]
scidocsrr
5e7c2be0d66e726a1d4bd7d249df0187
Psychopathic Personality: Bridging the Gap Between Scientific Evidence and Public Policy.
[ { "docid": "32b5458ced294a01654f3747273db08d", "text": "Prior studies of childhood aggression have demonstrated that, as a group, boys are more aggressive than girls. We hypothesized that this finding reflects a lack of research on forms of aggression that are relevant to young females rather than an actual gender difference in levels of overall aggressiveness. In the present study, a form of aggression hypothesized to be typical of girls, relational aggression, was assessed with a peer nomination instrument for a sample of 491 third-through sixth-grade children. Overt aggression (i.e., physical and verbal aggression as assessed in past research) and social-psychological adjustment were also assessed. Results provide evidence for the validity and distinctiveness of relational aggression. Further, they indicated that, as predicted, girls were significantly more relationally aggressive than were boys. Results also indicated that relationally aggressive children may be at risk for serious adjustment difficulties (e.g., they were significantly more rejected and reported significantly higher levels of loneliness, depression, and isolation relative to their nonrelationally aggressive peers).", "title": "" } ]
[ { "docid": "d364aaa161cc92e28697988012c35c2a", "text": "Many people believe that information that is stored in long-term memory is permanent, citing examples of \"retrieval techniques\" that are alleged to uncover previously forgotten information. Such techniques include hypnosis, psychoanalytic procedures, methods for eliciting spontaneous and other conscious recoveries, and—perhaps most important—the electrical stimulation of the brain reported by Wilder Penfield and his associates. In this article we first evaluate • the evidence and conclude that, contrary to apparent popular belief, the evidence in no way confirms the view that all memories are permanent and thus potentially recoverable. We then describe some failures that resulted from attempts to elicit retrieval of previously stored information and conjecture what circumstances might cause information stored in memory to be irrevocably destroyed. Few would deny the existence of a phenomenon called \"forgetting,\" which is evident in the common observation that information becomes less available as the interval increases between the time of the information's initial acquisition and the time of its attempted retrieval. Despite the prevalence of the phenomenon, the factors that underlie forgetting have proved to be rather elusive, and the literature abounds with hypothesized mechanisms to account for the observed data. In this article we shall focus our attention on what is perhaps the fundamental issue concerning forgetting; Does forgetting consist of an actual loss of stored information, or does it result from a loss of access to information, which, once stored, remains forever? It should be noted at the outset that this question may be impossible to resolve in an absolute sense. Consider the following thought experiment. A person (call him Geoffrey) observes some event, say a traffic accident. During the period of observation, a movie camera strapped to Geoffrey's head records the event as Geoffrey experiences it. Some time later, Geoffrey attempts to recall and Vol. 35, No. S, 409-420 describe the event with the aid of some retrieval technique (e.g., hypnosis or brain stimulation), which is alleged to allow recovery of any information stored in his brain. While Geoffrey describes the event, a second person (Elizabeth) watches the movie that has been made of the event. Suppose, now, that Elizabeth is unable to decide whether Geoffrey is describing his memory or the movie—in other words, memory and movie are indistinguishable. Such a finding would constitute rather impressive support for the position held by many people that the mind registers an accurate representation of reality and that this information is stored permanently. But suppose, on the other hand, that Geoffrey's report—even with the aid of the miraculous retrieval technique—is incomplete, sketchy, and inaccurate, and furthermore, suppose that the accuracy of his report deteriorates over time. Such a finding, though consistent with the view that forgetting consists of information loss, would still be inconclusive, because it could be argued that the retrieval technique—no matter what it was— was simply not good enough to disgorge the information, which remained buried somewhere in the recesses of Geoffrey's brain. Thus, the question of information loss versus This article was written while E. Loftus was a fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford, California, and G. 
Loftus was a visiting scholar in the Department of Psychology at Stanford University. James Fries generously picked apart an earlier version of this article. Paul Baltes translated the writings of Johann Nicolas Tetens (177?). The following financial sources are gratefully acknowledged: (a) National Science Foundation (NSF) Grant BNS 76-2337 to G. Loftus; (b) 'NSF Grant ENS 7726856 to E. Loftus; and (c) NSF Grant BNS 76-22943 and an Andrew Mellon Foundation grant to the Center for Advanced Study in the Behavioral Sciences. Requests for reprints should be sent to Elizabeth Loftus, Department of Psychology, University of Washington, Seattle, Washington 98195. AMERICAN PSYCHOLOGIST • MAY 1980 * 409 Copyright 1980 by the American Psychological Association, Inc. 0003-066X/80/3505-0409$00.75 retrieval failure may be unanswerable in principle. Nonetheless it often becomes necessary to choose sides. In the scientific arena, for example, a theorist constructing a model of memory may— depending on the details of the model'—be forced to adopt one position or the other. In fact, several leading theorists have suggested that although loss from short-term memory does occur, once material is registered in long-term memory, the information is never lost from the system, although it may normally be inaccessible (Shiffrin & Atkinson, 1969; Tulving, 1974). The idea is not new, however. Two hundred years earlier, the German philosopher Johann Nicolas Tetens (1777) wrote: \"Each idea does not only leave a trace or a consequent of that trace somewhere in the body, but each of them can be stimulated—-even if it is not possible to demonstrate this in a given situation\" (p, 7S1). He was explicit about his belief that certain ideas may seem to be forgotten, but that actually they are only enveloped by other ideas and, in truth, are \"always with us\" (p, 733). Apart from theoretical interest, the position one takes on the permanence of memory traces has important practical consequences. It therefore makes sense to air the issue from time to time, which is what we shall do here, The purpose of this paper is threefold. We shall first report some data bearing on people's beliefs about the question of information loss versus retrieval failure. To anticipate our findings, our survey revealed that a substantial number of the individuals queried take the position that stored information is permanent'—-or in other words, that all forgetting results from retrieval failure. In support of their answers, people typically cited data from some variant of the thought experiment described above, that is, they described currently available retrieval techniques that are alleged to uncover previously forgotten information. Such techniques include hypnosis, psychoanalytic procedures (e.g., free association), and— most important—the electrical stimulation of the brain reported by Wilder Penfield and his associates (Penfield, 1969; Penfield & Perot, 1963; Penfield & Roberts, 1959). The results of our survey lead to the second purpose of this paper, which is to evaluate this evidence. Finally, we shall describe some interesting failures that have resulted from attempts to elicit retrieval of previously stored information. These failures lend support to the contrary view that some memories are apparently modifiable, and that consequently they are probably unrecoverable. Beliefs About Memory In an informal survey, 169 individuals from various parts of the U.S. were asked to give their views about how memory works. 
Of these, 75 had formal graduate training in psychology, while the remaining 94 did not. The nonpsychologists had varied occupations. For example, lawyers, secretaries, taxicab drivers, physicians, philosophers, fire investigators, and even an 11-year-old child participated. They were given this question: Which of these statements best reflects your view on how human memory works? 1. Everything we learn is permanently stored in the mind, although sometimes particular details are not accessible. With hypnosis, or other special techniques, these inaccessible details could eventually be recovered. 2. Some details that we learn may be permanently lost from memory. Such details would never be able to be recovered by hypnosis, or any other special technique, because these details are simply no longer there. Please elaborate briefly or give any reasons you may have for your view. We found that 84% of the psychologists chose Position 1, that is, they indicated a belief that all information in long-term memory is there, even though much of it cannot be retrieved; 14% chose Position 2, and 2% gave some other answer. A somewhat smaller percentage, 69%, of the nonpsychologists indicated a belief in Position 1; 23% chose Position 2, while 8% did not make a clear choice. What reasons did people give for their belief? The most common reason for choosing Position 1 was based on personal experience and involved the occasional recovery of an idea that the person had not thought about for quite some time. For example, one person wrote: \"I've experienced and heard too many descriptions of spontaneous recoveries of ostensibly quite trivial memories, which seem to have been triggered by just the right set of a person's experiences.\" A second reason for a belief in Position 1, commonly given by persons trained in psychology, was knowledge of the work of Wilder Penfield. One psychologist wrote: \"Even though Statement 1 is untestable, I think that evidence, weak though it is, such as Penfield's work, strongly suggests it may be correct.\" Occasionally respondents offered a comment about hypnosis, and more rarely about psychoanalysis and repression, sodium pentothal, or even reincarnation, to support their belief in the permanence of memory. Admittedly, the survey was informally conducted, the respondents were not selected randomly, and the question itself may have pressured people to take sides when their true belief may have been a position in between. Nevertheless, the results suggest a widespread belief in the permanence of memories and give us some idea of the reasons people offer in support of this belief.", "title": "" }, { "docid": "702df543119d648be859233bfa2b5d03", "text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hopfield neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension specifies the type of task performed by the algorithm: preprocessing, data reduction/feature extraction, segmentation, object recognition, image understanding and optimisation.
The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses specific constraints to a neural-based approach. These specific conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and specifically to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. © 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "ca807d3bed994a8e7492898e6bfe6dd2", "text": "This paper proposes a state-of-charge (SOC) and remaining charge estimation algorithm for each cell in series-connected lithium-ion batteries. SOC and remaining charge information are indicators for diagnosing cell-to-cell variation; thus, the proposed algorithm can be applied to SOC- or charge-based balancing in a cell balancing controller. Compared to voltage-based balancing, SOC and remaining charge information improve the performance of the balancing circuit but increase computational complexity, which is a stumbling block in implementation. In this work, a simple current sensor-less SOC estimation algorithm with an estimated current equalizer is used to achieve the aforementioned objective. To check the characteristics and validate the feasibility of the proposed method, a constant current discharging/charging profile is applied to a series-connected battery pack (twelve 2.6Ah Li-ion batteries). The experimental results show its applicability to SOC- and remaining charge-based balancing controllers with high estimation accuracy.", "title": "" }, { "docid": "1bf43801d05551f376464d08893b211c", "text": "A large amount of digital text information is generated every day. Effectively searching, managing and exploring the text data has become a main task. In this paper, we first present an introduction to text mining and the probabilistic topic model Latent Dirichlet Allocation. Then two experiments are proposed: topic modelling of Wikipedia articles and of users' tweets. The former builds up a document topic model, aiming at a topic-perspective solution for searching, exploring and recommending articles. The latter sets up a user topic model, providing a full research and analysis of Twitter users' interests. The experiment process, including data collecting, data pre-processing and model training, is fully documented and commented. Furthermore, the conclusion and application of this paper could be a useful computational tool for social and business research.", "title": "" }, { "docid": "e85e8b54351247d5f20bf1756a133a08", "text": "In a high-speed ADC, the comparator directly influences the overall performance of the ADC. This paper describes a very high speed and high resolution preamplifier comparator. The comparator uses a self-biased differential amplifier to increase the output current sinking and sourcing capability. The threshold and width of the new comparator can be reduced to the millivolt (mV) range, and the resolution and the dynamic characteristics are good. Based on a UMC 0.18um CMOS process model, simulated results show the comparator can work under a 25dB gain, 55MHz speed and 210.10μW power.
", "title": "" }, { "docid": "7e38ba11e394acd7d5f62d6a11253075", "text": "The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory-motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole-body, and a postural task, namely maintaining overall stability.", "title": "" }, { "docid": "b5cc41f689a1792b544ac66a82152993", "text": "Nowadays, Pneumatic Artificial Muscle (PAM) has become one of the most widely-used fluid-power actuators which yields remarkable muscle-like properties such as high force to weight ratio, soft and flexible structure, minimal compressed-air consumption and low cost. To obtain optimum design and usage, it is necessary to understand mechanical behaviors of the PAM. In this study, the proposed models are experimentally derived to describe mechanical behaviors of the PAMs. The experimental results show a non-linear relationship between contraction as well as air pressure within the PAMs and a pulling force of the PAMs. Three different sizes of PAMs available in industry are studied for empirical modeling and simulation. The case studies are presented to verify the close agreement of the simulated results with the experimental results when the PAMs perform under various loads. © 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "174fb8b7cb0f45bed49a50ce5ad19c88", "text": "De-noising and extraction of the weak signature are crucial to fault prognostics in which case features are often very weak and masked by noise. The wavelet transform has been widely used in signal de-noising due to its extraordinary time-frequency representation capability. In this paper, the performance of wavelet decomposition-based de-noising and wavelet filter-based de-noising methods are compared based on signals from mechanical defects. The comparison result reveals that wavelet filter is more suitable and reliable to detect a weak signature of mechanical impulse-like defect signals, whereas the wavelet decomposition de-noising method can achieve satisfactory results on smooth signal detection. In order to select optimal parameters for the wavelet filter, a two-step optimization process is proposed.
Minimal Shannon entropy is used to optimize the Morlet wavelet shape factor. A periodicity detection method based on singular value decomposition (SVD) is used to choose the appropriate scale for the wavelet transform. The signal de-noising results from both simulated signals and experimental data are presented and both support the proposed method. © 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "63f20dd528d54066ed0f189e4c435fe7", "text": "In many specific laboratories the students use only PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts in the laboratory work. The hardware part of the solution consists of an old plotter, an adapter board, a PLC and an HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be built very easily and can be used in university labs by students, so that they design and test their applications, from low to high complexity [1], [2].", "title": "" }, { "docid": "363a465d626fec38555563722ae92bb1", "text": "A novel reverse-conducting insulated-gate bipolar transistor (RC-IGBT) featuring an oxide trench placed between the n-collector and the p-collector and a floating p-region (p-float) sandwiched between the n-drift and n-collector is proposed. First, the new structure introduces a high-resistance collector short resistor at low current density, which leads to the suppression of the snapback effect. Second, the collector short resistance can be adjusted by varying the p-float length without increasing the collector cell length. Third, the p-float layer also acts as the base of the n-collector/p-float/n-drift transistor which can be activated and offers a low-resistance current path at high current densities, which contributes to the low on-state voltage of the integrated freewheeling diode and the fast turnoff. As simulations show, the proposed RC-IGBT shows snapback-free output characteristics and faster turnoff compared with the conventional RC-IGBT.", "title": "" }, { "docid": "3dfb419706ae85d232753a085dc145f7", "text": "This chapter describes the different steps of designing, building, simulating, and testing an intelligent flight control module for an increasingly popular unmanned aerial vehicle (UAV), known as a quadrotor. It presents an in-depth view of the modeling of the kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a challenging control problem due to its highly unstable nature. An effective control methodology is therefore needed for such a unique airborne vehicle. The chapter starts with a brief overview on the quadrotor's background and its applications, in light of its advantages. Comparisons with other UAVs are made to emphasize the versatile capabilities of this special design. For a better understanding of the vehicle's behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the equations of motion, which are used later as a guideline for developing the proposed intelligent flight control scheme. In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor.
It has been witnessed that fuzzy logic control offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and modeling uncertainties. Two types of fuzzy inference engines are employed in the design of the flight controller, each of which is explained and evaluated. For testing the designed intelligent flight controller, a simulation environment was first developed. The simulations were made as realistic as possible by incorporating environmental disturbances such as wind gust and the ever-present sensor noise. The proposed controller was then tested on a real test-bed built specifically for this project. Both the simulator and the real quadrotor were later used for conducting different attitude stabilization experiments to evaluate the performance of the proposed control strategy. The controller's performance was also benchmarked against conventional control techniques such as input-output linearization, backstepping and sliding mode control strategies. Conclusions were then drawn based on the conducted experiments and their results.", "title": "" }, { "docid": "50906e5d648b7598c307b09975daf2d8", "text": "Digitization forces industries to adapt to changing market conditions and consumer behavior. Exponential advances in technology, increased consumer power and sharpened competition imply that companies are facing the menace of commoditization. To sustainably succeed in the market, obsolete business models have to be adapted and new business models can be developed. Differentiation and unique selling propositions through innovation as well as holistic stakeholder engagement help companies to master the transformation. To enable companies and start-ups facing the implications of digital change, a tool was created and designed specifically for this demand: the Business Model Builder. This paper investigates the process of transforming the Business Model Builder into a software-supported digitized version. The digital twin allows companies to simulate the iterative adjustment of business models to constantly changing market conditions as well as customer needs on an ongoing basis. The user can modify individual variables, understand interdependencies and see the impact on the result of the business case, i.e. earnings before interest and taxes (EBIT) or economic value added (EVA). The simulation of a business models accordingly provides the opportunity to generate a dynamic view of the business model where any changes of input variables are considered in the result, the business case. Thus, functionality, feasibility and profitability of a business model can be reviewed, tested and validated in the digital simulation tool.", "title": "" }, { "docid": "48eacd86c14439454525e5a570db083d", "text": "RATIONALE, AIMS AND OBJECTIVES\nTotal quality in coagulation testing is a necessary requisite to achieve clinically reliable results. Evidence was provided that poor standardization in the extra-analytical phases of the testing process has the greatest influence on test results, though little information is available so far on prevalence and type of pre-analytical variability in coagulation testing.\n\n\nMETHODS\nThe present study was designed to describe all pre-analytical problems on inpatients routine and stat samples recorded in our coagulation laboratory over a 2-year period and clustered according to their source (hospital departments).\n\n\nRESULTS\nOverall, pre-analytic problems were identified in 5.5% of the specimens. 
Although the highest frequency was observed for paediatric departments, in no case was the comparison of the prevalence among the different hospital departments statistically significant. The more frequent problems could be referred to samples not received in the laboratory following a doctor's order (49.3%), haemolysis (19.5%), clotting (14.2%) and inappropriate volume (13.7%). Specimens not received prevailed in the intensive care unit, surgical and clinical departments, whereas clotted and haemolysed specimens were those most frequently recorded from paediatric and emergency departments, respectively. The present investigation demonstrates a high prevalence of pre-analytical problems affecting samples for coagulation testing.\n\n\nCONCLUSIONS\nFull implementation of a total quality system, encompassing a systematic error tracking system, is a valuable tool to achieve meaningful information on the local pre-analytic processes most susceptible to errors, enabling considerations on specific responsibilities and providing the ideal basis for an efficient feedback within the hospital departments.", "title": "" }, { "docid": "3f6cbad208a819fc8fc6a46208197d59", "text": "The use of visemes as atomic speech units in visual speech analysis and synthesis systems is well-established. Viseme labels are determined using a many-to-one phoneme-to-viseme mapping. However, due to the visual coarticulation effects, an accurate mapping from phonemes to visemes should define a many-to-many mapping scheme. In this research it was found that neither the use of standardized nor speaker-dependent many-to-one viseme labels could satisfy the quality requirements of concatenative visual speech synthesis. Therefore, a novel technique to define a many-to-many phoneme-to-viseme mapping scheme is introduced, which makes use of both treebased and k-means clustering approaches. We show that these many-to-many viseme labels more accurately describe the visual speech information as compared to both phoneme-based and many-toone viseme-based speech labels. In addition, we found that the use of these many-to-many visemes improves the precision of the segment selection phase in concatenative visual speech synthesis using limited speech databases. Furthermore, the resulting synthetic visual speech was both objectively and subjectively found to be of higher quality when the many-to-many visemes are used to describe the speech database as well as the synthesis targets.", "title": "" }, { "docid": "1afdefb31d7b780bb78b59ca8b0d3d8a", "text": "Convolutional Neural Network (CNN) is a very powerful approach to extract discriminative local descriptors for effective image search. Recent work adopts fine-tuned strategies to further improve the discriminative power of the descriptors. Taking a different approach, in this paper, we propose a novel framework to achieve competitive retrieval performance. Firstly, we propose various masking schemes, namely SIFT-mask, SUM-mask, and MAX-mask, to select a representative subset of local convolutional features and remove a large number of redundant features. We demonstrate that this can effectively address the burstiness issue and improve retrieval accuracy. Secondly, we propose to employ recent embedding and aggregating methods to further enhance feature discriminability. 
Extensive experiments demonstrate that our proposed framework achieves state-of-the-art retrieval accuracy.", "title": "" }, { "docid": "07348109c7838032850c039f9a463943", "text": "Ceramics are widely used biomaterials in prosthetic dentistry due to their attractive clinical properties. They are aesthetically pleasing with their color, shade and luster, and they are chemically stable. The main constituents of dental ceramic are Si-based inorganic materials, such as feldspar, quartz, and silica. Traditional feldspar-based ceramics are also referred to as “Porcelain”. The crucial difference between a regular ceramic and a dental ceramic is the proportion of feldspar, quartz, and silica contained in the ceramic. A dental ceramic is a multiphase system, i.e. it contains a dispersed crystalline phase surrounded by a continuous amorphous phase (a glassy phase). Modern dental ceramics contain a higher proportion of the crystalline phase that significantly improves the biomechanical properties of ceramics. Examples of these high crystalline ceramics include lithium disilicate and zirconia.", "title": "" }, { "docid": "affa48f455d5949564302b4c23324458", "text": "MicroRNAs (miRNAs) have within the past decade emerged as key regulators of metabolic homoeostasis. Major tissues in intermediary metabolism important during development of the metabolic syndrome, such as β-cells, liver, skeletal and heart muscle as well as adipose tissue, have all been shown to be affected by miRNAs. In the pancreatic β-cell, a number of miRNAs are important in maintaining the balance between differentiation and proliferation (miR-200 and miR-29 families) and insulin exocytosis in the differentiated state is controlled by miR-7, miR-375 and miR-335. MiR-33a and MiR-33b play crucial roles in cholesterol and lipid metabolism, whereas miR-103 and miR-107 regulates hepatic insulin sensitivity. In muscle tissue, a defined number of miRNAs (miR-1, miR-133, miR-206) control myofibre type switch and induce myogenic differentiation programmes. Similarly, in adipose tissue, a defined number of miRNAs control white to brown adipocyte conversion or differentiation (miR-365, miR-133, miR-455). The discovery of circulating miRNAs in exosomes emphasizes their importance as both endocrine signalling molecules and potentially disease markers. Their dysregulation in metabolic diseases, such as obesity, type 2 diabetes and atherosclerosis stresses their potential as therapeutic targets. This review emphasizes current ideas and controversies within miRNA research in metabolism.", "title": "" }, { "docid": "2795c78d2e81a064173f49887c9b1bb1", "text": "This paper reports a continuously tunable lumped bandpass filter implemented in a third-order coupled resonator configuration. The filter is fabricated on a Borosilicate glass substrate using a surface micromachining technology that offers hightunable passive components. Continuous electrostatic tuning is achieved using three tunable capacitor banks, each consisting of one continuously tunable capacitor and three switched capacitors with pull-in voltage of less than 40 V. The center frequency of the filter is tuned from 1 GHz down to 600 MHz while maintaining a 3-dB bandwidth of 13%-14% and insertion loss of less than 4 dB. The maximum group delay is less than 10 ns across the entire tuning range. The temperature stability of the center frequency from -50°C to 50°C is better than 2%. 
The measured tuning speed of the filter is better than 80 s, and the is better than 20 dBm, which are in good agreement with simulations. The filter occupies a small size of less than 1.5 cm × 1.1 cm. The implemented filter shows the highest performance amongst the fully integrated microelectromechanical systems filters operating at sub-gigahertz range.", "title": "" }, { "docid": "fd7c514e8681a5292bcbf2bbf6e75664", "text": "In modern days, a large number of automobile accidents are caused by driver fatigue. To address the problem, we propose a vision-based real-time driver fatigue detection system based on eye-tracking, which is an active safety system. Eye tracking is one of the key technologies for future driver assistance systems since human eyes contain much information about the driver's condition such as gaze, attention level, and fatigue level. The face and eyes of the driver are first localized and then marked in every frame obtained from the video source. The eyes are tracked in real time using a correlation function with an automatically generated online template. Additionally, the driver’s distraction and conversations with passengers during driving can lead to serious consequences. A real-time vision-based model for monitoring the driver’s unsafe states, including the fatigue state, is proposed. A time-based eye glance to mitigate driver distraction is proposed. Keywords— Driver fatigue, Eye-Tracking, Template matching,", "title": "" } ]
scidocsrr
c3a67924b943b0a1671f266cf8d42406
Hybrid CPU-GPU Framework for Network Motifs
[ { "docid": "777d4e55f3f0bbb0544130931006b237", "text": "Spatial pyramid matching is a standard architecture for categorical image retrieval. However, its performance is largely limited by the prespecified rectangular spatial regions when pooling local descriptors. In this paper, we propose to learn object-shaped and directional receptive fields for image categorization. In particular, different objects in an image are seamlessly constructed by superpixels, while the direction captures human gaze shifting path. By generating a number of superpixels in each image, we construct graphlets to describe different objects. They function as the object-shaped receptive fields for image comparison. Due to the huge number of graphlets in an image, a saliency-guided graphlet selection algorithm is proposed. A manifold embedding algorithm encodes graphlets with the semantics of training image tags. Then, we derive a manifold propagation to calculate the postembedding graphlets by leveraging visual saliency maps. The sequentially propagated graphlets constitute a path that mimics human gaze shifting. Finally, we use the learned graphlet path as receptive fields for local image descriptor pooling. The local descriptors from similar receptive fields of pairwise images more significantly contribute to the final image kernel. Thorough experiments demonstrate the advantage of our approach.", "title": "" } ]
[ { "docid": "b9b6fc972d887f64401ec77e3ca1e49b", "text": "We select a menu of seven popular decision theories and embed each theory in five models of stochastic choice, including tremble, Fechner and random utility model. We find that the estimated parameters of decision theories differ significantly when theories are combined with different models. Depending on the selected model of stochastic choice we obtain different rankings of decision theories with regard to their goodness of fit to the data. The fit of all analyzed decision theories improves significantly when they are embedded in a Fechner model of heteroscedastic truncated errors or a random utility model. Copyright  2009 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "cf751df3c52306a106fcd00eef28b1a4", "text": "Mul-T is a parallel Lisp system, based on Multilisp's future construct, that has been developed to run on an Encore Multimax multiprocessor. Mul-T is an extended version of the Yale T system and uses the T system's ORBIT compiler to achieve “production quality” performance on stock hardware — about 100 times faster than Multilisp. Mul-T shows that futures can be implemented cheaply enough to be useful in a production-quality system. Mul-T is fully operational, including a user interface that supports managing groups of parallel tasks.", "title": "" }, { "docid": "141c28bfbeb5e71dc68d20b6220794c3", "text": "The development of topical cosmetic anti-aging products is becoming increasingly sophisticated. This is demonstrated by the benefit agents selected and the scientific approaches used to identify them, treatment protocols that increasingly incorporate multi-product regimens, and the level of rigor in the clinical testing used to demonstrate efficacy. Consistent with these principles, a new cosmetic anti-aging regimen was recently developed. The key product ingredients were identified based on an understanding of the key mechanistic themes associated with aging at the genomic level coupled with appropriate in vitro testing. The products were designed to provide optimum benefits when used in combination in a regimen format. This cosmetic regimen was then tested for efficacy against the appearance of facial wrinkles in a 24-week clinical trial compared with 0.02% tretinoin, a recognized benchmark prescription treatment for facial wrinkling. The cosmetic regimen significantly improved wrinkle appearance after 8 weeks relative to tretinoin and was better tolerated. Wrinkle appearance benefits from the two treatments in cohorts of subjects who continued treatment through 24 weeks were also comparable.", "title": "" }, { "docid": "083d5b88cc1bf5490a0783a4a94e9fb2", "text": "Taking care and maintenance of a healthy population is the Strategy of each country. Information and communication technologies in the health care system have led to many changes in order to improve the quality of health care services to patients, rational spending time and reduce costs. In the booming field of IT research, the reach of drug delivery, information on grouping of similar drugs has been lacking. The wealth distribution and drug affordability at a certain demographic has been interlinked and proposed in this paper. Looking at the demographic we analyze and group the drugs based on target action and link this to the wealth and the people to medicine ratio, which can be accomplished via data mining and web mining. 
The data thus mined will be analysed and made available to public and commercial purpose for their further knowledge and benefit.", "title": "" }, { "docid": "f3a7e0f63d85c069e3f2ab75dcedc671", "text": "The commit processing in a Distributed Real Time Database (DRTDBS) can significantly increase execution time of a transaction. Therefore, designing a good commit protocol is important for the DRTDBS; the main challenge is the adaptation of standard commit protocol into the real time database system and so, decreasing the number of missed transaction in the systems. In these papers we review the basic commit protocols and the other protocols depend on it, for enhancing the transaction performance in DRTDBS. We propose a new commit protocol for reducing the number of transaction that missing their deadline. Keywords— DRTDBS, Commit protocols, Commit processing, 2PC protocol, 3PC protocol, Missed Transaction, Abort Transaction.", "title": "" }, { "docid": "711ad6f6641b916f25f08a32d4a78016", "text": "Information technology (IT) such as Electronic Data Interchange (EDI), Radio Frequency Identification Technology (RFID), wireless, the Internet and World Wide Web (WWW), and Information Systems (IS) such as Electronic Commerce (E-Commerce) systems and Enterprise Resource Planning (ERP) systems have had tremendous impact in education, healthcare, manufacturing, transportation, retailing, pure services, and even war. Many organizations turned to IT/IS to help them achieve their goals; however, many failed to achieve the full potential of IT/IS. These failures can be attributed at least in part to a weak link in the planning process. That weak link is the IT/IS justification process. The decision-making process has only grown more difficult in recent years with the increased complexity of business brought about by the rapid growth of supply chain management, the virtual enterprise and E-business. These are but three of the many changes in the business environment over the past 10–12 years. The complexities of this dynamic new business environment should be taken into account in IT/IS justification. We conducted a review of the current literature on IT/IS justification. The purpose of the literature review was to assemble meaningful information for the development of a framework for IT/IS evaluation that better reflects the new business environment. A suitable classification scheme has been proposed for organizing the literature reviewed. Directions for future research are indicated. 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "74a9612c1ca90a9d7b6152d19af53d29", "text": "Collective entity disambiguation, or collective entity linking aims to jointly resolve multiple mentions by linking them to their associated entities in a knowledge base. Previous works are primarily based on the underlying assumption that entities within the same document are highly related. However, the extent to which these entities are actually connected in reality is rarely studied and therefore raises interesting research questions. For the first time, this paper shows that the semantic relationships between mentioned entities within a document are in fact less dense than expected. This could be attributed to several reasons such as noise, data sparsity, and knowledge base incompleteness. As a remedy, we introduce MINTREE, a new tree-based objective for the problem of entity disambiguation. 
The key intuition behind MINTREE is the concept of coherence relaxation which utilizes the weight of a minimum spanning tree to measure the coherence between entities. Based on this new objective, we design Pair-Linking, a novel iterative solution for the MINTREE optimization problem. The idea of Pair-Linking is simple: instead of considering all the given mentions, Pair-Linking iteratively selects a pair with the highest confidence at each step for decision making. Via extensive experiments on 8 benchmark datasets, we show that our approach is not only more accurate but also surprisingly faster than many state-of-the-art collective linking algorithms.", "title": "" }, { "docid": "5398b76e55bce3c8e2c1cd89403b8bad", "text": "To cite: He A, Kwatra SG, Kazi N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016215335 DESCRIPTION A woman aged 45 years presented for evaluation of skin lesions. She reported an 8–9-year history of occasionally tender, waxing-and-waning skin nodules refractory to dapsone, prednisone and methotrexate. Examination revealed multiple indurated subcutaneous nodules distributed on the upper extremities, with scattered patches of lipoatrophy in areas of nodule regression (figure 1). Her medical history was unremarkable; CBC and CMP were within normal limits, with no history of radiotherapy or evidence of internal organ involvement. She had a positive ANA titre (1:160, speckled), but negative anti-dsDNA, anti-Smith, anti-Ro and anti-La antibodies. Differential diagnosis included erythema nodosum (EN), erythema induratum of Bazin (EIB), lupus profundus (LP) and cutaneous lymphoma. Initial wedge biopsy in 2008 disclosed a predominantly lobular panniculitic process with some septal involvement (figure 2A). Broad zones of necrosis were present (figure 2B). The infiltrate consisted of a pleomorphic population of lymphocytes with occasional larger atypical lymphocytes (figure 2C). There were foci of adipocyte rimming by the atypical lymphocytes (figure 2C). Immunophenotyping revealed predominance of CD3+ T cells with some CD20+ B-cell aggregates. The atypical cells stained CD4 and CD8 in approximately equal ratios. TIA-1 was positive in many of the atypical cells but not prominently enough to render a diagnosis of cytotoxic T-cell lymphoma. T-cell receptor PCR studies showed polyclonality. Subsequent biopsies performed annually after treatment with prednisone in 2008 and 2010, dapsone in 2009 and methotrexate in 2012 showed very similar pathological and molecular features. Adipocyte rimming and TCR polyclonality persisted. EN is characterised by subcutaneous nodules on the lower extremities in association with elevated erythrocyte sedimentation rate (ESR) and C reactive protein (CRP), influenza-like prodrome preceding nodule formation and self-limiting course. Histologically, EN shows a mostly septal panniculitis with radial granulomas. EN was ruled out on the basis of normal ESR (6) and CRP (<0.1), chronic relapsing course and predominantly lobular panniculitis process histologically. EIB typically presents with violaceous nodules located on the posterior lower extremities, with arms rarely affected, of patients with a history of tuberculosis (TB). Histologically, EIB shows granulomatous inflammation with focal necrosis, vasculitis and septal fibrosis. Our patient had no evidence or history of TB infection and presented with nodules of a different clinical morphology. 
Ultimately, this constellation of histological and immunophenotypic findings showed an atypical panniculitic T-lymphocytic infiltrate. Although the lesion showed a lobular panniculitis with features that could be seen in subcutaneous panniculitis-like T-cell lymphoma (SPTCL), the presence of plasma cells, absence of CD8 and TIA restriction and T-cell polyclonality did not definitively support that", "title": "" }, { "docid": "cb3d1448269b29807dc62aa96ff6ad1a", "text": "OBJECTIVES\nInformation overload in electronic medical records can impede providers' ability to identify important clinical data and may contribute to medical error. An understanding of the information requirements of ICU providers will facilitate the development of information systems that prioritize the presentation of high-value data and reduce information overload. Our objective was to determine the clinical information needs of ICU physicians, compared to the data available within an electronic medical record.\n\n\nDESIGN\nProspective observational study and retrospective chart review.\n\n\nSETTING\nThree ICUs (surgical, medical, and mixed) at an academic referral center.\n\n\nSUBJECTS\nNewly admitted ICU patients and physicians (residents, fellows, and attending staff).\n\n\nMEASUREMENTS AND MAIN RESULTS\nThe clinical information used by physicians during the initial diagnosis and treatment of admitted patients was captured using a questionnaire. Clinical information concepts were ranked according to the frequency of reported use (primary outcome) and were compared to information availability in the electronic medical record (secondary outcome). Nine hundred twenty-five of 1,277 study questionnaires (408 patients) were completed. Fifty-one clinical information concepts were identified as being useful during ICU admission. A median (interquartile range) of 11 concepts (6-16) was used by physicians per patient admission encounter with four used greater than 50% of the time. Over 25% of the clinical data available in the electronic medical record was never used, and only 33% was used greater than 50% of the time by admitting physicians.\n\n\nCONCLUSIONS\nPhysicians use a limited number of clinical information concepts at the time of patient admission to the ICU. The electronic medical record contains an abundance of unused data. Better electronic data management strategies are needed, including the priority display of frequently used clinical concepts within the electronic medical record, to improve the efficiency of ICU care.", "title": "" }, { "docid": "b83e537a2c8dcd24b096005ef0cb3897", "text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. 
We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.", "title": "" }, { "docid": "f38709ee76dd9988b36812a7801f7336", "text": "BACKGROUND\nMost individuals with mood disorders experience psychiatric and/or medical comorbidity. Available treatment guidelines for major depressive disorder (MDD) and bipolar disorder (BD) have focused on treating mood disorders in the absence of comorbidity. Treating comorbid conditions in patients with mood disorders requires sufficient decision support to inform appropriate treatment.\n\n\nMETHODS\nThe Canadian Network for Mood and Anxiety Treatments (CANMAT) task force sought to prepare evidence- and consensus-based recommendations on treating comorbid conditions in patients with MDD and BD by conducting a systematic and qualitative review of extant data. The relative paucity of studies in this area often required a consensus-based approach to selecting and sequencing treatments.\n\n\nRESULTS\nSeveral principles emerge when managing comorbidity. They include, but are not limited to: establishing the diagnosis, risk assessment, establishing the appropriate setting for treatment, chronic disease management, concurrent or sequential treatment, and measurement-based care.\n\n\nCONCLUSIONS\nEfficacy, effectiveness, and comparative effectiveness research should emphasize treatment and management of conditions comorbid with mood disorders. Clinicians are encouraged to screen and systematically monitor for comorbid conditions in all individuals with mood disorders. The common comorbidity in mood disorders raises fundamental questions about overlapping and discrete pathoetiology.", "title": "" }, { "docid": "af12993c21eb626a7ab8715da1f608c9", "text": "Today, both the military and commercial sectors are placing an increased emphasis on global communications. This has prompted the development of several low earth orbit satellite systems that promise worldwide connectivity and real-time voice communications. This article provides a tutorial overview of the IRIDIUM low earth orbit satellite system and performance results obtained via simulation. First, it presents an overview of key IRIDIUM design parameters and features. Then, it examines the issues associated with routing in a dynamic network topology, focusing on network management and routing algorithm selection. Finally, it presents the results of the simulation and demonstrates that the IRIDIUM system is a robust system capable of meeting published specifications.", "title": "" }, { "docid": "f614df1c1775cd4e2a6927fce95ffa46", "text": "In this paper we have designed and implemented (15, k) a BCH Encoder and decoder using VHDL for reliable data transfer in AWGN channel with multiple error correction control. The digital logic implementation of binary encoding of multiple error correcting BCH code (15, k) of length n=15 over GF (2 4 ) with irreducible primitive polynomial x 4 +x+1 is organized into shift register circuits. Using the cyclic codes, the reminder b(x) can be obtained in a linear (15-k) stage shift register with feedback connections corresponding to the coefficients of the generated polynomial. Three encoders are designed using VHDL to encode the single, double and triple error correcting BCH code (15, k) corresponding to the coefficient of generated polynomial. 
Information bit is transmitted in unchanged form up to K clock cycles and during this period parity bits are calculated in the LFSR then the parity bits are transmitted from k+1 to 15 clock cycles. Total 15-k numbers of parity bits with k information bits are transmitted in 15 code word. In multiple error correction method, we have implemented (15, 5 ,3 ) ,(15,7, 2) and (15, 11, 1) BCH encoder and decoder using VHDL and the simulation is done using Xilinx ISE 14.2. KeywordsBCH, BER, SNR, BCH Encoder, Decoder VHDL, Error Correction, AWGN, LFSR", "title": "" }, { "docid": "81291c707a102fac24a9d5ab0665238d", "text": "CAN bus is ISO international standard serial communication protocol. It is one of the most widely used fieldbus in the world. It has become the standard bus of embedded industrial control LAN. Ethernet is the most common communication protocol standard that is applied in the existing LAN. Networked industrial control usually adopts fieldbus and Ethernet network, thus the protocol conversion problems of the heterogeneous network composed of Ethernet and CAN bus has become one of the research hotspots in the technology of the industrial control network. STM32F103RC ARM microprocessor was used in the design of the Ethernet-CAN protocol conversion module, the simplified TCP/IP communication protocol uIP protocol was adopted to improve the efficiency of the protocol conversion and guarantee the stability of the system communication. The results of the experiments show that the designed module can realize high-speed and transparent protocol conversion.", "title": "" }, { "docid": "32744d62b45f742cdab55ab462670a39", "text": "The kinematics of manipulators is a central problem in the automatic control of robot manipulators. Theoretical background for the analysis of the 5 Dof Lynx-6 educational Robot Arm kinematics is presented in this paper. The kinematics problem is defined as the transformation from the Cartesian space to the joint space and vice versa. The Denavit-Harbenterg (D-H) model of representation is used to model robot links and joints in this study. Both forward and inverse kinematics solutions for this educational manipulator are presented, An effective method is suggested to decrease multiple solutions in inverse kinematics. A visual software package, named MSG, is also developed for testing Motional Characteristics of the Lynx-6 Robot arm. The kinematics solutions of the software package were found to be identical with the robot arm’s physical motional behaviors. Keywords—Lynx 6, robot arm, forward kinematics, inverse kinematics, software, DH parameters, 5 DOF ,SSC-32 , simulator.", "title": "" }, { "docid": "189d0b173f8a9e0b3deb21398955dc3c", "text": "Do investments in customer satisfaction lead to excess returns? If so, are these returns associated with higher stock market risk? The empirical evidence presented in this article suggests that the answer to the first question is yes, but equally remarkable, the answer to the second question is no, suggesting that satisfied customers are economic assets with high returns/low risk. Although these results demonstrate stock market imperfections with respect to the time it takes for share prices to adjust, they are consistent with previous studies in marketing in that a firm’s satisfied customers are likely to improve both the level and the stability of net cash flows. The implication, implausible as it may seem in other contexts, is high return/low risk. 
Specifically, the authors find that customer satisfaction, as measured by the American Customer Satisfaction Index (ACSI), is significantly related to market value of equity. Yet news about ACSI results does not move share prices. This apparent inconsistency is the catalyst for examining whether excess stock returns might be generated as a result. The authors present two stock portfolios: The first is a paper portfolio that is back tested, and the second is an actual case. At low systematic risk, both outperform the market by considerable margins. In other words, it is possible to beat the market consistently by investing in firms that do well on the ACSI.", "title": "" }, { "docid": "361dc8037ebc30cd2f37f4460cf43569", "text": "OVERVIEW: Next-generation semiconductor factories need to support miniaturization below 100 nm and have higher production efficiency, mainly of 300-mm-diameter wafers. Particularly to reduce the price of semiconductor devices, shorten development time [thereby reducing the TAT (turn-around time)], and support frequent product changeovers, semiconductor manufacturers must enhance the productivity of their systems. To meet these requirements, Hitachi proposes solutions that will support e-manufacturing on the next-generation semiconductor production line (see Fig. 1). Yasutsugu Usami Isao Kawata Hideyuki Yamamoto Hiroyoshi Mori Motoya Taniguchi, Dr. Eng.", "title": "" }, { "docid": "822e6c57ea2bbb53d43e44cf1bda8833", "text": "The investigators proposed that transgression-related interpersonal motivations result from 3 psychological parameters: forbearance (abstinence from avoidance and revenge motivations, and maintenance of benevolence), trend forgiveness (reductions in avoidance and revenge, and increases in benevolence), and temporary forgiveness (transient reductions in avoidance and revenge, and transient increases in benevolence). In 2 studies, the investigators examined this 3-parameter model. Initial ratings of transgression severity and empathy were directly related to forbearance but not trend forgiveness. Initial responsibility attributions were inversely related to forbearance but directly related to trend forgiveness. When people experienced high empathy and low responsibility attributions, they also tended to experience temporary forgiveness. The distinctiveness of each of these 3 parameters underscores the importance of studying forgiveness temporally.", "title": "" }, { "docid": "0eff5b8ec08329b4a5d177baab1be512", "text": "In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. 
The potential of using emerging DL techniques for IoT data analytics is then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.", "title": "" } ]
scidocsrr
0693209386b1531a62d4e5726c021392
Understanding Generation Y and their use of social media: a review and research agenda
[ { "docid": "b4880ddb59730f465f585f3686d1d2b1", "text": "The authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site and compare it with traditional marketing vehicles. Because social network sites record the electronic invitations sent out by existing members, outbound WOM may be precisely tracked. WOM, along with traditional marketing, can then be linked to the number of new members subsequently joining the site (signups). Due to the endogeneity among WOM, new signups, and traditional marketing activity, the authors employ a Vector Autoregression (VAR) modeling approach. Estimates from the VAR model show that word-of-mouth referrals have substantially longer carryover effects than traditional marketing actions. The long-run elasticity of signups with respect to WOM is estimated to be 0.53 (substantially larger than the average advertising elasticities reported in the literature) and the WOM elasticity is about 20 times higher than the elasticity for marketing events, and 30 times that of media appearances. Based on revenue from advertising impressions served to a new member, the monetary value of a WOM referral can be calculated; this yields an upper bound estimate for the financial incentives the firm might offer to stimulate word-of-mouth.", "title": "" } ]
[ { "docid": "fe397e4124ef517268aaabd999bc02c4", "text": "A new frequency-reconfigurable quasi-Yagi dipole antenna is presented. It consists of a driven dipole element with two varactors in two arms, a director with an additional varactor, a truncated ground plane reflector, a microstrip-to-coplanar-stripline (CPS) transition, and a novel biasing circuit. The effective electrical length of the director element and that of the driven arms are adjusted together by changing the biasing voltages. A 35% continuously frequency-tuning bandwidth, from 1.80 to 2.45 GHz, is achieved. This covers a number of wireless communication systems, including 3G UMTS, US WCS, and WLAN. The length-adjustable director allows the endfire pattern with relatively high gain to be maintained over the entire tuning bandwidth. Measured results show that the gain varies from 5.6 to 7.6 dBi and the front-to-back ratio is better than 10 dB. The H-plane cross polarization is below -15 dB, and that in the E-plane is below -20 dB.", "title": "" }, { "docid": "7e1c0505e40212ef0e8748229654169f", "text": "This article addresses the concept of quality risk in outsourcing. Recent trends in outsourcing extend a contract manufacturer’s (CM’s) responsibility to several functional areas, such as research and development and design in addition to manufacturing. This trend enables an original equipment manufacturer (OEM) to focus on sales and pricing of its product. However, increasing CM responsibilities also suggest that the OEM’s product quality is mainly determined by its CM. We identify two factors that cause quality risk in this outsourcing relationship. First, the CM and the OEM may not be able to contract on quality; second, the OEM may not know the cost of quality to the CM. We characterize the effects of these two quality risk factors on the firms’ profits and on the resulting product quality. We determine how the OEM’s pricing strategy affects quality risk. We show, for example, that the effect of noncontractible quality is higher than the effect of private quality cost information when the OEM sets the sales price after observing the product’s quality. We also show that committing to a sales price mitigates the adverse effect of quality risk. To obtain these results, we develop and analyze a three-stage decision model. This model is also used to understand the impact of recent information technologies on profits and product quality. For example, we provide a decision tree that an OEM can use in deciding whether to invest in an enterprise-wide quality management system that enables accounting of quality-related activities across the supply chain. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 56: 669–685, 2009", "title": "" }, { "docid": "46072702edbe5177e48510fe37b77943", "text": "Due to the explosive increase of online images, content-based image retrieval has gained a lot of attention. The success of deep learning techniques such as convolutional neural networks have motivated us to explore its applications in our context. The main contribution of our work is a novel end-to-end supervised learning framework that learns probability-based semantic-level similarity and feature-level similarity simultaneously. The main advantage of our novel hashing scheme that it is able to reduce the computational cost of retrieval significantly at the state-of-the-art efficiency level. 
We report on comprehensive experiments using public available datasets such as Oxford, Holidays and ImageNet 2012 retrieval datasets.", "title": "" }, { "docid": "7d0020ff1a7500df1458ddfd568db7b4", "text": "In this position paper, we address the problems of automated road congestion detection and alerting systems and their security properties. We review different theoretical adaptive road traffic control approaches, and three widely deployed adaptive traffic control systems (ATCSs), namely, SCATS, SCOOT and InSync. We then discuss some related research questions, and the corresponding possible approaches, as well as the adversary model and potential attack scenarios. Two theoretical concepts of automated road congestion alarm systems (including system architecture, communication protocol, and algorithms) are proposed on top of ATCSs, such as SCATS, SCOOT and InSync, by incorporating secure wireless vehicle-to-infrastructure (V2I) communications. Finally, the security properties of the proposed system have been discussed and analysed using the ProVerif protocol verification tool.", "title": "" }, { "docid": "0882fc46d918957e73d0381420277bdc", "text": "The term ‘resource use efficiency in agriculture’ may be broadly defined to include the concepts of technical efficiency, allocative efficiency and environmental efficiency. An efficient farmer allocates his land, labour, water and other resources in an optimal manner, so as to maximise his income, at least cost, on sustainable basis. However, there are countless studies showing that farmers often use their resources sub-optimally. While some farmers may attain maximum physical yield per unit of land at a high cost, some others achieve maximum profit per unit of inputs used. Also in the process of achieving maximum yield and returns, some farmers may ignore the environmentally adverse consequences, if any, of their resource use intensity. Logically all enterprising farmers would try to maximise their farm returns by allocating resources in an efficient manner. But as resources (both qualitatively and quantitatively) and managerial efficiency of different farmers vary widely, the net returns per unit of inputs used also vary significantly from farm to farm. Also a farmer’s access to technology, credit, market and other infrastructure and policy support, coupled with risk perception and risk management capacity under erratic weather and price situations would determine his farm efficiency. Moreover, a farmer knowingly or unknowingly may over-exploit his land and water resources for maximising farm income in the short run, thereby resulting in soil and water degradation and rapid depletion of ground water, and also posing a problem of sustainability of agriculture in the long run. In fact, soil degradation, depletion of groundwater and water pollution due to farmers’ managerial inefficiency or otherwise, have a social cost, while farmers who forego certain agricultural practices which cause any such sustainability problem may have a high opportunity cost. Furthermore, a farmer may not be often either fully aware or properly guided and aided for alternative, albeit best possible uses of his scarce resources like land and water. Thus, there are economic as well as environmental aspects of resource use efficiency. 
In addition, from the point of view of public exchequer, the resource use efficiency would mean that public investment, subsidies and credit for agriculture are", "title": "" }, { "docid": "f611ccffbe10acb7dcbd6cb8f7ffaeaa", "text": "We study the problem of single-image depth estimation for images in the wild. We collect human annotated surface normals and use them to help train a neural network that directly predicts pixel-wise depth. We propose two novel loss functions for training with surface normal annotations. Experiments on NYU Depth, KITTI, and our own dataset demonstrate that our approach can significantly improve the quality of depth estimation in the wild.", "title": "" }, { "docid": "6cf2ffb0d541320b1ad04dc3b9e1c9a4", "text": "Prediction of potential fraudulent activities may prevent both the stakeholders and the appropriate regulatory authorities of national or international level from being deceived. The objective difficulties on collecting adequate data that are obsessed by completeness affects the reliability of the most supervised Machine Learning methods. This work examines the effectiveness of forecasting fraudulent financial statements using semi-supervised classification techniques (SSC) that require just a few labeled examples for achieving robust learning behaviors mining useful data patterns from a larger pool of unlabeled examples. Based on data extracted from Greek firms, a number of comparisons between supervised and semi-supervised algorithms has been conducted. According to the produced results, the later algorithms are favored being examined over several scenarios of different Labeled Ratio (R) values.", "title": "" }, { "docid": "b4ed57258b85ab4d81d5071fc7ad2cc9", "text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.", "title": "" }, { "docid": "04756d4dfc34215c8acb895ecfcfb406", "text": "The author describes five separate projects he has undertaken in the intersection of computer science and Canadian income tax law. 
They are:A computer-assisted instruction (CAI) course for teaching income tax, programmed using conventional CAI techniques;\nA “document modeling” computer program for generating the documentation for a tax-based transaction and advising the lawyer-user as to what decisions should be made and what the tax effects will be, programmed in a conventional language;\nA prototype expert system for determining the income tax effects of transactions and tax-defined relationships, based on a PROLOG representation of the rules of the Income Tax Act;\nAn intelligent CAI (ICAI) system for generating infinite numbers of randomized quiz questions for students, computing the answers, and matching wrong answers to particular student errors, based on a PROLOG representation of the rules of the Income Tax Act; and\nA Hypercard stack for providing information about income tax, enabling both education and practical research to follow the user's needs path.\n\nThe author shows that non-AI approaches are a way to produce packages quickly and efficiently. Their primary disadvantage is the massive rewriting required when the tax law changes. AI approaches based on PROLOG, on the other hand, are harder to develop to a practical level but will be easier to audit and maintain. The relationship between expert systems and CAI is discussed.", "title": "" }, { "docid": "9500dfc92149c5a808cec89b140fc0c3", "text": "We present a new approach to the geometric alignment of a point cloud to a surface and to related registration problems. The standard algorithm is the familiar ICP algorithm. Here we provide an alternative concept which relies on instantaneous kinematics and on the geometry of the squared distance function of a surface. The proposed algorithm exhibits faster convergence than ICP; this is supported both by results of a local convergence analysis and by experiments.", "title": "" }, { "docid": "a2258145e9366bfbf515b3949b2d70fa", "text": "Affect intensity (AI) may reconcile 2 seemingly paradoxical findings: Women report more negative affect than men but equal happiness as men. AI describes people's varying response intensity to identical emotional stimuli. A college sample of 66 women and 34 men was assessed on both positive and negative affect using 4 measurement methods: self-report, peer report, daily report, and memory performance. A principal-components analysis revealed an affect balance component and an AI component. Multimeasure affect balance and AI scores were created, and t tests were computed that showed women to be as happy as and more intense than men. Gender accounted for less than 1% of the variance in happiness but over 13% in AI. Thus, depression findings of more negative affect in women do not conflict with well-being findings of equal happiness across gender. Generally, women's more intense positive emotions balance their higher negative affect.", "title": "" }, { "docid": "47505c95f8a3cf136b3b5a76847990fc", "text": "We present a hybrid algorithm to compute the convex hull of points in three or higher dimensional spaces. Our formulation uses a GPU-based interior point filter to cull away many of the points that do not lie on the boundary. The convex hull of remaining points is computed on a CPU. The GPU-based filter proceeds in an incremental manner and computes a pseudo-hull that is contained inside the convex hull of the original points. The pseudo-hull computation involves only localized operations and maps well to GPU architectures. 
Furthermore, the underlying approach extends to high dimensional point sets and deforming points. In practice, our culling filter can reduce the number of candidate points by two orders of magnitude. We have implemented the hybrid algorithm on commodity GPUs, and evaluated its performance on several large point sets. In practice, the GPU-based filtering algorithm can cull up to 85M interior points per second on an NVIDIA GeForce GTX 580 and the hybrid algorithm improves the overall performance of convex hull computation by 10 − 27 times (for static point sets) and 22 − 46 times (for deforming point sets).", "title": "" }, { "docid": "a83ba31bdf54c9dec09788bfb1c972fc", "text": "In 1999, ISPOR formed the Quality of Life Special Interest group (QoL-SIG)--Translation and Cultural Adaptation group (TCA group) to stimulate discussion on and create guidelines and standards for the translation and cultural adaptation of patient-reported outcome (PRO) measures. After identifying a general lack of consistency in current methods and published guidelines, the TCA group saw a need to develop a holistic perspective that synthesized the full spectrum of published methods. This process resulted in the development of Translation and Cultural Adaptation of Patient Reported Outcomes Measures--Principles of Good Practice (PGP), a report on current methods, and an appraisal of their strengths and weaknesses. The TCA Group undertook a review of evidence from current practice, a review of the literature and existing guidelines, and consideration of the issues facing the pharmaceutical industry, regulators, and the broader outcomes research community. Each approach to translation and cultural adaptation was considered systematically in terms of rationale, components, key actors, and the potential benefits and risks associated with each approach and step. The results of this review were subjected to discussion and challenge within the TCA group, as well as consultation with the outcomes research community at large. Through this review, a consensus emerged on a broad approach, along with a detailed critique of the strengths and weaknesses of the differing methodologies. The results of this review are set out as \"Translation and Cultural Adaptation of Patient Reported Outcomes Measures--Principles of Good Practice\" and are reported in this document.", "title": "" }, { "docid": "ba65c99adc34e05cf0cd1b5618a21826", "text": "We investigate a family of bugs in blockchain-based smart contracts, which we call event-ordering (or EO) bugs. These bugs are intimately related to the dynamic ordering of contract events, i.e., calls of its functions on the blockchain, and enable potential exploits of millions of USD worth of Ether. Known examples of such bugs and prior techniques to detect them have been restricted to a small number of event orderings, typicall 1 or 2. Our work provides a new formulation of this general class of EO bugs as finding concurrency properties arising in long permutations of such events. The technical challenge in detecting our formulation of EO bugs is the inherent combinatorial blowup in path and state space analysis, even for simple contracts. We propose the first use of partial-order reduction techniques, using happen-before relations extracted automatically for contracts, along with several other optimizations built on a dynamic symbolic execution technique. We build an automatic tool called ETHRACER that requires no hints from users and runs directly on Ethereum bytecode. 
It flags 7-11% of over ten thousand contracts analyzed in roughly 18.5 minutes per contract, providing compact event traces that human analysts can run as witnesses. These witnesses are so compact that confirmations require only a few minutes of human effort. Half of the flagged contracts have subtle EO bugs, including in ERC-20 contracts that carry hundreds of millions of dollars worth of Ether. Thus, ETHRACER is effective at detecting a subtle yet dangerous class of bugs which existing tools miss.", "title": "" }, { "docid": "70c6da9da15ad40b4f64386b890ccf51", "text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.", "title": "" }, { "docid": "0ec0b6797069ee5bd737ea787cba43ef", "text": "Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented. MULLER, Henning, et al. Performance Evaluation in Content-Based Image Retrieval: Overview and Proposals. Genève : 1999", "title": "" }, { "docid": "c26e9f486621e37d66bf0925d8ff2a3e", "text": "We report the first two Malaysian children with partial deletion 9p syndrome, a well delineated but rare clinical entity. Both patients had trigonocephaly, arching eyebrows, anteverted nares, long philtrum, abnormal ear lobules, congenital heart lesions and digital anomalies. In addition, the first patient had underdeveloped female genitalia and anterior anus. The second patient had hypocalcaemia and high arched palate and was initially diagnosed with DiGeorge syndrome. Chromosomal analysis revealed a partial deletion at the short arm of chromosome 9. Karyotyping should be performed in patients with craniostenosis and multiple abnormalities as an early syndromic diagnosis confers prognostic, counselling and management implications.", "title": "" }, { "docid": "c9c98e50a49bbc781047dc425a2d6fa1", "text": "Understanding wound healing today involves much more than simply stating that there are three phases: \"inflammation, proliferation, and maturation.\" Wound healing is a complex series of reactions and interactions among cells and \"mediators.\" Each year, new mediators are discovered and our understanding of inflammatory mediators and cellular interactions grows. 
This article will attempt to provide a concise report of the current literature on wound healing by first reviewing the phases of wound healing followed by \"the players\" of wound healing: inflammatory mediators (cytokines, growth factors, proteases, eicosanoids, kinins, and more), nitric oxide, and the cellular elements. The discussion will end with a pictorial essay summarizing the wound-healing process.", "title": "" }, { "docid": "ceedf70c92099fc8612a38f91f2c9507", "text": "Recent work has demonstrated the value of social media monitoring for health surveillance (e.g., tracking influenza or depression rates). It is an open question whether such data can be used to make causal inferences (e.g., determining which activities lead to increased depression rates). Even in traditional, restricted domains, estimating causal effects from observational data is highly susceptible to confounding bias. In this work, we estimate the effect of exercise on mental health from Twitter, relying on statistical matching methods to reduce confounding bias. We train a text classifier to estimate the volume of a user’s tweets expressing anxiety, depression, or anger, then compare two groups: those who exercise regularly (identified by their use of physical activity trackers like Nike+), and a matched control group. We find that those who exercise regularly have significantly fewer tweets expressing depression or anxiety; there is no significant difference in rates of tweets expressing anger. We additionally perform a sensitivity analysis to investigate how the many experimental design choices in such a study impact the final conclusions, including the quality of the classifier and the construction of the control group.", "title": "" }, { "docid": "fd32bf580b316634e44a8c37adfab2eb", "text": "In a previous paper we reported the successful use of graph coloring techniques for doing global register allocation in an experimental PL/I optimizing compiler. When the compiler cannot color the register conflict graph with a number of colors equal to the number of available machine registers, it must add code to spill and reload registers to and from storage. Previously the compiler produced spill code whose quality sometimes left much to be desired, and the ad hoc techniques used took considerable amounts of compile time. We have now discovered how to extend the graph coloring approach so that it naturally solves the spilling problem. Spill decisions are now made on the basis of the register conflict graph and cost estimates of the value of keeping the result of a computation in a register rather than in storage. This new approach produces better object code and takes much less compile time.", "title": "" } ]
scidocsrr
611e2f512e9bcf17f66f557d8a61e545
Visual Analytics for MOOC Data
[ { "docid": "c995426196ad943df2f5a4028a38b781", "text": "Today it is quite common for people to exchange hundreds of comments in online conversations (e.g., blogs). Often, it can be very difficult to analyze and gain insights from such long conversations. To address this problem, we present a visual text analytic system that tightly integrates interactive visualization with novel text mining and summarization techniques to fulfill the information needs of users in exploring conversations. First, we perform a user requirement analysis for the domain of blog conversations to derive a set of design principles. Following these principles, we present an interface that visualizes a combination of various metadata and textual analysis results, supporting the user in interactively exploring the blog conversations. We conclude with an informal user evaluation, which provides anecdotal evidence about the effectiveness of our system and directions for further design.", "title": "" } ]
[ { "docid": "eb888ba37e7e97db36c330548569508d", "text": "Since the first online demonstration of Neural Machine Translation (NMT) by LISA (Bahdanau et al., 2014), NMT development has recently moved from laboratory to production systems as demonstrated by several entities announcing rollout of NMT engines to replace their existing technologies. NMT systems have a large number of training configurations and the training process of such systems is usually very long, often a few weeks, so role of experimentation is critical and important to share. In this work, we present our approach to production-ready systems simultaneously with release of online demonstrators covering a large variety of languages ( 12 languages, for32 language pairs). We explore different practical choices: an efficient and evolutive open-source framework; data preparation; network architecture; additional implemented features; tuning for production; etc. We discuss about evaluation methodology, present our first findings and we finally outline further work. Our ultimate goal is to share our expertise to build competitive production systems for ”generic” translation. We aim at contributing to set up a collaborative framework to speed-up adoption of the technology, foster further research efforts and enable the delivery and adoption to/by industry of use-case specific engines integrated in real production workflows. Mastering of the technology would allow us to build translation engines suited for particular needs, outperforming current simplest/uniform systems.", "title": "" }, { "docid": "821be0a049a74abf5b009b012022af2f", "text": "BACKGROUND\nIn theory, infections that arise after female genital mutilation (FGM) in childhood might ascend to the internal genitalia, causing inflammation and scarring and subsequent tubal-factor infertility. Our aim was to investigate this possible association between FGM and primary infertility.\n\n\nMETHODS\nWe did a hospital-based case-control study in Khartoum, Sudan, to which we enrolled women (n=99) with primary infertility not caused by hormonal or iatrogenic factors (previous abdominal surgery), or the result of male-factor infertility. These women underwent diagnostic laparoscopy. Our controls were primigravidae women (n=180) recruited from antenatal care. We used exact conditional logistic regression, stratifying for age and controlling for socioeconomic status, level of education, gonorrhoea, and chlamydia, to compare these groups with respect to FGM.\n\n\nFINDINGS\nOf the 99 infertile women examined, 48 had adnexal pathology indicative of previous inflammation. After controlling for covariates, these women had a significantly higher risk than controls of having undergone the most extensive form of FGM, involving the labia majora (odds ratio 4.69, 95% CI 1.49-19.7). Among women with primary infertility, both those with tubal pathology and those with normal laparoscopy findings were at a higher risk than controls of extensive FGM, both with borderline significance (p=0.054 and p=0.055, respectively). The anatomical extent of FGM, rather than whether or not the vulva had been sutured or closed, was associated with primary infertility.\n\n\nINTERPRETATION\nOur findings indicate a positive association between the anatomical extent of FGM and primary infertility. Laparoscopic postinflammatory adnexal changes are not the only explanation for this association, since cases without such pathology were also affected. 
The association between FGM and primary infertility is highly relevant for preventive work against this ancient practice.", "title": "" }, { "docid": "5d6c2580602945084d5a643c335c40f2", "text": "Probabilistic topic models are a suite of algorithms whose aim is to discover the hidden thematic structure in large archives of documents. In this article, we review the main ideas of this field, survey the current state-of-the-art, and describe some promising future directions. We first describe latent Dirichlet allocation (LDA) [8], which is the simplest kind of topic model. We discuss its connections to probabilistic modeling, and describe two kinds of algorithms for topic discovery. We then survey the growing body of research that extends and applies topic models in interesting ways. These extensions have been developed by relaxing some of the statistical assumptions of LDA, incorporating meta-data into the analysis of the documents, and using similar kinds of models on a diversity of data types such as social networks, images and genetics. Finally, we give our thoughts as to some of the important unexplored directions for topic modeling. These include rigorous methods for checking models built for data exploration, new approaches to visualizing text and other high dimensional data, and moving beyond traditional information engineering applications towards using topic models for more scientific ends.", "title": "" }, { "docid": "66e8940044bb58971da01cc059b8ef09", "text": "The use of Bayesian methods for data analysis is creating a revolution in fields ranging from genetics to marketing. Yet, results of our literature review, including more than 10,000 articles published in 15 journals from January 2001 and December 2010, indicate that Bayesian approaches are essentially absent from the organizational sciences. Our article introduces organizational science researchers to Bayesian methods and describes why and how they should be used. We use multiple linear regression as the framework to offer a step-by-step demonstration, including the use of software, regarding how to implement Bayesian methods. We explain and illustrate how to determine the prior distribution, compute the posterior distribution, possibly accept the null value, and produce a write-up describing the entire Bayesian process, including graphs, results, and their interpretation. We also offer a summary of the advantages of using Bayesian analysis and examples of how specific published research based on frequentist analysis-based approaches failed to benefit from the advantages offered by a Bayesian approach and how using Bayesian analyses would have led to richer and, in some cases, different substantive conclusions. We hope that our article will serve as a catalyst for the adoption of Bayesian methods in organizational science research.", "title": "" }, { "docid": "162823edcbd50579a1d386f88931d59d", "text": "Elevated liver enzymes are a common scenario encountered by physicians in clinical practice. For many physicians, however, evaluation of such a problem in patients presenting with no symptoms can be challenging. Evidence supporting a standardized approach to evaluation is lacking. Although alterations of liver enzymes could be a normal physiological phenomenon in certain cases, it may also reflect potential liver injury in others, necessitating its further assessment and management. 
In this article, we provide a guide to primary care clinicians to interpret abnormal elevation of liver enzymes in asymptomatic patients using a step-wise algorithm. Adopting a schematic approach that classifies enzyme alterations on the basis of pattern (hepatocellular, cholestatic and isolated hyperbilirubinemia), we review an approach to abnormal alteration of liver enzymes within each section, the most common causes of enzyme alteration, and suggest initial investigations.", "title": "" }, { "docid": "f008e38cd63db0e4cf90705cc5e8860e", "text": "6  Abstract— The purpose of this paper is to propose a MATLAB/ Simulink simulators for PV cell/module/array based on the Two-diode model of a PV cell.This model is known to have better accuracy at low irradiance levels which allows for more accurate prediction of PV systems performance.To reduce computational time , the input parameters are reduced as the values of Rs and Rp are estimated by an efficient iteration method. Furthermore ,all of the inputs to the simulators are information available on a standard PV module datasheet. The present paper present first abrief introduction to the behavior and functioning of a PV device and write the basic equation of the two-diode model,without the intention of providing an indepth analysis of the photovoltaic phenomena and the semicondutor physics. The introduction on PV devices is followed by the modeling and simulation of PV cell/PV module/PV array, which is the main subject of this paper. A MATLAB Simulik based simulation study of PV cell/PV module/PV array is carried out and presented .The simulation model makes use of the two-diode model basic circuit equations of PV solar cell, taking the effect of sunlight irradiance and cell temperature into consideration on the output current I-V characteristic and output power P-V characteristic . A particular typical 50W solar panel was used for model evaluation. The simulation results , compared with points taken directly from the data sheet and curves pubblished by the manufacturers, show excellent correspondance to the model.", "title": "" }, { "docid": "33f86056827e1e8958ab17e11d7e4136", "text": "The successful integration of Information and Communications Technology (ICT) into the teaching and learning of English Language is largely dependent on the level of teacher’s ICT competence, the actual utilization of ICT in the language classroom and factors that challenge teachers to use it in language teaching. The study therefore assessed the Secondary School English language teachers’ ICT literacy, the extent of ICT utilization in English language teaching and the challenges that prevent language teachers to integrate ICT in teaching. To answer the problems, three sets of survey questionnaires were distributed to 30 English teachers in the 11 schools of Cluster 1 (CarCanMadCarLan). Data gathered were analyzed using descriptive statistics and frequency count. The results revealed that the teachers’ ICT literacy was moderate. The findings provided evidence that there was only a limited use of ICT in language teaching. Feedback gathered from questionnaires show that teachers faced many challenges that demotivate them from using ICT in language activities. Based on these findings, it is recommended the teachers must be provided with intensive ICT-based trainings to equip them with knowledge of ICT and its utilization in language teaching. 
School administrators as well as stakeholders may look for interventions to upgrade school’s ICTbased resources for its optimum use in teaching and learning. Most importantly, a larger school-wide ICT development plan may be implemented to ensure coherence of ICT implementation in the teaching-learning activities. ‘ICT & Innovations in Education’ International Journal International Electronic Journal | ISSN 2321 – 7189 | www.ictejournal.com Volume 2, Issue 1 | February 2014", "title": "" }, { "docid": "5f92491cb7da547ba3ea6945832342ac", "text": "SwitchKV is a new key-value store system design that combines high-performance cache nodes with resourceconstrained backend nodes to provide load balancing in the face of unpredictable workload skew. The cache nodes absorb the hottest queries so that no individual backend node is over-burdened. Compared with previous designs, SwitchKV exploits SDN techniques and deeply optimized switch hardware to enable efficient contentbased routing. Programmable network switches keep track of cached keys and route requests to the appropriate nodes at line speed, based on keys encoded in packet headers. A new hybrid caching strategy keeps cache and switch forwarding rules updated with low overhead and ensures that system load is always well-balanced under rapidly changing workloads. Our evaluation results demonstrate that SwitchKV can achieve up to 5× throughput and 3× latency improvements over traditional system designs.", "title": "" }, { "docid": "a2a4936ca3600dc4fb2369c43ffc9016", "text": "Intuitive and efficient retrieval of motion capture data is essential for effective use of motion capture databases. In this paper, we describe a system that allows the user to retrieve a particular sequence by performing an approximation of the motion with an instrumented puppet. This interface is intuitive because both adults and children have experience playacting with puppets and toys to express particular behaviors or to tell stories with style and emotion. The puppet has 17 degrees of freedom and can therefore represent a variety of motions. We develop a novel similarity metric between puppet and human motion by computing the reconstruction errors of the puppet motion in the latent space of the human motion and those of the human motion in the latent space of the puppet motion. This metric works even for relatively large databases. We conducted a user study of the system and subjects could find the desired motion with reasonable accuracy from a database consisting of everyday, exercise, and acrobatic behaviors.", "title": "" }, { "docid": "59aa4318fa39c1d6ec086af7041148b2", "text": "Two of the most important outcomes of learning analytics are predicting students’ learning and providing effective feedback. Learning Management Systems (LMS), which are widely used to support online and face-to-face learning, provide extensive research opportunities with detailed records of background data regarding users’ behaviors. The purpose of this study was to investigate the effects of undergraduate students’ LMS learning behaviors on their academic achievements. In line with this purpose, the participating students’ online learning behaviors in LMS were examined by using learning analytics for 14 weeks, and the relationship between students’ behaviors and their academic achievements was analyzed, followed by an analysis of their views about the influence of LMS on their academic achievement. 
The present study, in which quantitative and qualitative data were collected, was carried out with the explanatory mixed method. A total of 71 undergraduate students participated in the study. The results revealed that the students used LMSs as a support to face-to-face education more intensively on course days (at the beginning of the related lessons and at nights on course days) and that they activated the content elements the most. Lastly, almost all the students agreed that LMSs helped increase their academic achievement only when LMSs included such features as effectiveness, interaction, reinforcement, attractive design, social media support, and accessibility.", "title": "" }, { "docid": "3c8cc4192ee6ddd126e53c8ab242f396", "text": "There are several approaches for automated functional web testing and the choice among them depends on a number of factors, including the tools used for web testing and the costs associated with their adoption. In this paper, we present an empirical cost/benefit analysis of two different categories of automated functional web testing approaches: (1) capture-replay web testing (in particular, using Selenium IDE); and, (2) programmable web testing (using Selenium WebDriver). On a set of six web applications, we evaluated the costs of applying these testing approaches both when developing the initial test suites from scratch and when the test suites are maintained, upon the release of a new software version. Results indicate that, on the one hand, the development of the test suites is more expensive in terms of time required (between 32% and 112%) when the programmable web testing approach is adopted, but on the other hand, test suite maintenance is less expensive when this approach is used (with a saving between 16% and 51%). We found that, in the majority of the cases, after a small number of releases (from one to three), the cumulative cost of programmable web testing becomes lower than the cost involved with capture-replay web testing and the cost saving gets amplified over the successive releases.", "title": "" }, { "docid": "7f65d625ca8f637a6e2e9cb7006d1778", "text": "Recent work in machine learning for information extraction has focused on two distinct sub-problems: the conventional problem of filling template slots from natural language text, and the problem of wrapper induction, learning simple extraction procedures (“wrappers”) for highly structured text such as Web pages produced by CGI scripts. For suitably regular domains, existing wrapper induction algorithms can efficiently learn wrappers that are simple and highly accurate, but the regularity bias of these algorithms makes them unsuitable for most conventional information extraction tasks. Boosting is a technique for improving the performance of a simple machine learning algorithm by repeatedly applying it to the training set with different example weightings. We describe an algorithm that learns simple, low-coverage wrapper-like extraction patterns, which we then apply to conventional information extraction problems using boosting. The result is BWI, a trainable information extraction system with a strong precision bias and F1 performance better than state-of-the-art techniques in many domains.", "title": "" }, { "docid": "78982bfdcf476081bd708c8aa2e5c5bd", "text": "Simultaneous Localization And Mapping (SLAM) is a fundamental problem in mobile robotics. While sparse point-based SLAM methods provide accurate camera localization, the generated maps lack semantic information. 
On the other hand, state of the art object detection methods provide rich information about entities present in the scene from a single image. This work incorporates a real-time deep-learned object detector to the monocular SLAM framework for representing generic objects as quadrics that permit detections to be seamlessly integrated while allowing the real-time performance. Finer reconstruction of an object, learned by a CNN network, is also incorporated and provides a shape prior for the quadric leading further refinement. To capture the dominant structure of the scene, additional planar landmarks are detected by a CNN-based plane detector and modelled as landmarks in the map. Experiments show that the introduced plane and object landmarks and the associated constraints, using the proposed monocular plane detector and incorporated object detector, significantly improve camera localization and lead to a richer semantically more meaningful map.", "title": "" }, { "docid": "cefcd78be7922f4349f1bb3aa59d2e1d", "text": "The paper presents performance analysis of modified SEPIC dc-dc converter with low input voltage and wide output voltage range. The operational analysis and the design is done for the 380W power output of the modified converter. The simulation results of modified SEPIC converter are obtained with PI controller for the output voltage. The results obtained with the modified converter are compared with the basic SEPIC converter topology for the rise time, peak time, settling time and steady state error of the output response for open loop. Voltage tracking curve is also shown for wide output voltage range. I. Introduction Dc-dc converters are widely used in regulated switched mode dc power supplies and in dc motor drive applications. The input to these converters is often an unregulated dc voltage, which is obtained by rectifying the line voltage and it will therefore fluctuate due to variations of the line voltages. Switched mode dc-dc converters are used to convert this unregulated dc input into a controlled dc output at a desired voltage level. The recent growth of battery powered applications and low voltage storage elements are increasing the demand of efficient step-up dc–dc converters. Typical applications are in adjustable speed drives, switch-mode power supplies, uninterrupted power supplies, and utility interface with nonconventional energy sources, battery energy storage systems, battery charging for electric vehicles, and power supplies for telecommunication systems etc.. These applications demand high step-up static gain, high efficiency and reduced weight, volume and cost. The step-up stage normally is the critical point for the design of high efficiency converters due to the operation with high input current and high output voltage [1]. The boost converter topology is highly effective in these applications but at low line voltage in boost converter, the switching losses are high because the input current has the maximum value and the highest step-up conversion is required. The inductor has to be oversized for the large current at low line input. As a result, a boost converter designed for universal-input applications is heavily oversized compared to a converter designed for a narrow range of input ac line voltage [2]. However, recently new non-isolated dc–dc converter topologies with basic boost are proposed, showing that it is possible to obtain high static gain, low voltage stress and low losses, improving the performance with respect to the classical topologies. 
Some single stage high power factor rectifiers are presented in [3-6]. A new …", "title": "" }, { "docid": "33c06f0ee7d3beb0273a47790f2a84cd", "text": "This study presents the clinical results of a surgical technique that expands a narrow ridge when its orofacial width precludes the placement of dental implants. In 170 people, 329 implants were placed in sites needing ridge enlargement using the endentulous ridge expansion procedure. This technique involves a partial-thickness flap, crestal and vertical intraosseous incisions into the ridge, and buccal displacement of the buccal cortical plate, including a portion of the underiying spongiosa. Implants were placed in the expanded ridge and allowed to heal for 4 to 5 months. When indicated, the implants were exposed during a second-stage surgery to allow visualization of the implant site. Occlusal loading was applied during the following 3 to 5 months by provisional prostheses. The final phase was the placement of the permanent prostheses. The results yielded a success rate of 98.8%.", "title": "" }, { "docid": "e546f81fbdc57765956c22d94c9f54ac", "text": "Internet technology is revolutionizing education. Teachers are developing massive open online courses (MOOCs) and using innovative practices such as flipped learning in which students watch lectures at home and engage in hands-on, problem solving activities in class. This work seeks to explore the design space afforded by these novel educational paradigms and to develop technology for improving student learning. Our design, based on the technique of adaptive content review, monitors student attention during educational presentations and determines which lecture topic students might benefit the most from reviewing. An evaluation of our technology within the context of an online art history lesson demonstrated that adaptively reviewing lesson content improved student recall abilities 29% over a baseline system and was able to match recall gains achieved by a full lesson review in less time. Our findings offer guidelines for a novel design space in dynamic educational technology that might support both teachers and online tutoring systems.", "title": "" }, { "docid": "76c42d10b008bdcbfd90d6eb238280c9", "text": "In this paper a review of architectures suitable for nonlinear real-time audio signal processing is presented. The computational and structural complexity of neural networks (NNs) represent in fact, the main drawbacks that can hinder many practical NNs multimedia applications. In particular e,cient neural architectures and their learning algorithm for real-time on-line audio processing are discussed. Moreover, applications in the -elds of (1) audio signal recovery, (2) speech quality enhancement, (3) nonlinear transducer linearization, (4) learning based pseudo-physical sound synthesis, are brie1y presented and discussed. c © 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "b31ce7aa527336d10a5ddb2540e9c61c", "text": "OBJECTIVE\nOptimal mental health care is dependent upon sensitive and early detection of mental health problems. We have introduced a state-of-the-art method for the current study for remote behavioral monitoring that transports assessment out of the clinic and into the environments in which individuals negotiate their daily lives. The objective of this study was to examine whether the information captured with multimodal smartphone sensors can serve as behavioral markers for one's mental health. 
We hypothesized that (a) unobtrusively collected smartphone sensor data would be associated with individuals' daily levels of stress, and (b) sensor data would be associated with changes in depression, stress, and subjective loneliness over time.\n\n\nMETHOD\nA total of 47 young adults (age range: 19-30 years) were recruited for the study. Individuals were enrolled as a single cohort and participated in the study over a 10-week period. Participants were provided with smartphones embedded with a range of sensors and software that enabled continuous tracking of their geospatial activity (using the Global Positioning System and wireless fidelity), kinesthetic activity (using multiaxial accelerometers), sleep duration (modeled using device-usage data, accelerometer inferences, ambient sound features, and ambient light levels), and time spent proximal to human speech (i.e., speech duration using microphone and speech detection algorithms). Participants completed daily ratings of stress, as well as pre- and postmeasures of depression (Patient Health Questionnaire-9; Spitzer, Kroenke, & Williams, 1999), stress (Perceived Stress Scale; Cohen et al., 1983), and loneliness (Revised UCLA Loneliness Scale; Russell, Peplau, & Cutrona, 1980).\n\n\nRESULTS\nMixed-effects linear modeling showed that sensor-derived geospatial activity (p < .05), sleep duration (p < .05), and variability in geospatial activity (p < .05), were associated with daily stress levels. Penalized functional regression showed associations between changes in depression and sensor-derived speech duration (p < .05), geospatial activity (p < .05), and sleep duration (p < .05). Changes in loneliness were associated with sensor-derived kinesthetic activity (p < .01).\n\n\nCONCLUSIONS AND IMPLICATIONS FOR PRACTICE\nSmartphones can be harnessed as instruments for unobtrusive monitoring of several behavioral indicators of mental health. Creative leveraging of smartphone sensing could provide novel opportunities for close-to-invisible psychiatric assessment at a scale and efficiency that far exceeds what is currently feasible with existing assessment technologies.", "title": "" }, { "docid": "94f94af75b17c0b4a2ad59908e07e462", "text": "Metric learning has the aim to improve classification accuracy by learning a distance measure which brings data points from the same class closer together and pushes data points from different classes further apart. Recent research has demonstrated that metric learning approaches can also be applied to trees, such as molecular structures, abstract syntax trees of computer programs, or syntax trees of natural language, by learning the cost function of an edit distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree. However, learning such costs directly may yield an edit distance which violates metric axioms, is challenging to interpret, and may not generalize well. In this contribution, we propose a novel metric learning approach for trees which we call embedding edit distance learning (BEDL) and which learns an edit distance indirectly by embedding the tree nodes as vectors, such that the Euclidean distance between those vectors supports class discrimination. We learn such embeddings by reducing the distance to prototypical trees from the same class and increasing the distance to prototypical trees from different classes. 
In our experiments, we show that BEDL improves upon the state-of-the-art in metric learning for trees on six benchmark data sets, ranging from computer science over biomedical data to a natural-language processing data set containing over 300,000 nodes.", "title": "" }, { "docid": "5547f8ad138a724c2cc05ce65f50ebd2", "text": "As machine learning (ML) technology continues to spread through rapid evolution, systems and services that use ML technology, called ML products, make a big impact on our lives, society, and economy. Meanwhile, Quality Assurance (QA) for ML products is considerably more difficult than for hardware, non-ML software, and services, because the performance of ML technology is much better than that of non-ML technology in exchange for the characteristics of ML products, e.g., low explainability. We must keep up the rapid evolution and reduce the quality risk of ML products simultaneously. In this paper, we present a Quality Assurance Framework for ML products. The scope of QA in this paper is limited to product evaluation. First, a policy of QA for ML products is proposed. General principles of product evaluation are introduced and applied to ML product evaluation as part of the policy. They are composed of A-ARAI: Allowability, Achievability, Robustness, Avoidability, and Improvability. A strategy of ML product evaluation is constructed as another part of the policy. A Quality Integrity Level for ML products is also modelled. Second, we propose a test architecture for ML product testing. It consists of test levels and fundamental test types of ML product testing, including snapshot testing, learning testing, and confrontation testing. Finally, we define QA activity levels for ML products.", "title": "" } ]
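The snapshot-testing idea named in the preceding passage can be read as checking that a trained model's predictions on a fixed input set still match stored reference outputs within a tolerance. The sketch below is a minimal illustration under that reading; the function name, toy model, and tolerance are assumptions rather than details from the paper.

```python
# Minimal sketch of a snapshot test for an ML product (assumed reading of the
# passage's "snapshot testing"; names, toy model, and tolerance are illustrative).
def snapshot_test(model, inputs, reference_outputs, tolerance=1e-6):
    """Return True if current predictions still match the stored snapshot."""
    for x, expected in zip(inputs, reference_outputs):
        if abs(model(x) - expected) > tolerance:
            return False
    return True

# Toy usage: a "model" that doubles its input, snapshotted earlier.
model = lambda x: 2.0 * x
inputs = [0.0, 1.5, -3.0]
snapshot = [0.0, 3.0, -6.0]
assert snapshot_test(model, inputs, snapshot)
```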
scidocsrr
794d168e82a8e468067707d0e2c62f40
Signed networks in social media
[ { "docid": "31a1a5ce4c9a8bc09cbecb396164ceb4", "text": "In trying out this hypothesis we shall understand by attitude the positive or negative relationship of a person p to another person o or to an impersonal entity x which may be a situation, an event, an idea, or a thing, etc. Examples are: to like, to love, to esteem, to value, and their opposites. A positive relation of this kind will be written L, a negative one ~L. Thus, pLo means p likes, loves, or values o, or, expressed differently, o is positive for p.", "title": "" } ]
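The L and ~L relations defined above map directly onto positive and negative edges of a signed network, and structural balance theory treats a triad as balanced when the product of its three edge signs is positive. The sketch below illustrates that check; the data structures, names, and example edges are assumptions for illustration, not taken from the passage.

```python
# Illustrative sketch: count balanced triads in a small signed network.
# Edge representation and function names are assumptions, not from the passage.
from itertools import combinations

def triad_is_balanced(signs, a, b, c):
    """signs maps frozenset({x, y}) to +1 (L, positive) or -1 (~L, negative)."""
    product = (signs[frozenset((a, b))]
               * signs[frozenset((b, c))]
               * signs[frozenset((a, c))])
    return product > 0

def count_balanced_triads(nodes, signs):
    balanced = 0
    for a, b, c in combinations(nodes, 3):
        # Only consider triads whose three edges are all present.
        if all(frozenset(pair) in signs for pair in ((a, b), (b, c), (a, c))):
            balanced += triad_is_balanced(signs, a, b, c)
    return balanced

# Example: p likes o (pLo), o dislikes x, p dislikes x -> a balanced triad.
edges = {
    frozenset(("p", "o")): +1,
    frozenset(("o", "x")): -1,
    frozenset(("p", "x")): -1,
}
print(count_balanced_triads(["p", "o", "x"], edges))  # prints 1
```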
[ { "docid": "4d4219d8e4fd1aa86724f3561aea414b", "text": "Trajectory search has long been an attractive and challenging topic which blooms various interesting applications in spatial-temporal databases. In this work, we study a new problem of searching trajectories by locations, in which context the query is only a small set of locations with or without an order specified, while the target is to find the k Best-Connected Trajectories (k-BCT) from a database such that the k-BCT best connect the designated locations geographically. Different from the conventional trajectory search that looks for similar trajectories w.r.t. shape or other criteria by using a sample query trajectory, we focus on the goodness of connection provided by a trajectory to the specified query locations. This new query can benefit users in many novel applications such as trip planning.\n In our work, we firstly define a new similarity function for measuring how well a trajectory connects the query locations, with both spatial distance and order constraint being considered. Upon the observation that the number of query locations is normally small (e.g. 10 or less) since it is impractical for a user to input too many locations, we analyze the feasibility of using a general-purpose spatial index to achieve efficient k-BCT search, based on a simple Incremental k-NN based Algorithm (IKNN). The IKNN effectively prunes and refines trajectories by using the devised lower bound and upper bound of similarity. Our contributions mainly lie in adapting the best-first and depth-first k-NN algorithms to the basic IKNN properly, and more importantly ensuring the efficiency in both search effort and memory usage. An in-depth study on the adaption and its efficiency is provided. Further optimization is also presented to accelerate the IKNN algorithm. Finally, we verify the efficiency of the algorithm by extensive experiments.", "title": "" }, { "docid": "a65d1881f5869f35844064d38b684ac8", "text": "Skilled artists, using traditional media or modern computer painting tools, can create a variety of expressive styles that are very appealing in still images, but have been unsuitable for animation. The key difficulty is that existing techniques lack adequate temporal coherence to animate these styles effectively. Here we augment the range of practical animation styles by extending the guided texture synthesis method of Image Analogies [Hertzmann et al. 2001] to create temporally coherent animation sequences. To make the method art directable, we allow artists to paint portions of keyframes that are used as constraints. The in-betweens calculated by our method maintain stylistic continuity and yet change no more than necessary over time.", "title": "" }, { "docid": "350f7694198d1b2c0a2c8cc1b75fc3c2", "text": "We present a methodology, called fast repetition rate (FRR) fluorescence, that measures the functional absorption cross-section (sigmaPS II) of Photosystem II (PS II), energy transfer between PS II units (p), photochemical and nonphotochemical quenching of chlorophyll fluorescence, and the kinetics of electron transfer on the acceptor side of PS II. The FRR fluorescence technique applies a sequence of subsaturating excitation pulses ('flashlets') at microsecond intervals to induce fluorescence transients. This approach is extremely flexible and allows the generation of both single-turnover (ST) and multiple-turnover (MT) flashes. 
Using a combination of ST and MT flashes, we investigated the effect of excitation protocols on the measured fluorescence parameters. The maximum fluorescence yield induced by an ST flash applied shortly (10 &mgr;s to 5 ms) following an MT flash increased to a level comparable to that of an MT flash, while the functional absorption cross-section decreased by about 40%. We interpret this phenomenon as evidence that an MT flash induces an increase in the fluorescence-rate constant, concomitant with a decrease in the photosynthetic-rate constant in PS II reaction centers. The simultaneous measurements of sigmaPS II, p, and the kinetics of Q-A reoxidation, which can be derived only from a combination of ST and MT flash fluorescence transients, permits robust characterization of the processes of photosynthetic energy-conversion.", "title": "" }, { "docid": "2f83b2ef8f71c56069304b0962074edc", "text": "Abstract: Printed antennas are becoming one of the most popular designs in personal wireless communications systems. In this paper, the design of a novel tapered meander line antenna is presented. The design analysis and characterization of the antenna is performed using the finite difference time domain technique and experimental verifications are performed to ensure the effectiveness of the numerical model. The new design features an operating frequency of 2.55 GHz with a 230 MHz bandwidth, which supports future generations of mobile communication systems.", "title": "" }, { "docid": "5d851687f9a69db7419ff054623f03d8", "text": "Attention mechanisms are a design trend of deep neural networks that stands out in various computer vision tasks. Recently, some works have attempted to apply attention mechanisms to single image super-resolution (SR) tasks. However, they apply the mechanisms to SR in the same or similar ways used for high-level computer vision problems without much consideration of the different nature between SR and other problems. In this paper, we propose a new attention method, which is composed of new channelwise and spatial attention mechanisms optimized for SR and a new fused attention to combine them. Based on this, we propose a new residual attention module (RAM) and a SR network using RAM (SRRAM). We provide in-depth experimental analysis of different attention mechanisms in SR. It is shown that the proposed method can construct both deep and lightweight SR networks showing improved performance in comparison to existing state-of-the-art methods.", "title": "" }, { "docid": "8eb96feea999ce77f2b56b7941af2587", "text": "The term cyber security is often used interchangeably with the term information security. This paper argues that, although there is a substantial overlap between cyber security and information security, these two concepts are not totally analogous. Moreover, the paper posits that cyber security goes beyond the boundaries of traditional information security to include not only the protection of information resources, but also that of other assets, including the person him/herself. In information security, reference to the human factor usually relates to the role(s) of humans in the security process. In cyber security this factor has an additional dimension, namely, the humans as potential targets of cyber attacks or even unknowingly participating in a cyber attack. This additional dimension has ethical implications for society as a whole, since the protection of certain vulnerable groups, for example children, could be seen as a societal responsibility. 
a 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0cd1f01d1b2a5afd8c6eba13ef5082fa", "text": "Automatic differentiation—the mechanical transformation of numeric computer programs to calculate derivatives efficiently and accurately—dates to the origin of the computer age. Reverse mode automatic differentiation both antedates and generalizes the method of backwards propagation of errors used in machine learning. Despite this, practitioners in a variety of fields, including machine learning, have been little influenced by automatic differentiation, and make scant use of available tools. Here we review the technique of automatic differentiation, describe its two main modes, and explain how it can benefit machine learning practitioners. To reach the widest possible audience our treatment assumes only elementary differential calculus, and does not assume any knowledge of linear algebra.", "title": "" }, { "docid": "1c4e1feed1509e0a003dca23ad3a902c", "text": "With an expansive and ubiquitously available gold mine of educational data, Massive Open Online courses (MOOCs) have become the an important foci of learning analytics research. The hope is that this new surge of development will bring the vision of equitable access to lifelong learning opportunities within practical reach. MOOCs offer many valuable learning experiences to students, from video lectures, readings, assignments and exams, to opportunities to connect and collaborate with others through threaded discussion forums and other Web 2.0 technologies. Nevertheless, despite all this potential, MOOCs have so far failed to produce evidence that this potential is being realized in the current instantiation of MOOCs. In this work, we primarily explore video lecture interaction in Massive Open Online Courses (MOOCs), which is central to student learning experience on these educational platforms. As a research contribution, we operationalize video lecture clickstreams of students into behavioral actions, and construct a quantitative information processing index, that can aid instructors to better understand MOOC hurdles and reason about unsatisfactory learning outcomes. Our results illuminate the effectiveness of developing such a metric inspired by cognitive psychology, towards answering critical questions regarding students’ engagement, their future click interactions and participation trajectories that lead to in-video dropouts. We leverage recurring click behaviors to differentiate distinct video watching profiles for students in MOOCs. Additionally, we discuss about prediction of complete course dropouts, incorporating diverse perspectives from statistics and machine learning, to offer a more nuanced view into how the second generation of MOOCs be benefited, if course instructors were to better comprehend factors that lead to student attrition. Implications for research and practice are discussed.", "title": "" }, { "docid": "0d30cfe8755f146ded936aab55cb80d3", "text": "In this study, we investigated a pattern-recognition technique based on an artificial neural network (ANN), which is called a massive training artificial neural network (MTANN), for reduction of false positives in computerized detection of lung nodules in low-dose computed tomography (CT) images. The MTANN consists of a modified multilayer ANN, which is capable of operating on image data directly. 
The MTANN is trained by use of a large number of subregions extracted from input images together with the teacher images containing the distribution for the \"likelihood of being a nodule.\" The output image is obtained by scanning an input image with the MTANN. The distinction between a nodule and a non-nodule is made by use of a score which is defined from the output image of the trained MTANN. In order to eliminate various types of non-nodules, we extended the capability of a single MTANN, and developed a multiple MTANN (Multi-MTANN). The Multi-MTANN consists of plural MTANNs that are arranged in parallel. Each MTANN is trained by using the same nodules, but with a different type of non-nodule. Each MTANN acts as an expert for a specific type of non-nodule, e.g., five different MTANNs were trained to distinguish nodules from various-sized vessels; four other MTANNs were applied to eliminate some other opacities. The outputs of the MTANNs were combined by using the logical AND operation such that each of the trained MTANNs eliminated none of the nodules, but removed the specific type of non-nodule with which the MTANN was trained, and thus removed various types of non-nodules. The Multi-MTANN consisting of nine MTANNs was trained with 10 typical nodules and 10 non-nodules representing each of nine different non-nodule types (90 training non-nodules overall) in a training set. The trained Multi-MTANN was applied to the reduction of false positives reported by our current computerized scheme for lung nodule detection based on a database of 63 low-dose CT scans (1765 sections), which contained 71 confirmed nodules including 66 biopsy-confirmed primary cancers, from a lung cancer screening program. The Multi-MTANN was applied to 58 true positives (nodules from 54 patients) and 1726 false positives (non-nodules) reported by our current scheme in a validation test; these were different from the training set. The results indicated that 83% (1424/1726) of non-nodules were removed with a reduction of one true positive (nodule), i.e., a classification sensitivity of 98.3% (57 of 58 nodules). By using the Multi-MTANN, the false-positive rate of our current scheme was improved from 0.98 to 0.18 false positives per section (from 27.4 to 4.8 per patient) at an overall sensitivity of 80.3% (57/71).", "title": "" }, { "docid": "e4c33ca67526cb083cae1543e5564127", "text": "Given e-commerce scenarios that user profiles are invisible, session-based recommendation is proposed to generate recommendation results from short sessions. Previous work only considers the user's sequential behavior in the current session, whereas the user's main purpose in the current session is not emphasized. In this paper, we propose a novel neural networks framework, i.e., Neural Attentive Recommendation Machine (NARM), to tackle this problem. Specifically, we explore a hybrid encoder with an attention mechanism to model the user's sequential behavior and capture the user's main purpose in the current session, which are combined as a unified session representation later. We then compute the recommendation scores for each candidate item with a bi-linear matching scheme based on this unified session representation. We train NARM by jointly learning the item and session representations as well as their matchings. We carried out extensive experiments on two benchmark datasets. Our experimental results show that NARM outperforms state-of-the-art baselines on both datasets. 
Furthermore, we also find that NARM achieves a significant improvement on long sessions, which demonstrates its advantages in modeling the user's sequential behavior and main purpose simultaneously.", "title": "" }, { "docid": "9464f2e308b5c8ab1f2fac1c008042c0", "text": "Data governance has become a significant approach that drives decision making in public organisations. Thus, the loss of data governance is a concern to decision makers, acting as a barrier to achieving their business plans in many countries and also influencing both operational and strategic decisions. The adoption of cloud computing is a recent trend in public sector organisations, that are looking to move their data into the cloud environment. The literature shows that data governance is one of the main concerns of decision makers who are considering adopting cloud computing; it also shows that data governance in general and for cloud computing in particular is still being researched and requires more attention from researchers. However, in the absence of a cloud data governance framework, this paper seeks to develop a conceptual framework for cloud data governance-driven decision making in the public sector.", "title": "" }, { "docid": "96af2e34acf9f1e9c0c57cc24795d0f9", "text": "Poker games provide a useful testbed for modern Artificial Intelligence techniques. Unlike many classical game domains such as chess and checkers, poker includes elements of imperfect information, stochastic events, and one or more adversarial agents to interact with. Furthermore, in poker it is possible to win or lose by varying degrees. Therefore, it can be advantageous to adapt ones’ strategy to exploit a weak opponent. A poker agent must address these challenges, acting in uncertain environments and exploiting other agents, in order to be highly successful. Arguably, poker games more closely resemble many real world problems than games with perfect information. In this brief paper, we outline Polaris, a Texas Hold’em poker program. Polaris recently defeated top human professionals at the Man vs. Machine Poker Championship and it is currently the reigning AAAI Computer Poker Competition winner in the limit equilibrium and no-limit events.", "title": "" }, { "docid": "fcbb5b1adf14b443ef0d4a6f939140fe", "text": "In this paper we make the case for IoT edge offloading, which strives to exploit the resources on edge computing devices by offloading fine-grained computation tasks from the cloud closer to the users and data generators (i.e., IoT devices). The key motive is to enhance performance, security and privacy for IoT services. Our proposal bridges the gap between cloud computing and IoT by applying a divide and conquer approach over the multi-level (cloud, edge and IoT) information pipeline. To validate the design of IoT edge offloading, we developed a unikernel-based prototype and evaluated the system under various hardware and network conditions. Our experimentation has shown promising results and revealed the limitation of existing IoT hardware and virtualization platforms, shedding light on future research of edge computing and IoT.", "title": "" }, { "docid": "11a1c92620d58100194b735bfc18c695", "text": "Stabilization by static output feedback (SOF) is a long-standing open problem in control: given an n by n matrix A and rectangular matrices B and C, find a p by q matrix K such that A + BKC is stable. 
Low-order controller design is a practically important problem that can be cast in the same framework, with (p+k)(q+k) design parameters instead of pq, where k is the order of the controller, and k << n. Robust stabilization further demands stability in the presence of perturbation and satisfactory transient as well as asymptotic system response. We formulate two related nonsmooth, nonconvex optimization problems over K, respectively with the following objectives: minimization of the -pseudospectral abscissa of A+BKC, for a fixed ≥ 0, and maximization of the complex stability radius of A + BKC. Finding global optimizers of these functions is hard, so we use a recently developed gradient sampling method that approximates local optimizers. For modest-sized systems, local optimization can be carried out from a large number of starting points with no difficulty. The best local optimizers may then be investigated as candidate solutions to the static output feedback or low-order controller design problem. We show results for two problems published in the control literature. The first is a turbo-generator example that allows us to show how different choices of the optimization objective lead to stabilization with qualitatively different properties, conveniently visualized by pseudospectral plots. The second is a well known model of a Boeing 767 aircraft at a flutter condition. For this problem, we are not aware of any SOF stabilizing K published in the literature. Our method was not only able to find an SOF stabilizing K, but also to locally optimize the complex stability radius of A + BKC. We also found locally optimizing order–1 and order–2 controllers for this problem. All optimizers are visualized using pseudospectral plots.", "title": "" }, { "docid": "080e7880623a09494652fd578802c156", "text": "Whole-cell biosensors are a good alternative to enzyme-based biosensors since they offer the benefits of low cost and improved stability. In recent years, live cells have been employed as biosensors for a wide range of targets. In this review, we will focus on the use of microorganisms that are genetically modified with the desirable outputs in order to improve the biosensor performance. Different methodologies based on genetic/protein engineering and synthetic biology to construct microorganisms with the required signal outputs, sensitivity, and selectivity will be discussed.", "title": "" }, { "docid": "8724a0d439736a419835c1527f01fe43", "text": "Shuffled frog-leaping algorithm (SFLA) is a new memetic meta-heuristic algorithm with efficient mathematical function and global search capability. Traveling salesman problem (TSP) is a complex combinatorial optimization problem, which is typically used as benchmark for testing the effectiveness as well as the efficiency of a newly proposed optimization algorithm. When applying the shuffled frog-leaping algorithm in TSP, memeplex and submemeplex are built and the evolution of the algorithm, especially the local exploration in submemeplex is carefully adapted based on the prototype SFLA. Experimental results show that the shuffled frog leaping algorithm is efficient for small-scale TSP. Particularly for TSP with 51 cities, the algorithm manages to find six tours which are shorter than the optimal tour provided by TSPLIB. 
The shortest tour length is 428.87 instead of 429.98 which can be found cited elsewhere.", "title": "" }, { "docid": "827396df94e0bca08cee7e4d673044ef", "text": "Localization in Wireless Sensor Networks (WSNs) is regarded as an emerging technology for numerous cyberphysical system applications, which equips wireless sensors with the capability to report data that is geographically meaningful for location based services and applications. However, due to the increasingly pervasive existence of smart sensors in WSN, a single localization technique that affects the overall performance is not sufficient for all applications. Thus, there have been many significant advances on localization techniques in WSNs in the past few years. The main goal in this paper is to present the state-of-the-art research results and approaches proposed for localization in WSNs. Specifically, we present the recent advances on localization techniques in WSNs by considering a wide variety of factors and categorizing them in terms of data processing (centralized vs. distributed), transmission range (range free vs. range based), mobility (static vs. mobile), operating environments (indoor vs. outdoor), node density (sparse vs dense), routing, algorithms, etc. The recent localization techniques in WSNs are also summarized in the form of tables. With this paper, readers can have a more thorough understanding of localization in sensor networks, as well as research trends and future research directions in this area.", "title": "" }, { "docid": "fb7fc0398c951a584726a31ae307c53c", "text": "In this paper, we use a advanced method called Faster R-CNN to detect traffic signs. This new method represents the highest level in object recognition, which don't need to extract image feature manually anymore and can segment image to get candidate region proposals automatically. Our experiment is based on a traffic sign detection competition in 2016 by CCF and UISEE company. The mAP(mean average precision) value of the result is 0.3449 that means Faster R-CNN can indeed be applied in this field. Even though the experiment did not achieve the best results, we explore a new method in the area of the traffic signs detection. We believe that we can get a better achievement in the future.", "title": "" }, { "docid": "45885c7c86a05d2ba3979b689f7ce5c8", "text": "Existing Markov Chain Monte Carlo (MCMC) methods are either based on generalpurpose and domain-agnostic schemes, which can lead to slow convergence, or problem-specific proposals hand-crafted by an expert. In this paper, we propose ANICE-MC, a novel method to automatically design efficient Markov chain kernels tailored for a specific domain. First, we propose an efficient likelihood-free adversarial training method to train a Markov chain and mimic a given data distribution. Then, we leverage flexible volume preserving flows to obtain parametric kernels for MCMC. Using a bootstrap approach, we show how to train efficient Markov chains to sample from a prescribed posterior distribution by iteratively improving the quality of both the model and the samples. 
Empirical results demonstrate that A-NICE-MC combines the strong guarantees of MCMC with the expressiveness of deep neural networks, and is able to significantly outperform competing methods such as Hamiltonian Monte Carlo.", "title": "" }, { "docid": "190ec7d12156c298e8a545a5655df969", "text": "The Linked Movie Database (LinkedMDB) project provides a demonstration of the first open linked dataset connecting several major existing (and highly popular) movie web resources. The database exposed by LinkedMDB contains millions of RDF triples with hundreds of thousands of RDF links to existing web data sources that are part of the growing Linking Open Data cloud, as well as to popular movierelated web pages such as IMDb. LinkedMDB uses a novel way of creating and maintaining large quantities of high quality links by employing state-of-the-art approximate join techniques for finding links, and providing additional RDF metadata about the quality of the links and the techniques used for deriving them.", "title": "" } ]
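The LinkedMDB passage above describes approximate join techniques for discovering links between movie resources, together with RDF metadata about link quality. A minimal way to picture that is fuzzy title matching that emits owl:sameAs-style triples annotated with a match score; the similarity measure, threshold, predicate, and toy records below are assumptions, not details from the paper.

```python
# Illustrative sketch of approximate-join link discovery between two sources.
# Threshold, predicate name, and toy data are assumptions.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_titles(source_a, source_b, threshold=0.9):
    """Yield (uri_a, 'owl:sameAs', uri_b, score) for approximate title matches."""
    for uri_a, title_a in source_a.items():
        for uri_b, title_b in source_b.items():
            score = similarity(title_a, title_b)
            if score >= threshold:
                # Keep the score as link metadata, mirroring the idea of
                # publishing RDF metadata about link quality.
                yield (uri_a, "owl:sameAs", uri_b, round(score, 3))

movies_a = {"lmdb:film/1": "Blade Runner (1982)"}
movies_b = {"imdb:tt0083658": "Blade Runner 1982"}
print(list(link_titles(movies_a, movies_b, threshold=0.85)))
```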
scidocsrr
abbd4694897bb5c4fd5866f00de2d593
Aesthetics and credibility in web site design
[ { "docid": "e7c8abf3387ba74ca0a6a2da81a26bc4", "text": "An experiment was conducted to test the relationships between users' perceptions of a computerized system's beauty and usability. The experiment used a computerized application as a surrogate for an Automated Teller Machine (ATM). Perceptions were elicited before and after the participants used the system. Pre-experimental measures indicate strong correlations between system's perceived aesthetics and perceived usability. Post-experimental measures indicated that the strong correlation remained intact. A multivariate analysis of covariance revealed that the degree of system's aesthetics affected the post-use perceptions of both aesthetics and usability, whereas the degree of actual usability had no such effect. The results resemble those found by social psychologists regarding the effect of physical attractiveness on the valuation of other personality attributes. The findings stress the importance of studying the aesthetic aspect of human-computer interaction (HCI) design and its relationships to other design dimensions. © 2000 Elsevier Science B.V. All rights reserved.", "title": "" } ]
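The strong association reported above between perceived aesthetics and perceived usability is the kind of relationship a Pearson correlation over paired ratings quantifies. The sketch below shows that computation on made-up ratings; it does not use the study's data.

```python
# Illustrative sketch: Pearson correlation between paired perception ratings.
# The ratings are invented for demonstration only.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

perceived_aesthetics = [6, 4, 7, 3, 5, 6, 2]   # hypothetical 1-7 ratings
perceived_usability  = [6, 5, 7, 2, 5, 5, 3]
print(round(pearson(perceived_aesthetics, perceived_usability), 2))  # ~0.89 here
```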
[ { "docid": "36a615660b8f0c60bef06b5a57887bd1", "text": "Quantum cryptography is an emerging technology in which two parties can secure network communications by applying the phenomena of quantum physics. The security of these transmissions is based on the inviolability of the laws of quantum mechanics. Quantum cryptography was born in the early seventies when Steven Wiesner wrote \"Conjugate Coding\", which took more than ten years to end this paper. The quantum cryptography relies on two important elements of quantum mechanics - the Heisenberg Uncertainty principle and the principle of photon polarization. The Heisenberg Uncertainty principle states that, it is not possible to measure the quantum state of any system without distributing that system. The principle of photon polarization states that, an eavesdropper can not copy unknown qubits i.e. unknown quantum states, due to no-cloning theorem which was first presented by Wootters and Zurek in 1982. This research paper concentrates on the theory of  quantum cryptography, and how this technology contributes to the network security. This research paper summarizes the current state of quantum cryptography, and the real–world application implementation of this technology, and finally the future direction in which the quantum cryptography is headed forwards.", "title": "" }, { "docid": "dfa5334f77bba5b1eeb42390fed1bca3", "text": "Personality was studied as a conditioner of the effects of stressful life events on illness onset. Two groups of middle and upper level executives had comparably high degrees of stressful life events in the previous 3 years, as measured by the Holmes and Rahe Schedule of Recent Life Events. One group (n = 86) suffered high stress without falling ill, whereas the other (n = 75) reported becoming sick after their encounter with stressful life events. Illness was measured by the Wyler, Masuda, and Holmes Seriousness of Illness Survey. Discriminant function analysis, run on half of the subjects in each group and cross-validated on the remaining cases, supported the prediction that high stress/low illness executives show, by comparison with high stress/high illness executives, more hardiness, that is, have a stronger commitment to self, an attitude of vigorousness toward the environment, a sense of meaningfulness, and an internal locus of control.", "title": "" }, { "docid": "bf08d673b40109d6d6101947258684fd", "text": "More and more medicinal mushrooms have been widely used as a miraculous herb for health promotion, especially by cancer patients. Here we report screening thirteen mushrooms for anti-cancer cell activities in eleven different cell lines. Of the herbal products tested, we found that the extract of Amauroderma rude exerted the highest activity in killing most of these cancer cell lines. Amauroderma rude is a fungus belonging to the Ganodermataceae family. The Amauroderma genus contains approximately 30 species widespread throughout the tropical areas. Since the biological function of Amauroderma rude is unknown, we examined its anti-cancer effect on breast carcinoma cell lines. We compared the anti-cancer activity of Amauroderma rude and Ganoderma lucidum, the most well-known medicinal mushrooms with anti-cancer activity and found that Amauroderma rude had significantly higher activity in killing cancer cells than Ganoderma lucidum. 
We then examined the effect of Amauroderma rude on breast cancer cells and found that at low concentrations, Amauroderma rude could inhibit cancer cell survival and induce apoptosis. Treated cancer cells also formed fewer and smaller colonies than the untreated cells. When nude mice bearing tumors were injected with Amauroderma rude extract, the tumors grew at a slower rate than the control. Examination of these tumors revealed extensive cell death, decreased proliferation rate as stained by Ki67, and increased apoptosis as stained by TUNEL. Suppression of c-myc expression appeared to be associated with these effects. Taken together, Amauroderma rude represented a powerful medicinal mushroom with anti-cancer activities.", "title": "" }, { "docid": "f285815e47ea0613fb1ceb9b69aee7df", "text": "Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.", "title": "" }, { "docid": "aa418cfd93eaba0d47084d0b94be69b8", "text": "Single-trial classification of Event-Related Potentials (ERPs) is needed in many real-world brain-computer interface (BCI) applications. However, because of individual differences, the classifier needs to be calibrated by using some labeled subject specific training samples, which may be inconvenient to obtain. In this paper we propose a weighted adaptation regularization (wAR) approach for offline BCI calibration, which uses data from other subjects to reduce the amount of labeled data required in offline single-trial classification of ERPs. Our proposed model explicitly handles class-imbalance problems which are common in many real-world BCI applications. War can improve the classification performance, given the same number of labeled subject-specific training samples, or, equivalently, it can reduce the number of labeled subject-specific training samples, given a desired classification accuracy. To reduce the computational cost of wAR, we also propose a source domain selection (SDS) approach. Our experiments show that wARSDS can achieve comparable performance with wAR but is much less computationally intensive. We expect wARSDS to find broad applications in offline BCI calibration.", "title": "" }, { "docid": "35b82263484452d83519c68a9dfb2778", "text": "S Music and the Moving Image Conference May 27th 29th, 2016 1. 
Loewe Friday, May 27, 2016, 9:30AM – 11:00AM MUSIC EDITING: PROCESS TO PRACTICE—BRIDGING THE VARIOUS PERSPECTIVES IN FILMMAKING AND STORY-TELLING Nancy Allen, Film Music Editor While the technical aspects of music editing and film-making continue to evolve, the fundamental nature of story-telling remains the same. Ideally, the role of the music editor exists at an intersection between the Composer, Director, and Picture Editor, where important creative decisions are made. This privileged position allows the Music Editor to better explore how to tell the story through music and bring the evolving vision of the film into tighter focus. 2. Loewe Friday, May 27, 2016, 11:30 AM – 1:00 PM GREAT EXPECTATIONS? THE CHANGING ROLE OF AUDIOVISUAL INCONGRUENCE IN CONTEMPORARY MULTIMEDIA Dave Ireland, University of Leeds Film-music moments that are perceived to be incongruent, misfitting or inappropriate have often been described as highly memorable. These claims can in part be explained by the separate processing of sonic and visual information that can occur when incongruent combinations subvert expectations of an audiovisual pairing in which the constituent components share a greater number of properties. Drawing upon a sequence from the TV sitcom Modern Family in which images of violent destruction are juxtaposed with performance of tranquil classical music, this paper highlights the increasing prevalence of such uses of audiovisual difference in contemporary multimedia. Indeed, such principles even now underlie a form of Internet meme entitled ‘Whilst I play unfitting music’. Such examples serve to emphasize the evolving functions of incongruence, emphasizing the ways in which such types of audiovisual pairing now also serve as a marker of authorial style and a source of intertextual parody. Drawing upon psychological theories of expectation and ideas from semiotics that facilitate consideration of the potential disjunction between authorial intent and perceiver response, this paper contends that such forms of incongruence should be approached from a psycho-semiotic perspective. Through consideration of the aforementioned examples, it will be demonstrated that this approach allows for: more holistic understanding of evolving expectations and attitudes towards audiovisual incongruence that may shape perceiver response; and a more nuanced mode of analyzing factors that may influence judgments of film-music fit and appropriateness. MUSICAL META-MORPHOSIS: BREAKING THE FOURTH WALL THROUGH DIEGETIC-IZING AND METACAESURA Rebecca Eaton, Texas State University In “The Fantastical Gap,” Stilwell suggests that metadiegetic music—which puts the audience “inside a character’s head”— begets such a strong spectator bond that it becomes “a kind of musical ‘direct address,’ threatening to break the fourth wall that is the screen.” While Stillwell theorizes a breaking of the fourth wall through audience over-identification, in this paper I define two means of film music transgression that potentially unsuture an audience, exposing film qua film: “diegetic-izing” and “metacaesura.” While these postmodern techniques 1) reveal film as a constructed artifact, and 2) thus render the spectator a more, not less, “troublesome viewing subject,” my analyses demonstrate that these breaches of convention still further the narrative aims of their respective films. Both Buhler and Stilwell analyze music that gradually dissolves from non-diegetic to diegetic. 
“Diegeticizing” unexpectedly reveals what was assumed to be nondiegetic as diegetic, subverting Gorbman’s first principle of invisibility. In parodies including Blazing Saddles and Spaceballs, this reflexive uncloaking plays for laughs. The Truman Show and the Hunger Games franchise skewer live soundtrack musicians and timpani—ergo, film music itself—as tools of emotional manipulation or propaganda. “Metacaesura” serves as another means of breaking the fourth wall. Metacaesura arises when non-diegetic music cuts off in media res. While diegeticizing renders film music visible, metacaesura renders it audible (if only in hindsight). In Honda’s “Responsible You,” Pleasantville, and The Truman Show, the dramatic cessation of nondiegetic music compels the audience to acknowledge the constructedness of both film and their own worlds. Partial Bibliography Brown, Tom. Breaking the Fourth Wall: Direct Address in the Cinema. Edinburgh: Edinburgh University Press, 2012. Buhler, James. “Analytical and Interpretive Approaches to Film Music (II): Interpreting Interactions of Music and Film.” In Film Music: An Anthology of Critical Essays, edited by K.J. Donnelly, 39-61. Edinburgh University Press, 2001. Buhler, James, Anahid Kassabian, David Neumeyer, and Robynn Stillwell. “Roundtable on Film Music.” Velvet Light Trap 51 (Spring 2003): 73-91. Buhler, James, Caryl Flinn, and David Neumeyer, eds. Music and Cinema. Hanover: Wesleyan/University Press of New England, 2000. Eaton, Rebecca M. Doran. “Unheard Minimalisms: The Function of the Minimalist Technique in Film Scores.” PhD diss., The University of Texas at Austin, 2008. Gorbman, Claudia. Unheard Melodies: Narrative Film Music. Bloomington: University of Indiana Press, 1987. Harries, Dan. Film Parody. London: British Film Institute, 2000. Kassabian, Anahid. Hearing Film: Tracking Identifications in Contemporary Hollywood Film Music. New York: Routledge, 2001. Neumeyer, David. “Diegetic/nondiegetic: A Theoretical Model.” Music and the Moving Image 2.1 (2009): 26–39. Stilwell, Robynn J. “The Fantastical Gap Between Diegetic and Nondiegetic.” In Beyond the Soundtrack, edited by Daniel Goldmark, Lawrence Kramer, and Richard Leppert, 184202. Berkeley: The University of California Press, 2007. REDEFINING PERSPECTIVE IN ATONEMENT: HOW MUSIC SET THE STAGE FOR MODERN MEDIA CONSUMPTION Lillie McDonough, New York University One of the most striking narrative devices in Joe Wright’s film adaptation of Atonement (2007) is in the way Dario Marianelli’s original score dissolves the boundaries between diagetic and non-diagetic music at key moments in the drama. I argue that these moments carry us into a liminal state where the viewer is simultaneously in the shoes of a first person character in the world of the film and in the shoes of a third person viewer aware of the underscore as a hallmark of the fiction of a film in the first place. This reflects the experience of Briony recalling the story, both as participant and narrator, at the metalevel of the audience. The way the score renegotiates the customary musical playing space creates a meta-narrative that resembles one of the fastest growing forms of digital media of today: videogames. At their core, video games work by placing the player in a liminal state of both a viewer who watches the story unfold and an agent who actively takes part in the story’s creation. 
In fact, the growing trend towards hyperrealism and virtual reality intentionally progressively erodes the boundaries between the first person agent in real the world and agent on screen in the digital world. Viewed through this lens, the philosophy behind the experience of Atonement’s score and sound design appears to set the stage for way our consumption of media has developed since Atonement’s release in 2007. Mainly, it foreshadows and highlights a prevalent desire to progressively blur the lines between media and life. 3. Room 303, Friday, May 27, 2016, 11:30 AM – 1:00 PM HOLLYWOOD ORCHESTRATORS AND GHOSTWRITERS OF THE 1960s AND 1970s: THE CASE OF MOACIR SANTOS Lucas Bonetti, State University of Campinas In Hollywood in the 1960s and 1970s, freelance film composers trying to break into the market saw ghostwriting as opportunities to their professional networks. Meanwhile, more renowned composers saw freelancers as means of easing their work burdens. The phenomenon was so widespread that freelancers even sometimes found themselves ghostwriting for other ghostwriters. Ghostwriting had its limitations, though: because freelancers did not receive credit, they could not grow their resumes. Moreover, their music often had to follow such strict guidelines that they were not able to showcase their own compositional voices. Being an orchestrator raised fewer questions about authorship, and orchestrators usually did not receive credit for their work. Typically, composers provided orchestrators with detailed sketches, thereby limiting their creative possibilities. This story would suggest that orchestrators were barely more than copyists—though with more intense workloads. This kind of thankless work was especially common in scoring for episodic television series of the era, where the fast pace of the industry demanded more agility and productivity. Brazilian composer Moacir Santos worked as a Hollywood ghostwriter and orchestrator starting in 1968. His experiences exemplify the difficulties of these professions during this era. In this paper I draw on an interview-based research I conducted in the Los Angeles area to show how Santos’s experiences showcase the difficulties of being a Hollywood outsider at the time. In particular, I examine testimony about racial prejudice experienced by Santos, and how misinformation about his ghostwriting activity has led to misunderstandings among scholars about his contributions. SING A SONG!: CHARITY BAILEY AND INTERRACIAL MUSIC EDUCATION ON 1950s NYC TELEVISION Melinda Russell, Carleton College Rhode Island native Charity Bailey (1904-1978) helped to define a children’s music market in print and recordings; in each instance the contents and forms she developed are still central to American children’s musical culture and practice. After study at Juilliard and Dalcroze, Bailey taught music at the Little Red School House in Greenwich Village from 1943-1954, where her students included Mary Travers and Eric Weissberg. Bailey’s focus on African, African-American, and Car", "title": "" }, { "docid": "bdfb48fcd7ef03d913a41ca8392552b6", "text": "Recent advance of large scale similarity search involves using deeply learned representations to improve the search accuracy and use vector quantization methods to increase the search speed. However, how to learn deep representations that strongly preserve similarities between data pairs and can be accurately quantized via vector quantization remains a challenging task. 
Existing methods simply leverage quantization loss and similarity loss, which results in unexpectedly biased back-propagating gradients and affects the search performance. To this end, we propose a novel gradient snapping layer (GSL) to directly regularize the back-propagating gradient towards a neighboring codeword; the generated gradients are unbiased for reducing similarity loss and also propel the learned representations to be accurately quantized. Joint deep representation and vector quantization learning can be easily performed by alternately optimizing the quantization codebook and the deep neural network. The proposed framework is compatible with various existing vector quantization approaches. Experimental results demonstrate that the proposed framework is effective and flexible, and outperforms the state-of-the-art large-scale similarity search methods.", "title": "" }, { "docid": "dd51e9bed7bbd681657e8742bb5bf280", "text": "Automated negotiation systems with self-interested agents are becoming increasingly important. One reason for this is the technology push of a growing standardized communication infrastructure (Internet, WWW, NII, EDI, KQML, FIPA, Concordia, Voyager, Odyssey, Telescript, Java, etc.) over which separately designed agents belonging to different organizations can interact in an open environment in real time and safely carry out transactions. The second reason is strong application pull for computer support for negotiation at the operative decision-making level. For example, we are witnessing the advent of small-transaction electronic commerce on the Internet for purchasing goods, information, and communication bandwidth. There is also an industrial trend toward virtual enterprises: dynamic alliances of small, agile enterprises which together can take advantage of economies of scale when available (e.g., respond to more diverse orders than individual agents can) but do not suffer from diseconomies of scale. Multiagent technology facilitates such negotiation at the operative decision-making level. This automation can save labor time of human negotiators, but in addition, other savings are possible because computational agents can be more effective at finding beneficial short-term contracts than humans are in strategically and combinatorially complex settings. This chapter discusses multiagent negotiation in situations where agents may have different goals, and each agent is trying to maximize its own good without concern for the global good. Such self-interest naturally prevails in negotiations among independent businesses or individuals. In building computer support for negotiation in such settings, the issue of self-interest has to be dealt with. In cooperative distributed problem solving, the system designer imposes an interaction protocol and a strategy, a mapping from state history to action, a", "title": "" }, { "docid": "ed0d2151f5f20a233ed8f1051bc2b56c", "text": "This paper discloses the development and evaluation of die attach materials using base metals (Cu and Sn) in three different types of composite. 
Mixing them into paste or sheet shape for die attach, we have confirmed that one of Sn-Cu components having IMC network near its surface has major role to provide robust interconnect especially for high temperature applications beyond 200°C after sintering.", "title": "" }, { "docid": "852c85ecbed639ea0bfe439f69fff337", "text": "In information theory, Fisher information and Shannon information (entropy) are respectively used to quantify the uncertainty associated with the distribution modeling and the uncertainty in specifying the outcome of given variables. These two quantities are complementary and are jointly applied to information behavior analysis in most cases. The uncertainty property in information asserts a fundamental trade-off between Fisher information and Shannon information, which enlightens us the relationship between the encoder and the decoder in variational auto-encoders (VAEs). In this paper, we investigate VAEs in the FisherShannon plane, and demonstrate that the representation learning and the log-likelihood estimation are intrinsically related to these two information quantities. Through extensive qualitative and quantitative experiments, we provide with a better comprehension of VAEs in tasks such as highresolution reconstruction, and representation learning in the perspective of Fisher information and Shannon information. We further propose a variant of VAEs, termed as Fisher auto-encoder (FAE), for practical needs to balance Fisher information and Shannon information. Our experimental results have demonstrated its promise in improving the reconstruction accuracy and avoiding the non-informative latent code as occurred in previous works.", "title": "" }, { "docid": "30db2040ab00fd5eec7b1eb08526f8e8", "text": "We formulate an equivalence between machine learning and the formulation of statistical data assimilation as used widely in physical and biological sciences. The correspondence is that layer number in a feedforward artificial network setting is the analog of time in the data assimilation setting. This connection has been noted in the machine learning literature. We add a perspective that expands on how methods from statistical physics and aspects of Lagrangian and Hamiltonian dynamics play a role in how networks can be trained and designed. Within the discussion of this equivalence, we show that adding more layers (making the network deeper) is analogous to adding temporal resolution in a data assimilation framework. Extending this equivalence to recurrent networks is also discussed. We explore how one can find a candidate for the global minimum of the cost functions in the machine learning context using a method from data assimilation. Calculations on simple models from both sides of the equivalence are reported. Also discussed is a framework in which the time or layer label is taken to be continuous, providing a differential equation, the Euler-Lagrange equation and its boundary conditions, as a necessary condition for a minimum of the cost function. This shows that the problem being solved is a two-point boundary value problem familiar in the discussion of variational methods. The use of continuous layers is denoted “deepest learning.” These problems respect a symplectic symmetry in continuous layer phase space. Both Lagrangian versions and Hamiltonian versions of these problems are presented. Their well-studied implementation in a discrete time/layer, while respecting the symplectic structure, is addressed. 
The Hamiltonian version provides a direct rationale for backpropagation as a solution method for a certain two-point boundary value problem.", "title": "" }, { "docid": "19f604732dd88b01e1eefea1f995cd54", "text": "Power electronic transformer (PET) technology is one of the promising technologies for medium/high power conversion systems. Cutting-edge improvements in power electronics and magnetics make it possible to substitute conventional line frequency transformer traction (LFTT) technology with PET technology. Over the past years, research and field trial studies have been conducted to explore the technical challenges associated with the operation, functionalities, and control of PET-based traction systems. This paper aims to review the essential requirements, technical challenges, and the existing state of the art of PET traction system architectures. Finally, this paper discusses technical considerations and introduces new research possibilities, especially in the power conversion stages, PET design, and the power switching devices.", "title": "" }, { "docid": "d9950f75380758d0a0f4fd9d6e885dfd", "text": "In recent decades, the interactive whiteboard (IWB) has become a relatively common educational tool in Western schools. The IWB is essentially a large touch screen that enables the user to interact with digital content in ways that are not possible with an ordinary computer-projector-canvas setup. However, the unique possibilities of IWBs are rarely leveraged to enhance teaching and learning beyond the primary school level. This is particularly noticeable in high school physics. We describe how a high school physics teacher learned to use an IWB in a new way, how she planned and implemented a lesson on the topic of orbital motion of planets, and what tensions arose in the process. We used an ethnographic approach to account for the teacher’s and involved students’ perspectives throughout the process of teacher preparation, lesson planning, and the implementation of the lesson. To interpret the data, we used the conceptual framework of activity theory. We found that an entrenched culture of traditional white/blackboard use in physics instruction interferes with more technologically innovative and more student-centered instructional approaches that leverage the IWB’s unique instructional potential. Furthermore, we found that the teacher’s confidence in the mastery of the IWB plays a crucial role in the teacher’s willingness to transfer agency within the lesson to the students.", "title": "" }, { "docid": "b1e2326ebdf729e5b55822a614b289a9", "text": "The work presented in this paper is targeted at the first phase of the test and measurement product life cycle, namely standardisation. During this initial phase of any product, the emphasis is on the development of standards that support new technologies while leaving the scope of implementations as open as possible. Allowing the engineer to freely create and invent tools that can quickly help him simulate or emulate his ideas is paramount.
Within this scope, a traffic generation system has been developed for IEC 61850 Sampled Values, which will help in the evaluation of the data models, data acquisition, data fusion, data integration and data distribution between the various devices and components that use this complex set of evolving standards in Smart Grid systems.", "title": "" }, { "docid": "4a72f9b04ba1515c0d01df0bc9b60ed7", "text": "Distributed generators (DGs) sometimes provide the lowest cost solution to handling low-voltage or overload problems. In conjunction with handling such problems, a DG can be placed for optimum efficiency or optimum reliability. Such optimum placements of DGs are investigated. The concept of segments, which has been applied in previous reliability studies, is used in the DG placement. The optimum locations are sought for time-varying load patterns. It is shown that the circuit reliability is a function of the loading level. The difference of DG placement between optimum efficiency and optimum reliability varies under different load conditions. Observations and recommendations concerning DG placement for optimum reliability and efficiency are provided in this paper.", "title": "" }, { "docid": "91bf842f809dd369644ffd2b10b9c099", "text": "We tackle the problem of multi-label classification of fashion images, learning from noisy data with minimal human supervision. We present a new dataset of full body poses, each with a set of 66 binary labels corresponding to the information about the garments worn in the image, obtained in an automatic manner. As the automatically-collected labels contain significant noise, we manually correct the labels for a small subset of the data, and use these correct labels for further training and evaluation. We build upon a recent approach that both cleans the noisy labels and learns to classify, and introduce simple changes that can significantly improve the performance.", "title": "" }, { "docid": "4fea653dd0dd8cb4ac941b2368ceb78f", "text": "In the present study, the antibacterial activity of black pepper (Piper nigrum Linn.) and its mode of action on bacteria were investigated. The extracts of black pepper were evaluated for antibacterial activity by the disc diffusion method. The minimum inhibitory concentration (MIC) was determined by the tube dilution method, and the mode of action was studied spectrophotometrically via the membrane leakage of UV260 and UV280 absorbing material. The diameter of the zone of inhibition against various Gram positive and Gram negative bacteria was measured. The MIC was found to be 50-500 ppm. Black pepper altered the membrane permeability, resulting in the leakage of the UV260 and UV280 absorbing material, i.e., nucleic acids and proteins, into the extracellular medium. The results indicate excellent inhibition of the growth of Gram positive bacteria like Staphylococcus aureus, followed by Bacillus cereus and Streptococcus faecalis. Among the Gram negative bacteria, Pseudomonas aeruginosa was more susceptible, followed by Salmonella typhi and Escherichia coli.", "title": "" }, { "docid": "e812bed02753b807d1e03a2e05e87cb8", "text": "ion level. It is especially useful in the case of expert-based estimation, where it is easier for experts to embrace and estimate smaller pieces of project work. Moreover, the increased level of detail during estimation—for instance, by breaking down software products and processes—implies higher transparency of estimates.
In practice, there is a good chance that the bottom estimates would be mixed below and above the actual effort. As a consequence, estimation errors at the bottom level will cancel each other out, resulting in smaller estimation error than if a top-down approach were used. This phenomenon is related to the mathematical law of large numbers. However, the more granular the individual estimates, the more time-consuming the overall estimation process becomes. In industrial practice, a top-down strategy usually provides reasonably accurate estimates at relatively low overhead and without too much technical expertise. Although bottom-up estimation usually provides more accurate estimates, it requires the estimators involved to have expertise regarding the bottom activities and related product components that they estimate directly. In principle, applying bottom-up estimation pays off when the decomposed tasks can be estimated more accurately than the whole task. For instance, a bottom-up strategy proved to provide better results when applied to high-uncertainty or complex estimation tasks, which are usually underestimated when considered as a whole. Furthermore, it is often easy to forget activities and/or underestimate the degree of unexpected events, which leads to underestimation of total effort. However, from the mathematical point of view (law of large numbers mentioned), dividing the project into smaller work packages provides better data for estimation and reduces overall estimation error. Experiences presented by Jørgensen (2004b) suggest that in the context of expert-based estimation, software companies should apply a bottom-up strategy unless the estimators have experience from, or access to, very similar projects. In the context of estimation based on human judgment, typical threats of individual and group estimation should be considered. Refer to Sect. 6.4 for an overview of the strengths and weaknesses of estimation based on human judgment.", "title": "" }, { "docid": "17611b0521b69ad2b22eeadc10d6d793", "text": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from 95% to 0.5%.In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.", "title": "" } ]
scidocsrr
ddbde03fe2445a7daad4ba7f9c09aec8
LBANN: livermore big artificial neural network HPC toolkit
[ { "docid": "091279f6b95594f9418591264d0d7e3c", "text": "A great deal of research has focused on algorithms for learning features from unlabeled data. Indeed, much progress has been made on benchmark datasets like NORB and CIFAR by employing increasingly complex unsupervised learning algorithms and deep models. In this paper, however, we show that several simple factors, such as the number of hidden nodes in the model, may be more important to achieving high performance than the learning algorithm or the depth of the model. Specifically, we will apply several offthe-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to CIFAR, NORB, and STL datasets using only singlelayer networks. We then present a detailed analysis of the effect of changes in the model setup: the receptive field size, number of hidden nodes (features), the step-size (“stride”) between extracted features, and the effect of whitening. Our results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, we achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features. More surprisingly, our best performance is based on K-means clustering, which is extremely fast, has no hyperparameters to tune beyond the model structure itself, and is very easy to implement. Despite the simplicity of our system, we achieve accuracy beyond all previously published results on the CIFAR-10 and NORB datasets (79.6% and 97.2% respectively). Appearing in Proceedings of the 14 International Conference on Artificial Intelligence and Statistics (AISTATS) 2011, Fort Lauderdale, FL, USA. Volume 15 of JMLR: W&CP 15. Copyright 2011 by the authors.", "title": "" } ]
[ { "docid": "1d7035cc5b85e13be6ff932d39740904", "text": "This paper investigates an application of mobile sensing: detection of potholes on roads. We describe a system and an associated algorithm to monitor the pothole conditions on the road. This system, that we call the Pothole Detection System, uses Accelerometer Sensor of Android smartphone for detection of potholes and GPS for plotting the location of potholes on Google Maps. Using a simple machine-learning approach, we show that we are able to identify the potholes from accelerometer data. The pothole detection algorithm detects the potholes in real-time. A runtime graph has been shown with the help of a charting software library ‘AChartEngine’. Accelerometer data and pothole data can be mailed to any email address in the form of a ‘.csv’ file. While designing the pothole detection algorithm we have assumed some threshold values on x-axis and z-axis. These threshold values are justified using a neural network technique which confirms an accuracy of 90%-95%. The neural network has been implemented using a machine learning framework available for Android called ‘Encog’. We evaluate our system on the outputs obtained using two, three and four wheelers. Keywords— Machine Learning, Context, Android, Neural Networks, Pothole, Sensor", "title": "" }, { "docid": "1dbaa72cd95c32d1894750357e300529", "text": "In recognizing the importance of educating aspiring scientists in the responsible conduct of research (RCR), the Office of Research Integrity (ORI) began sponsoring the creation of instructional resources to address this pressing need in 2002. The present guide on avoiding plagiarism and other inappropriate writing practices was created to help students, as well as professionals, identify and prevent such malpractices and to develop an awareness of ethical writing and authorship. This guide is one of the many products stemming from ORI’s effort to promote the RCR.", "title": "" }, { "docid": "738555e605ee2b90ff99bef6d434162d", "text": "In this paper we present two deep-learning systems that competed at SemEval-2017 Task 4 “Sentiment Analysis in Twitter”. We participated in all subtasks for English tweets, involving message-level and topic-based sentiment polarity classification and quantification. We use Long Short-Term Memory (LSTM) networks augmented with two kinds of attention mechanisms, on top of word embeddings pre-trained on a big collection of Twitter messages. Also, we present a text processing tool suitable for social network messages, which performs tokenization, word normalization, segmentation and spell correction. Moreover, our approach uses no hand-crafted features or sentiment lexicons. We ranked 1st (tie) in Subtask A, and achieved very competitive results in the rest of the Subtasks. Both the word embeddings and our text processing tool1 are available to the research community.", "title": "" }, { "docid": "3a6f2d4fa9531d9bc8c2dbf2110990f3", "text": "In a Grid Connected Photo-voltaic System (GCPVS) maximum power is to be drawn from the PV array and has to be injected into the Grid, using suitable maximum power point tracking algorithms, converter topologies and control algorithms. Usually converter topologies such as buck, boost, buck-boost, sepic, flyback, push pull etc. are used. Loss factors such as irradiance, temperature, shading effects etc. 
have zero loss in a two stage system, but additional converter used will lead to an extra loss which makes the single stage system more efficient when compared to a two stage systems, in applications like standalone and grid connected renewable energy systems. In Cuk converter the source and load side are separated via a capacitor thus energy transfer from the source side to load side occurs through this capacitor which leads to less current ripples at the load side. Thus in this paper, a Simulink model of two stage GCPVS using Cuk converter is being designed, simulated and is compared with a GCPVS using Boost Converter. For tracking the maximum power point the most common and accurate method called incremental conductance algorithm is used. And the inverter control is done using the dc bus voltage algorithm.", "title": "" }, { "docid": "1e7f14531caad40797594f9e4c188697", "text": "The Drosophila melanogaster germ plasm has become the paradigm for understanding both the assembly of a specific cytoplasmic localization during oogenesis and its function. The posterior ooplasm is necessary and sufficient for the induction of germ cells. For its assembly, localization of gurken mRNA and its translation at the posterior pole of early oogenic stages is essential for establishing the posterior pole of the oocyte. Subsequently, oskar mRNA becomes localized to the posterior pole where its translation leads to the assembly of a functional germ plasm. Many gene products are required for producing the posterior polar plasm, but only oskar, tudor, valois, germcell-less and some noncoding RNAs are required for germ cell formation. A key feature of germ cell formation is the precocious segregation of germ cells, which isolates the primordial germ cells from mRNA turnover, new transcription, and continued cell division. nanos is critical for maintaining the transcription quiescent state and it is required to prevent transcription of Sex-lethal in pole cells. In spite of the large body of information about the formation and function of the Drosophila germ plasm, we still do not know what specifically is required to cause the pole cells to be germ cells. A series of unanswered problems is discussed in this chapter.", "title": "" }, { "docid": "fc172716fe01852d53d0ae5d477f3afc", "text": "Distantly supervised relation extraction greatly reduces human efforts in extracting relational facts from unstructured texts. However, it suffers from noisy labeling problem, which can degrade its performance. Meanwhile, the useful information expressed in knowledge graph is still underutilized in the state-of-the-art methods for distantly supervised relation extraction. In the light of these challenges, we propose CORD, a novel COopeRative Denoising framework, which consists two base networks leveraging text corpus and knowledge graph respectively, and a cooperative module involving their mutual learning by the adaptive bi-directional knowledge distillation and dynamic ensemble with noisy-varying instances. Experimental results on a real-world dataset demonstrate that the proposed method reduces the noisy labels and achieves substantial improvement over the state-of-the-art methods.", "title": "" }, { "docid": "2da528dcbf7a97875e0a5a1a79cbaa21", "text": "Convolutional neural net-like structures arise from training an unstructured deep belief network (DBN) using structured simulation data of 2-D Ising Models at criticality. 
The convolutional structure arises not just because such a structure is optimal for the task, but also because the belief network automatically engages in block renormalization procedures to “rescale” or “encode” the input, a fundamental approach in statistical mechanics. This work primarily reviews the work of Mehta et al. [1], the group that first made the discovery that such a phenomenon occurs, and replicates their results training a DBN on Ising models, confirming that weights in the DBN become spatially concentrated during training on critical Ising samples.", "title": "" }, { "docid": "6f9bca88fbb59e204dd8d4ae2548bd2d", "text": "As the biomechanical literature concerning softball pitching is evolving, there are no data to support the mechanics of softball position players. Pitching literature supports the whole kinetic chain approach, including the lower extremity, in proper throwing mechanics. The purpose of this project was to examine the gluteal muscle group activation patterns and their relationship with shoulder and elbow kinematics and kinetics during the overhead throwing motion of softball position players. Eighteen Division I National Collegiate Athletic Association softball players (19.2 ± 1.0 years; 68.9 ± 8.7 kg; 168.6 ± 6.6 cm) who were listed on the active playing roster volunteered. Electromyographic, kinematic, and kinetic data were collected while players caught a simulated hit or pitched ball and performed their position throw. Pearson correlation revealed a significant negative correlation between non-throwing gluteus maximus activation during the phase of maximum external rotation to maximum internal rotation (MIR) and elbow moments at ball release (r = −0.52). At ball release, trunk flexion and rotation both had a positive relationship with shoulder moments at MIR (r = 0.69, r = 0.82, respectively), suggesting that the kinematic actions of the pelvis and trunk are strongly related to the actions of the shoulder during throwing.", "title": "" }, { "docid": "a6b65ee65eea7708b4d25fb30444c8e6", "text": "Intelligent vehicles (IVs) are experiencing revolutionary growth in research and industry, but they still suffer from many security vulnerabilities. Traditional security methods are incapable of securing IVs, mainly in terms of communication. In IV communication, the major issues are trust and the accuracy of received and broadcasted data in the communication channel. Blockchain technology, which underpins the cryptocurrency Bitcoin, has recently been used to build trust and reliability in peer-to-peer networks with topologies similar to those of IV communication, in which IVs communicate with each other in a decentralized manner. In this paper, we propose a Trust Bit (TB) for communication among IVs using blockchain technology. The proposed trust bit provides assurance that each IV's broadcasted data are secure and reliable in every particular network. The Trust Bit is a symbol of the trustworthiness of a vehicle's behavior and of its legal and illegal actions. Our proposal also includes a reward system, through which IVs can exchange some TB during successful communication. For the data management of this trust bit, we have used blockchain technology in the vehicular cloud, which can store all Trust Bit details and can be accessed by IVs anywhere and anytime. Our proposal provides secure and reliable information.
We evaluate our proposal with the help of IV communication on intersection use case which analyzes a variety of trustworthiness between IVs during communication.", "title": "" }, { "docid": "b1b56020802d11d1f5b2badb177b06b9", "text": "The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems--a personalized information filtering technology used to identify a set of N items that will be of interest to a certain user. User-based and model-based collaborative filtering are the most successful technology for building recommender systems to date and is extensively used in many commercial recommender systems. The basic assumption in these algorithms is that there are sufficient historical data for measuring similarity between products or users. However, this assumption does not hold in various application domains such as electronics retail, home shopping network, on-line retail where new products are introduced and existing products disappear from the catalog. Another such application domains is home improvement retail industry where a lot of products (such as window treatments, bathroom, kitchen or deck) are custom made. Each product is unique and there are very little duplicate products. In this domain, the probability of the same exact two products bought together is close to zero. In this paper, we discuss the challenges of providing recommendation in the domains where no sufficient historical data exist for measuring similarity between products or users. We present feature-based recommendation algorithms that overcome the limitations of the existing top-n recommendation algorithms. The experimental evaluation of the proposed algorithms in the real life data sets shows a great promise. The pilot project deploying the proposed feature-based recommendation algorithms in the on-line retail web site shows 75% increase in the recommendation revenue for the first 2 month period.", "title": "" }, { "docid": "cdfec1296a168318f773bb7ef0bfb307", "text": "Today service markets are becoming business reality as for example Amazon's EC2 spot market. However, current research focusses on simplified consumer-provider service markets only. Taxes are an important market element which has not been considered yet for service markets. This paper introduces and evaluates the effects of tax systems for IaaS markets which trade virtual machines. As a digital good with well defined characteristics like storage or processing power a virtual machine can be taxed by the tax authority using different tax systems. Currently the value added tax is widely used for taxing virtual machines only. The main contribution of the paper is the so called CloudTax component, a framework to simulate and evaluate different tax systems on service markets. It allows to introduce economical principles and phenomenons like the Laffer Curve or tax incidences. The CloudTax component is based on the CloudSim simulation framework using the Bazaar-Extension for comprehensive economic simulations. We show that tax mechanisms strongly influence the efficiency of negotiation processes in the Cloud market.", "title": "" }, { "docid": "73f8a5e5e162cc9b1ed45e13a06e78a5", "text": "Two major projects in the U.S. and Europe have joined in a collaboration to work toward achieving interoperability among language resources. 
In the U.S., the project Sustainable Interoperability for Language Technology (SILT) has been funded by the National Science Foundation under the INTEROP program, and in Europe, FLaReNet, Fostering Language Resources Network, has been funded by the European Commission under the eContentPlus framework. This international collaborative effort involves members of the language processing community and others working in related areas to build consensus regarding the sharing of data and technologies for language resources and applications, to work towards interoperability of existing data, and, where possible, to promote standards for annotation and resource building. This paper focuses on the results of a recent workshop whose goal was to arrive at operational definitions for interoperability over four thematic areas, including metadata for describing language resources, data categories and their semantics, resource publication requirements, and software sharing.", "title": "" }, { "docid": "70593bbda6c88f0ac10e26768d74b3cd", "text": "Type 2 diabetes mellitus (T2DM) is a chronic disease that often results in multiple complications. Risk prediction and profiling of T2DM complications is critical for healthcare professionals to design personalized treatment plans for patients in diabetes care for improved outcomes. In this paper, we study the risk of developing complications after the initial T2DM diagnosis from longitudinal patient records. We propose a novel multi-task learning approach to simultaneously model multiple complications where each task corresponds to the risk modeling of one complication. Specifically, the proposed method strategically captures the relationships (1) between the risks of multiple T2DM complications, (2) between the different risk factors, and (3) between the risk factor selection patterns. The method uses coefficient shrinkage to identify an informative subset of risk factors from high-dimensional data, and uses a hierarchical Bayesian framework to allow domain knowledge to be incorporated as priors. The proposed method is favorable for healthcare applications because in addition to improved prediction performance, relationships among the different risks and risk factors are also identified. Extensive experimental results on a large electronic medical claims database show that the proposed method outperforms state-of-the-art models by a significant margin. Furthermore, we show that the risk associations learned and the risk factors identified lead to meaningful clinical insights. CCS CONCEPTS • Information systems → Data mining; • Applied computing → Health informatics;", "title": "" }, { "docid": "c3ef6598f869e40fc399c89baf0dffd8", "text": "In this article, a novel hybrid genetic algorithm is proposed. The selection operator, crossover operator and mutation operator of the genetic algorithm have been effectively improved according to features of Sudoku puzzles.
The improved selection operator has impaired the similarity of the selected chromosome and optimal chromosome in the current population such that the chromosome with more abundant genes is more likely to participate in crossover; such a designed crossover operator has possessed dual effects of self-experience and population experience based on the concept of tactfully combining PSO, thereby making the whole iterative process highly directional; crossover probability is a random number and mutation probability changes along with the fitness value of the optimal solution in the current population such that more possibilities of crossover and mutation could then be considered during the algorithm iteration. The simulation results show that the convergence rate and stability of the novel algorithm has significantly been improved.", "title": "" }, { "docid": "8222f8eae81c954e8e923cbd883f8322", "text": "Work stealing is a promising approach to constructing multithreaded program runtimes of parallel programming languages. This paper presents HERMES, an energy-efficient work-stealing language runtime. The key insight is that threads in a work-stealing environment -- thieves and victims - have varying impacts on the overall program running time, and a coordination of their execution \"tempo\" can lead to energy efficiency with minimal performance loss. The centerpiece of HERMES is two complementary algorithms to coordinate thread tempo: the workpath-sensitive algorithm determines tempo for each thread based on thief-victim relationships on the execution path, whereas the workload-sensitive algorithm selects appropriate tempo based on the size of work-stealing deques. We construct HERMES on top of Intel Cilk Plus's runtime, and implement tempo adjustment through standard Dynamic Voltage and Frequency Scaling (DVFS). Benchmarks running on HERMES demonstrate an average of 11-12% energy savings with an average of 3-4% performance loss through meter-based measurements over commercial CPUs.", "title": "" }, { "docid": "26886ff5cb6301dd960e79d8fb3f9362", "text": "We propose a preprocessing method to improve the performance of Principal Component Analysis (PCA) for classification problems composed of two steps; in the first step, the weight of each feature is calculated by using a feature weighting method. Then the features with weights larger than a predefined threshold are selected. The selected relevant features are then subject to the second step. In the second step, variances of features are changed until the variances of the features are corresponded to their importance. By taking the advantage of step 2 to reveal the class structure, we expect that the performance of PCA increases in classification problems. Results confirm the effectiveness of our proposed methods.", "title": "" }, { "docid": "21a45086509bd0edb1b578a8a904bf50", "text": "Distributions are often used to model uncertainty in many scientific datasets. To preserve the correlation among the spatially sampled grid locations in the dataset, various standard multivariate distribution models have been proposed in visualization literature. These models treat each grid location as a univariate random variable which models the uncertainty at that location. Standard multivariate distributions (both parametric and nonparametric) assume that all the univariate marginals are of the same type/family of distribution. 
But in reality, different grid locations show different statistical behavior, which may not be modeled best by the same type of distribution. In this paper, we propose a new multivariate uncertainty modeling strategy to address the needs of uncertainty modeling in scientific datasets. Our proposed method is based on a statistically sound multivariate technique called Copula, which makes it possible to separate the process of estimating the univariate marginals and the process of modeling dependency, unlike the standard multivariate distributions. The modeling flexibility offered by our proposed method makes it possible to design distribution fields which can have different types of distribution (Gaussian, Histogram, KDE etc.) at the grid locations, while maintaining the correlation structure at the same time. Depending on the results of various standard statistical tests, we can choose an optimal distribution representation at each location, resulting in more cost-efficient modeling without significantly sacrificing the analysis quality. To demonstrate the efficacy of our proposed modeling strategy, we extract and visualize uncertain features like isocontours and vortices in various real world datasets. We also study various modeling criteria to help users in the task of univariate model selection.", "title": "" }, { "docid": "063389c654f44f34418292818fc781e7", "text": "In a cross-disciplinary study, we carried out an extensive literature review to increase understanding of vulnerability indicators used in the disciplines of earthquake and flood vulnerability assessments. We provide insights into potential improvements in both fields by identifying and comparing quantitative vulnerability indicators grouped into physical and social categories. Next, a selection of index- and curve-based vulnerability models that use these indicators are described, comparing several characteristics such as temporal and spatial aspects. Earthquake vulnerability methods traditionally have a strong focus on object-based physical attributes used in vulnerability curve-based models, while flood vulnerability studies focus more on indicators applied to aggregated land-use classes in curve-based models. In assessing the differences and similarities between indicators used in earthquake and flood vulnerability models, we only include models that separately assess either of the two hazard types. Flood vulnerability studies could be improved using approaches from earthquake studies, such as developing object-based physical vulnerability curve assessments and incorporating time-of-the-day-based building occupation patterns. Likewise, earthquake assessments could learn from flood studies by refining their selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend that future studies explore risk assessment methodologies across different hazard types.", "title": "" }, { "docid": "c760e6db820733dc3f57306eef81e5c9", "text": "Recently, applying novel data mining techniques to financial time-series forecasting has received much research attention. However, most research has focused on the US and European markets, with only a few studies on Asian markets.
This research applies Support-Vector Machines (SVMs) and Back Propagation (BP) neural networks to six Asian stock markets, and our experimental results showed the superiority of both models compared to earlier studies.", "title": "" }, { "docid": "2c0770b42050c4d67bfc7e723777baa6", "text": "We describe a framework for understanding how age-related changes in adult development affect work motivation, and, building on recent life-span theories and research on cognitive abilities, personality, affect, vocational interests, values, and self-concept, identify four intraindividual change trajectories (loss, gain, reorganization, and exchange). We discuss implications of the integrative framework for the use and effectiveness of different motivational strategies with midlife and older workers in a variety of jobs, as well as abiding issues and future research directions.", "title": "" } ]
scidocsrr
e61322adaf96eaa05e3ccd3121049e27
Fitness Gamification : Concepts , Characteristics , and Applications
[ { "docid": "0c7afb3bee6dd12e4a69632fbdb50ce8", "text": "OBJECTIVES\nTo systematically review levels of metabolic expenditure and changes in activity patterns associated with active video game (AVG) play in children and to provide directions for future research efforts.\n\n\nDATA SOURCES\nA review of the English-language literature (January 1, 1998, to January 1, 2010) via ISI Web of Knowledge, PubMed, and Scholars Portal using the following keywords: video game, exergame, physical activity, fitness, exercise, energy metabolism, energy expenditure, heart rate, disability, injury, musculoskeletal, enjoyment, adherence, and motivation.\n\n\nSTUDY SELECTION\nOnly studies involving youth (< or = 21 years) and reporting measures of energy expenditure, activity patterns, physiological risks and benefits, and enjoyment and motivation associated with mainstream AVGs were included. Eighteen studies met the inclusion criteria. Articles were reviewed and data were extracted and synthesized by 2 independent reviewers. MAIN OUTCOME EXPOSURES: Energy expenditure during AVG play compared with rest (12 studies) and activity associated with AVG exposure (6 studies).\n\n\nMAIN OUTCOME MEASURES\nPercentage increase in energy expenditure and heart rate (from rest).\n\n\nRESULTS\nActivity levels during AVG play were highly variable, with mean (SD) percentage increases of 222% (100%) in energy expenditure and 64% (20%) in heart rate. Energy expenditure was significantly lower for games played primarily through upper body movements compared with those that engaged the lower body (difference, -148%; 95% confidence interval, -231% to -66%; P = .001).\n\n\nCONCLUSIONS\nThe AVGs enable light to moderate physical activity. Limited evidence is available to draw conclusions on the long-term efficacy of AVGs for physical activity promotion.", "title": "" }, { "docid": "5e7a06213a32e0265dcb8bc11a5bb3f1", "text": "The global obesity epidemic has prompted our community to explore the potential for technology to play a stronger role in promoting healthier lifestyles. Although there are several examples of successful games based on focused physical interaction, persuasive applications that integrate into everyday life have had more mixed results. This underscores a need for designs that encourage physical activity while addressing fun, sustainability, and behavioral change. This note suggests a new perspective, inspired in part by the social nature of many everyday fitness applications and by the successful encouragement of long term play in massively multiplayer online games. We first examine the game design literature to distill a set of principles for discussing and comparing applications. We then use these principles to analyze an existing application. Finally, we present Kukini, a design for an everyday fitness game.", "title": "" } ]
[ { "docid": "eb8321467458401aa86398390c32ae00", "text": "As the wide popularization of online social networks, online users are not content only with keeping online friendship with social friends in real life any more. They hope the system designers can help them exploring new friends with common interest. However, the large amount of online users and their diverse and dynamic interests possess great challenges to support such a novel feature in online social networks. In this paper, by leveraging interest-based features, we design a general friend recommendation framework, which can characterize user interest in two dimensions: context (location, time) and content, as well as combining domain knowledge to improve recommending quality. We also design a potential friend recommender system in a real online social network of biology field to show the effectiveness of our proposed framework.", "title": "" }, { "docid": "ac222a5f8784d7a5563939077c61deaa", "text": "Cyber-Physical Systems (CPS) are integrations of computation with physical processes. Embedded computers and networks monitor and control the physical processes, usually with feedback loops where physical processes affect computations and vice versa. In the physical world, the passage of time is inexorable and concurrency is intrinsic. Neither of these properties is present in today’s computing and networking abstractions. I argue that the mismatch between these abstractions and properties of physical processes impede technical progress, and I identify promising technologies for research and investment. There are technical approaches that partially bridge the abstraction gap today (such as real-time operating systems, middleware technologies, specialized embedded processor architectures, and specialized networks), and there is certainly considerable room for improvement of these technologies. However, it may be that we need a less incremental approach, where new abstractions are built from the ground up. The foundations of computing are built on the premise that the principal task of computers is transformation of data. Yet we know that the technology is capable of far richer interactions the physical world. I critically examine the foundations that have been built over the last several decades, and determine where the technology and theory bottlenecks and opportunities lie. I argue for a new systems science that is jointly physical and computational.", "title": "" }, { "docid": "4d9f0cf629cd3695a2ec249b81336d28", "text": "We introduce an over-sketching interface for feature-preserving surface mesh editing. The user sketches a stroke that is the suggested position of part of a silhouette of the displayed surface. The system then segments all image-space silhouettes of the projected surface, identifies among all silhouette segments the best matching part, derives vertices in the surface mesh corresponding to the silhouette part, selects a sub-region of the mesh to be modified, and feeds appropriately modified vertex positions together with the sub-mesh into a mesh deformation tool. The overall algorithm has been designed to enable interactive modification of the surface --- yielding a surface editing system that comes close to the experience of sketching 3D models on paper.", "title": "" }, { "docid": "4ee5931bf57096913f7e13e5da0fbe7e", "text": "The design of an ultra wideband aperture-coupled vertical microstrip-microstrip transition is presented. 
The proposed transition exploits broadside coupling between exponentially tapered microstrip patches at the top and bottom layers via an exponentially tapered slot at the mid layer. The theoretical analysis indicates that the best performance concerning the insertion loss and the return loss over the maximum possible bandwidth can be achieved when the coupling factor is equal to 0.75 (or 2.5 dB). The calculated and simulated results show that the proposed transition has a linear phase performance, an important factor for distortionless pulse operation, with less than 0.4 dB insertion loss and more than 17 dB return loss across the frequency band 3.1 GHz to 10.6 GHz.", "title": "" }, { "docid": "8a224bf0376321caa30a95318ec9ecf9", "text": "With the rapid development of very large scale integration (VLSI) and continuous scaling in the metal oxide semiconductor field effect transistor (MOSFET), pad corrosion in the aluminum (Al) pad surface has become practical concern in the semiconductor industry. This paper presents a new method to improve the pad corrosion on Al pad surface by using new Al/Ti/TiN film stack. The effects of different Al film stacks on the Al pad corrosion have been investigated. The experiment results show that the Al/Ti/TiN film stack could improve bond pad corrosion effectively comparing to Al/SiON film stack. Wafers processed with new Al film stack were stored up to 28 days and display no pad crystal (PDCY) defects on bond pad surfaces.", "title": "" }, { "docid": "f073abd94a9c5853e561439de35ac9bd", "text": "Evolutionary learning is one of the most popular techniques for designing quantitative investment (QI) products. Trend following (TF) strategies, owing to their briefness and efficiency, are widely accepted by investors. Surprisingly, to the best of our knowledge, no related research has investigated TF investment strategies within an evolutionary learning model. This paper proposes a hybrid long-term and short-term evolutionary trend following algorithm (eTrend) that combines TF investment strategies with the eXtended Classifier Systems (XCS). The proposed eTrend algorithm has two advantages: (1) the combination of stock investment strategies (i.e., TF) and evolutionary learning (i.e., XCS) can significantly improve computation effectiveness and model practicability, and (2) XCS can automatically adapt to market directions and uncover reasonable and understandable trading rules for further analysis, which can help avoid the irrational trading behaviors of common investors. To evaluate eTrend, experiments are carried out using the daily trading data stream of three famous indexes in the Shanghai Stock Exchange. Experimental results indicate that eTrend outperforms the buy-and-hold strategy with high Sortino ratio after the transaction cost. Its performance is also superior to the decision tree and artificial neural network trading models. Furthermore, as the concept drift phenomenon is common in the stock market, an exploratory concept drift analysis is conducted on the trading rules discovered in bear and bull market phases. The analysis revealed interesting and rational results. In conclusion, this paper presents convincing evidence that the proposed hybrid trend following model can indeed generate effective trading guid-", "title": "" }, { "docid": "0e068a4e7388ed456de4239326eb9b08", "text": "The Web so far has been incredibly successful at delivering information to human users. 
So successful, in fact, that there is now an urgent need to go beyond a browsing human. Unfortunately, the Web is not yet a well organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages. To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.", "title": "" }, { "docid": "52d3d3bf1f29e254cbb89c64f3b0d6b5", "text": "Large projects are increasingly adopting agile development practices, and this raises new challenges for research. The workshop on principles of large-scale agile development focused on central topics in large-scale agile: the role of architecture, inter-team coordination, portfolio management and scaling agile practices. We propose eight principles for large-scale agile development, and present a revised research agenda.", "title": "" }, { "docid": "748eae887bcda0695cbcf1ba1141dd79", "text": "A wideband bandpass filter (BPF) with reconfigurable bandwidth (BW) is proposed based on a parallel-coupled line structure and a cross-shaped resonator with open stubs. The p-i-n diodes are used as the tuning elements, which can implement three reconfigurable BW states. The prototype of the designed filter reports an absolute BW tuning range of 1.22 GHz, while the fractional BW is varied from 34.8% to 56.5% when centered at 5.7 GHz. The simulation and measured results are in good agreement. Compared with previous works, the proposed reconfigurable BPF features a wider BW tuning range with the maximum number of tuning states.", "title": "" }, { "docid": "393711bcd1a8666210e125fb4295e158", "text": "The purpose of a Beyond 4G (B4G) radio access technology is to cope with the expected exponential increase of mobile data traffic in the local area (LA). The requirements related to physical layer control signaling latencies and to hybrid ARQ (HARQ) round trip time (RTT) are on the order of ~1 ms. In this paper, we propose a flexible orthogonal frequency division multiplexing (OFDM) based time division duplex (TDD) physical subframe structure optimized for the B4G LA environment. We show that the proposed optimizations allow very frequent link direction switching, thus reaching the tight B4G HARQ RTT requirement and significant control signaling latency reductions compared to existing LTE-Advanced and WiMAX technologies.", "title": "" }, { "docid": "310f13dac8d7cf2d1b40878ef6ce051b", "text": "Traffic accidents are occurring due to the development of the automobile industry, and accidents are unavoidable even when traffic rules are very strictly maintained. A data mining algorithm is applied to model the traffic accident injury level using a traffic accident dataset. This helps by identifying the characteristics of driver behavior, road conditions, weather conditions, and accident severity that are connected with different injury severities and death. This paper presents some models to predict the severity of injury using some data mining algorithms. The study focused on collecting real data from previous research and obtaining the injury severity levels of the traffic accident data.", "title": "" }, { "docid": "ea05a43abee762d4b484b5027e02a03a", "text": "One essential task in information extraction from the medical corpus is drug name recognition.
Compared with text sources from other domains, medical text mining poses more challenges, for example, more unstructured text, the fast growth of newly added terms, a wide range of name variations for the same drug, the lack of labeled dataset sources and external knowledge, and multiple token representations for a single drug name. Although many approaches have been proposed to tackle the task, some problems remain, with poor F-score performance (less than 0.75). This paper presents a new treatment of data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities as a result of word embedding training. The first technique is evaluated with the standard NN model, that is, MLP. The second technique involves two deep network classifiers, that is, DBN and SAE. The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, that is, LSTM. In extracting the drug name entities, the third technique gives the best F-score performance compared to the state of the art, with its average F-score being 0.8645.", "title": "" }, { "docid": "55fc836c8b0f10486aa6d969d0cae14d", "text": "In this manuscript we explore the ways in which the marketplace metaphor resonates with online dating participants and how this conceptual framework influences how they assess themselves, assess others, and make decisions about whom to pursue. Taking a metaphor approach enables us to highlight the ways in which participants’ language shapes their self-concept and interactions with potential partners. Qualitative analysis of in-depth interviews with 34 participants from a large online dating site revealed that the marketplace metaphor was salient for participants, who employed several strategies that reflected the assumptions underlying the marketplace perspective (including resisting the metaphor). We explore the implications of this metaphor for romantic relationship development, such as the objectification of potential partners. Journal of Social and Personal Relationships © The Author(s), 2010. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav, Vol. 27(4): 427–447. DOI: 10.1177/0265407510361614 This research was funded by Affirmative Action Grant 111579 from the Office of Research and Sponsored Programs at California State University, Stanislaus. An earlier version of this paper was presented at the International Communication Association, 2005. We would like to thank Jack Bratich, Art Ramirez, Lamar Reinsch, Jeanine Turner, and three anonymous reviewers for their helpful comments. All correspondence concerning this article should be addressed to Rebecca D. Heino, Georgetown University, McDonough School of Business, Washington D.C. 20057, USA [e-mail: [email protected]]. Larry Erbert was the Action Editor on this article.", "title": "" }, { "docid": "2804384964bc8996e6574bdf67ed9cb5", "text": "In the past 2 decades, correlational and experimental studies have found a positive association between violent video game play and aggression. There is less evidence, however, to support a long-term relation between these behaviors.
This study examined sustained violent video game play and adolescent aggressive behavior across the high school years and directly assessed the socialization (violent video game play predicts aggression over time) versus selection hypotheses (aggression predicts violent video game play over time). Adolescents (N = 1,492, 50.8% female) were surveyed annually from Grade 9 to Grade 12 about their video game play and aggressive behaviors. Nonviolent video game play, frequency of overall video game play, and a comprehensive set of potential 3rd variables were included as covariates in each analysis. Sustained violent video game play was significantly related to steeper increases in adolescents' trajectory of aggressive behavior over time. Moreover, greater violent video game play predicted higher levels of aggression over time, after controlling for previous levels of aggression, supporting the socialization hypothesis. In contrast, no support was found for the selection hypothesis. Nonviolent video game play also did not predict higher levels of aggressive behavior over time. Our findings, and the fact that many adolescents play video games for several hours every day, underscore the need for a greater understanding of the long-term relation between violent video games and aggression, as well as the specific game characteristics (e.g., violent content, competition, pace of action) that may be responsible for this association.", "title": "" }, { "docid": "5c38ad54e43b71ea5588418620bcf086", "text": "Chondrosarcomas are indolent but invasive chondroid malignancies that can form in the skull base. Standard management of chondrosarcoma involves surgical resection and adjuvant radiation therapy. This review evaluates evidence from the literature to assess the importance of the surgical approach and extent of resection on outcomes for patients with skull base chondrosarcoma. Also evaluated is the ability of the multiple modalities of radiation therapy, such as conventional fractionated radiotherapy, proton beam, and stereotactic radiosurgery, to control tumor growth. Finally, emerging therapies for the treatment of skull-base chondrosarcoma are discussed.", "title": "" }, { "docid": "a86bc0970dba249e1e53f9edbad3de43", "text": "Periodic inspection of a hanger rope is needed for the effective maintenance of suspension bridge. However, it is dangerous for human workers to access the hanger rope and not easy to check the exact state of the hanger rope. In this work we have developed a wheel-based robot that can approach the hanger rope instead of the human worker and carry the inspection device which is able to examine the inside status of the hanger rope. Meanwhile, a wheel-based cable climbing robot may be badly affected by the vibration that is generated while the robot moves on the bumpy surface of the hanger rope. The caterpillar is able to safely drive with the wide contact face on the rough terrain. Accordingly, we developed the caterpillar that can be combined with the developed cable climbing robot. In this paper, the caterpillar is introduced and its performance is compared with the wheel-based cable climbing robot.", "title": "" }, { "docid": "b5a9bbf52279ce7826434b7e5d3ccbb6", "text": "We present our 11-layers deep, double-pathway, 3D Convolutional Neural Network, developed for the segmentation of brain lesions. The developed system segments pathology voxel-wise after processing a corresponding multi-modal 3D patch at multiple scales. 
We demonstrate that it is possible to train such a deep and wide 3D CNN on a small dataset of 28 cases. Our network yields promising results on the task of segmenting ischemic stroke lesions, accomplishing a mean Dice of 64% (66% after postprocessing) on the ISLES 2015 training dataset, ranking among the top entries. Regardless of its size, our network is capable of processing a 3D brain volume in 3 minutes, making it applicable to the automated analysis of larger study cohorts.", "title": "" }, { "docid": "653b44b98c78bed426c0e5630145c2ba", "text": "In the field of non-monotonic logics, the notion of rational closure is acknowledged as a landmark, and we are going to see that such a construction can be characterised by means of a simple method in the context of propositional logic. We then propose an application of our approach to rational closure in the field of Description Logics, an important knowledge representation formalism, and provide a simple decision procedure for this case.", "title": "" }, { "docid": "daa7773486701deab7b0c69e1205a1d9", "text": "Age progression is defined as aesthetically re-rendering the aging face at any future age for an individual face. In this work, we aim to automatically render aging faces in a personalized way. Basically, for each age group, we learn an aging dictionary to reveal its aging characteristics (e.g., wrinkles), where the dictionary bases corresponding to the same index yet from two neighboring aging dictionaries form a particular aging pattern across these two age groups, and a linear combination of all these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each person may have extra personalized facial characteristics, e.g., a mole, which are invariant in the aging process. Second, it is challenging or even impossible to collect faces of all age groups for a particular person, yet much easier and more practical to get face pairs from neighboring age groups. To this end, we propose a novel Bi-level Dictionary Learning based Personalized Age Progression (BDL-PAP) method. Here, bi-level dictionary learning is formulated to learn the aging dictionaries based on face pairs from neighboring age groups. Extensive experiments well demonstrate the advantages of the proposed BDL-PAP over other state-of-the-art methods in terms of personalized age progression, as well as the performance gain for cross-age face verification by synthesizing aging faces.", "title": "" }, { "docid": "c9766e95df62d747f5640b3cab412a3f", "text": "For the last 10 years, interest has grown in low frequency shear waves that propagate in the human body. However, the generation of shear waves by acoustic vibrators is a relatively complex problem, and the directivity patterns of shear waves produced by the usual vibrators are more complicated than those obtained for longitudinal ultrasonic transducers. To extract shear modulus parameters from the shear wave propagation in soft tissues, it is important to understand and to optimize the directivity pattern of shear wave vibrators. This paper is devoted to a careful study of the theoretical and the experimental directivity pattern produced by a point source in soft tissues. Both theoretical and experimental measurements show that the directivity pattern of a point source vibrator presents two very strong lobes for an angle around 35°.
This paper also points out the impact of the near field in the problem of shear wave generation.", "title": "" } ]
scidocsrr
2577cdc082a2d03bd66bf2e56128a68b
Making Learning and Web 2.0 Technologies Work for Higher Learning Institutions in Africa
[ { "docid": "b9e7fedbc42f815b35351ec9a0c31b33", "text": "Proponents have marketed e-learning by focusing on its adoption as the right thing to do while disregarding, among other things, the concerns of the potential users, the adverse effects on users and the existing research on the use of e-learning or related innovations. In this paper, the e-learning-adoption proponents are referred to as the technopositivists. It is argued that most of the technopositivists in the higher education context are driven by a personal agenda, with the aim of propagating a technopositivist ideology to stakeholders. The technopositivist ideology is defined as a ‘compulsive enthusiasm’ about e-learning in higher education that is being created, propagated and channelled repeatedly by the people who are set to gain without giving the educators the time and opportunity to explore the dangers and rewards of e-learning on teaching and learning. Ten myths on e-learning that the technopositivists have used are presented with the aim of initiating effective and constructive dialogue, rather than merely criticising the efforts being made. Introduction The use of technology, and in particular e-learning, in higher education is becoming increasingly popular. However, Guri-Rosenblit (2005) and Robertson (2003) propose that educational institutions should step back and reflect on critical questions regarding the use of technology in teaching and learning. The focus of Guri-Rosenblit’s article is on diverse issues of e-learning implementation in higher education, while Robertson focuses on the teacher. Both papers show that there is a change in the ‘euphoria towards eLearning’ and that a dose of techno-negativity or techno-scepticism is required so that the gap between rhetoric in the literature (with all the promises) and actual implementation can be bridged for an informed stance towards e-learning adoption. British Journal of Educational Technology Vol 41 No 2 2010 199–212 doi:10.1111/j.1467-8535.2008.00910.x © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Technology in teaching and learning has been marketed or presented to its intended market with a lot of promises, benefits and opportunities. This technopositivist ideology has denied educators and educational researchers the much needed opportunities to explore the motives, power, rewards and sanctions of information and communication technologies (ICTs), as well as time to study the impacts of the new technologies on learning and teaching. Educational research cannot cope with the speed at which technology is advancing (Guri-Rosenblit, 2005; Robertson, 2003; Van Dusen, 1998; Watson, 2001). Indeed there has been no clear distinction between teaching with and teaching about technology and therefore the relevance of such studies has not been brought to the fore. Much of the focus is on the actual educational technology as it advances, rather than its educational functions or the effects it has on the functions of teaching and learning. The teaching profession has been affected by the implementation and use of ICT through these optimistic views, and the ever-changing teaching and learning culture (Kompf, 2005; Robertson, 2003). It is therefore necessary to pause and ask the question to the technopositivist ideologists: whether in e-learning the focus is on the ‘e’ or on the learning. 
The opportunities and dangers brought about by the ‘e’ in e-learning should be soberly examined. As Gandolfo (1998, p. 24) suggests: [U]ndoubtedly, there is opportunity; the effective use of technology has the potential to improve and enhance learning. Just as assuredly there is the danger that the wrong headed adoption of various technologies apart from a sound grounding in educational research and practice will result, and indeed in some instances has already resulted, in costly additions to an already expensive enterprise without any value added. That is, technology applications must be consonant with what is known about the nature of learning and must be assessed to ensure that they are indeed enhancing learners’ experiences. Technopositivist ideology is a ‘compulsory enthusiasm’ about technology that is being created, propagated and channelled repeatedly by the people who stand to gain either economically, socially, politically or otherwise in due disregard of the trade-offs associated with the technology to the target audience (Kompf, 2005; Robertson, 2003). In e-learning, the beneficiaries of the technopositivist market are doing so by presenting it with promises that would dismiss the judgement of many. This is aptly illustrated by Robertson (2003, pp. 284–285): Information technology promises to deliver more (and more important) learning for every student accomplished in less time; to ensure ‘individualization’ no matter how large and diverse the class; to obliterate the differences and disadvantages associated with race, gender, and class; to vary and yet standardize the curriculum; to remove subjectivity from student evaluation; to make reporting and record keeping a snap; to keep discipline problems to a minimum; to enhance professional learning and discourse; and to transform the discredited teacher-centered classroom into that paean of pedagogy: the constructivist, student-centered classroom, On her part, Guri-Rosenblit (2005, p. 14) argues that the proponents and marketers of e-learning present it as offering multiple uses that do not have a clear relationship with a current or future problem. She asks two ironic, vital and relevant questions: ‘If it ain’t broken, why fix it?’ and ‘Technology is the answer—but what are the questions?’ The enthusiasm to use technology for endless possibilities has led to the belief that providing 200 British Journal of Educational Technology Vol 41 No 2 2010 © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. information automatically leads to meaningful knowledge creation; hence blurring and confusing the distinction between information and knowledge. This is one of the many misconceptions that emerged with e-learning. There has been a great deal of confusion both in the marketing of and language used in the advocating of the ICTs in teaching and learning. As an example, Guri-Rosenblit (2005, p. 6) identified a list of 15 words used to describe the environment for teaching and learning with technology from various studies: ‘web-based learning, computermediated instruction, virtual classrooms, online education, e-learning, e-education, computer-driven interactive communication, open and distance learning, I-Campus, borderless education, cyberspace learning environments, distributed learning, flexible learning, blended learning, mobile-learning’. The list could easily be extended with many more words. Presented with this array of words, most educators are not sure of what e-learning is. 
Could it be synonymous to distance education? Is it just the use of online tools to enhance or enrich the learning experiences? Is it stashing the whole courseware or parts of it online for students to access? Or is it a new form of collaborative or cooperative learning? Clearly, any of these questions could be used to describe an aspect of e-learning and quite often confuse the uninformed educator. These varied words, with as many definitions, show the degree to which e-learning is being used in different cultures and in different organisations. Unfortunately, many of these uses are based on popular assumptions and myths. While the myths that will be discussed in this paper are generic, and hence applicable to e-learning use in most cultures and organisations, the paper’s focus is on higher education, because it forms part of a larger e-learning research project among higher education institutions (HEIs) and also because of the popularity of e-learning use in HEIs. Although there is considerable confusion around the term e-learning, for the purpose of this paper it will be considered as referring to the use of electronic technology and content in teaching and learning. It includes, but is not limited to, the use of the Internet; television; streaming video and video conferencing; online text and multimedia; and mobile technologies. From the nomenclature, also comes the crafting of the language for selling the technologies to the educators. Robertson (2003, p. 280) shows the meticulous choice of words by the marketers where ‘research’ is transformed into a ‘belief system’ and the past tense (used to communicate research findings) is substituted for the present and future tense, for example “Technology ‘can and will’ rather than ‘has and does’ ” in a quote from Apple’s comment: ‘At Apple, we believe the effective integration of technology into classroom instruction can and will result in higher levels of student achievement’. Similar quotes are available in the market and vendors of technology products for teaching and learning. This, however, is not limited to the market; some researchers have used similar quotes: ‘It is now conventional wisdom that those countries which fail to move from the industrial to the Information Society will not be able to compete in the globalised market system made possible by the new technologies’ (Mac Keogh, 2001, p. 223). The role of research should be to question the conventional wisdom or common sense and offer plausible answers, rather than dancing to the fine tunes of popular or mass e-Learning myths 201 © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. wisdom. It is also interesting to note that Mac Keogh (2001, p. 233) concludes that ‘[w]hen issues other than costs and performance outcomes are considered, the rationale for introducing ICTs in education is more powerful’. Does this mean that irrespective of whether ICTs ", "title": "" } ]
[ { "docid": "90d33a2476534e542e2722d7dfa26c91", "text": "Despite some notable and rare exceptions and after many years of relatively neglect (particularly in the ‘upper echelons’ of IS research), there appears to be some renewed interest in Information Systems Ethics (ISE). This paper reflects on the development of ISE by assessing the use and development of ethical theory in contemporary IS research with a specific focus on the ‘leading’ IS journals (according to the Association of Information Systems). The focus of this research is to evaluate if previous calls for more theoretically informed work are permeating the ‘upper echelons’ of IS research and if so, how (Walsham 1996; Smith and Hasnas 1999; Bell and Adam 2004). For the purposes of scope, this paper follows on from those previous studies and presents a detailed review of the leading IS publications between 2005to2007 inclusive. After several processes, a total of 32 papers are evaluated. This review highlights that whilst ethical topics are becoming increasingly popular in such influential media, most of the research continues to neglect considerations of ethical theory with preferences for a range of alternative approaches. Finally, this research focuses on some of the papers produced and considers how the use of ethical theory could contribute.", "title": "" }, { "docid": "ed176e79496053f1c4fdee430d1aa7fc", "text": "Event recognition systems rely on knowledge bases of event definitions to infer occurrences of events in time. Using a logical framework for representing and reasoning about events offers direct connections to machine learning, via Inductive Logic Programming (ILP), thus allowing to avoid the tedious and error-prone task of manual knowledge construction. However, learning temporal logical formalisms, which are typically utilized by logic-based event recognition systems is a challenging task, which most ILP systems cannot fully undertake. In addition, event-based data is usually massive and collected at different times and under various circumstances. Ideally, systems that learn from temporal data should be able to operate in an incremental mode, that is, revise prior constructed knowledge in the face of new evidence. In this work we present an incremental method for learning and revising event-based knowledge, in the form of Event Calculus programs. The proposed algorithm relies on abductive–inductive learning and comprises a scalable clause refinement methodology, based on a compressive summarization of clause coverage in a stream of examples. We present an empirical evaluation of our approach on real and synthetic data from activity recognition and city transport applications.", "title": "" }, { "docid": "ab2c4d5317d2e10450513283c21ca6d3", "text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. 
Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.", "title": "" }, { "docid": "90563706ada80e880b7fcf25489f9b27", "text": "We describe the large vocabulary automatic speech recognition system developed for Modern Standard Arabic by the SRI/Nightingale team, and used for the 2007 GALE evaluation as part of the speech translation system. We show how system performance is affected by different development choices, ranging from text processing and lexicon to decoding system architecture design. Word error rate results are reported on broadcast news and conversational data from the GALE development and evaluation test sets.", "title": "" }, { "docid": "1bc33dcf86871e70bd3b7856fd3c3857", "text": "A framework for clustered-dot color halftone watermarking is proposed. Watermark patterns are embedded in the color halftone on per-separation basis. For typical CMYK printing systems, common desktop RGB color scanners are unable to provide the individual colorant halftone separations, which confounds per-separation detection methods. Not only does the K colorant consistently appear in the scanner channels as it absorbs uniformly across the spectrum, but cross-couplings between CMY separations are also observed in the scanner color channels due to unwanted absorptions. We demonstrate that by exploiting spatial frequency and color separability of clustered-dot color halftones, estimates of the individual colorant halftone separations can be obtained from scanned RGB images. These estimates, though not perfect, allow per-separation detection to operate efficiently. The efficacy of this methodology is demonstrated using continuous phase modulation for the embedding of per-separation watermarks.", "title": "" }, { "docid": "0c88535a3696fe9e2c82f8488b577284", "text": "Touch gestures can be a very important aspect when developing mobile applications with enhanced reality. The main purpose of this research was to determine which touch gestures were most frequently used by engineering students when using a simulation of a projectile motion in a mobile AR applica‐ tion. A randomized experimental design was given to students, and the results showed the most commonly used gestures to visualize are: zoom in “pinch open”, zoom out “pinch closed”, move “drag” and spin “rotate”.", "title": "" }, { "docid": "04e9383039f64bf5ef90e59ba451e45f", "text": "The current generation of manufacturing systems relies on monolithic control software which provides real-time guarantees but is hard to adapt and reuse. These qualities are becoming increasingly important for meeting the demands of a global economy. Ongoing research and industrial efforts therefore focus on service-oriented architectures (SOA) to increase the control software’s flexibility while reducing development time, effort and cost. With such encapsulated functionality, system behavior can be expressed in terms of operations on data and the flow of data between operators. In this thesis we consider industrial real-time systems from the perspective of distributed data processing systems. Data processing systems often must be highly flexible, which can be achieved by a declarative specification of system behavior. In such systems, a user expresses the properties of an acceptable solution while the system determines a suitable execution plan that meets these requirements. 
Applied to the real-time control domain, this means that the user defines an abstract workflow model with global timing constraints from which the system derives an execution plan that takes the underlying system environment into account. The generation of a suitable execution plan often is NP-hard and many data processing systems rely on heuristic solutions to quickly generate high quality plans. We utilize heuristics for finding real-time execution plans. Our evaluation shows that heuristics were successful in finding a feasible execution plan in 99% of the examined test cases. Lastly, data processing systems are engineered for an efficient exchange of data and therefore are usually built around a direct data flow between the operators without a mediating entity in between. Applied to SOA-based automation, the same principle is realized through service choreographies with direct communication between the individual services instead of employing a service orchestrator which manages the invocation of all services participating in a workflow. These three principles outline the main contributions of this thesis: A flexible reconfiguration of SOA-based manufacturing systems with verifiable real-time guarantees, fast heuristics based planning, and a peer-to-peer execution model for SOAs with clear semantics. We demonstrate these principles within a demonstrator that is close to a real-world industrial system.", "title": "" }, { "docid": "ad6dc9f74e0fa3c544c4123f50812e14", "text": "An ultra-wideband transition from microstrip to stripline in PCB technology is presented applying only through via holes for simple fabrication. The design is optimized using full-wave EM simulations. A prototype is manufactured and measured achieving a return loss better than 8.7dB and an insertion loss better than 1.2 dB in the FCC frequency range. A meander-shaped delay line in stripline technique is presented as an example of application.", "title": "" }, { "docid": "0382ad43b6d31a347d9826194a7261ce", "text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.", "title": "" }, { "docid": "ed282d88b5f329490f390372c502f238", "text": "Extracting opinion expressions from text is an essential task of sentiment analysis, which is usually treated as one of the word-level sequence labeling problems. In such problems, compositional models with multiplicative gating operations provide efficient ways to encode the contexts, as well as to choose critical information. Thus, in this paper, we adopt Long Short-Term Memory (LSTM) recurrent neural networks to address the task of opinion expression extraction and explore the internal mechanisms of the model. The proposed approach is evaluated on the Multi-Perspective Question Answering (MPQA) opinion corpus. The experimental results demonstrate improvement over previous approaches, including the state-of-the-art method based on simple recurrent neural networks. 
We also provide a novel micro perspective to analyze the run-time processes and gain new insights into the advantages of LSTM selecting the source of information with its flexible connections and multiplicative gating operations.", "title": "" }, { "docid": "e87617852de3ce25e1955caf1f4c7a21", "text": "Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. Image edge detection significantly reduces the amount of data and filters out useless information, while preserving the important structural properties in an image. Since edge detection is in the forefront of image processing for object detection, it is crucial to have a good understanding of edge detection algorithms. In this paper, a comparative analysis of various image edge detection techniques is presented. The software is developed using MATLAB 7.0. It has been shown that Canny's edge detection algorithm performs better than all of these operators under almost all scenarios. Evaluation of the images showed that under noisy conditions Canny, LoG (Laplacian of Gaussian), Robert, Prewitt and Sobel exhibit better performance, respectively. It has also been observed that Canny's edge detection algorithm is computationally more expensive compared to the LoG (Laplacian of Gaussian), Sobel, Prewitt and Robert operators.
", "title": "" }, { "docid": "b2e493de6e09766c4ddbac7de071e547", "text": "In this paper we describe and evaluate some recently innovated coupling metrics for object-oriented (OO) design. The Coupling Between Objects (CBO) metric of Chidamber and Kemerer (C&K) is evaluated empirically using five OO systems and compared with an alternative OO design metric called NAS, which measures the Number of Associations between a class and its peers. The NAS metric is directly collectible from design documents such as the Object Model of OMT. Results from all systems studied indicate a strong relationship between CBO and NAS, suggesting that they are not orthogonal. We hypothesised that coupling would be related to understandability, the number of errors and error density. No relationships were found for any of the systems between class understandability and coupling. However, we did find partial support for our hypothesis linking increased coupling to increased error density. The work described in this paper is part of the Metrics for OO Programming Systems (MOOPS) project, whose aims are to evaluate existing OO metrics and to innovate and evaluate new OO analysis and design metrics aimed specifically at the early stages of development.", "title": "" }, { "docid": "49f21df66ac901e5f37cff022353ed20", "text": "This paper presents the implementation of an interval type-2 fuzzy system to control the production process of high-strength low-alloy (HSLA) steel in a secondary metallurgy process in a simple way. The proposal evaluates fuzzy techniques to ensure the accuracy of the model; the most important advantage is that the system does not need pretreatment of the historical data, which is used as it is. The system is a multiple-input single-output (MISO) system, and the main goal of this paper is the proposal of a system that optimizes resources: computational, time, among others.", "title": "" }, { "docid": "c070020d88fb77f768efa5f5ac2eb343", "text": "This paper provides a critical overview of the theoretical, analytical, and practical questions most prevalent in the study of the structural and the sociolinguistic dimensions of code-switching (CS). In doing so, it reviews a range of empirical studies from around the world. The paper first looks at the linguistic research on the structural features of CS focusing in particular on the code-switching versus borrowing distinction, and the syntactic constraints governing its operation. It then critically reviews sociological, anthropological, and linguistic perspectives dominating the sociolinguistic research on CS over the past three decades. Major empirical studies on the discourse functions of CS are discussed, noting the similarities and differences between socially motivated CS and style-shifting. Finally, directions for future research on CS are discussed, giving particular emphasis to the methodological issue of its applicability to the analysis of bilingual classroom interaction.", "title": "" }, { "docid": "77796f30d8d1604c459fb3f3fe841515", "text": "The overall focus of this research is to demonstrate the savings potential generated by the integration of the design of strategic global supply chain networks with the determination of tactical production–distribution allocations and transfer prices.
The logistics systems design problem is defined as follows: given a set of potential suppliers, potential manufacturing facilities, and distribution centers with multiple possible configurations, and customers with deterministic demands, determine the configuration of the production–distribution system and the transfer prices between various subsidiaries of the corporation such that seasonal customer demands and service requirements are met and the after tax profit of the corporation is maximized. The after tax profit is the difference between the sales revenue minus the total system cost and taxes. The total cost is defined as the sum of supply, production, transportation, inventory, and facility costs. Two models and their associated solution algorithms will be introduced. The savings opportunities created by designing the system with a methodology that integrates strategic and tactical decisions rather than in a hierarchical fashion are demonstrated with two case studies. The first model focuses on the setting of transfer prices in a global supply chain with the objective of maximizing the after tax profit of an international corporation. The constraints mandated by the national taxing authorities create a bilinear programming formulation. We will describe a very efficient heuristic iterative solution algorithm, which alternates between the optimization of the transfer prices and the material flows. Performance and bounds for the heuristic algorithms will be discussed. The second model focuses on the production and distribution allocation in a single country system, when the customers have seasonal demands. This model also needs to be solved as a subproblem in the heuristic solution of the global transfer price model. The research develops an integrated design methodology based on primal decomposition methods for the mixed integer programming formulation. The primal decomposition allows a natural split of the production and transportation decisions and the research identifies the necessary information flows between the subsystems. The primal decomposition method also allows a very efficient solution algorithm for this general class of large mixed integer programming models. Data requirements and solution times will be discussed for a real life case study in the packaging industry. 2002 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "885a51f55d5dfaad7a0ee0c56a64ada3", "text": "This paper presents a new method, Minimax Tree Optimization (MMTO), to learn a heuristic evaluation function of a practical alpha-beta search program. The evaluation function may be a linear or non-linear combination of weighted features, and the weights are the parameters to be optimized. To control the search results so that the move decisions agree with the game records of human experts, a well-modeled objective function to be minimized is designed. Moreover, a numerical iterative method is used to find local minima of the objective function, and more than forty million parameters are adjusted by using a small number of hyper parameters. This method was applied to shogi, a major variant of chess in which the evaluation function must handle a larger state space than in chess.
Experimental results show that the large-scale optimization of the evaluation function improves the playing strength of shogi programs, and the new method performs significantly better than other methods. Implementation of the new method in our shogi program Bonanza made substantial contributions to the program’s first-place finish in the 2013 World Computer Shogi Championship. Additionally, we present preliminary evidence of broader applicability of our method to other two-player games such as chess.", "title": "" }, { "docid": "c17e6363762e0e9683b51c0704d43fa7", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" }, { "docid": "15886d83be78940609c697b30eb73b13", "text": "Why is corruption—the misuse of public office for private gain— perceived to be more widespread in some countries than others? Different theories associate this with particular historical and cultural traditions, levels of economic development, political institutions, and government policies. This article analyzes several indexes of “perceived corruption” compiled from business risk surveys for the 1980s and 1990s. Six arguments find support. Countries with Protestant traditions, histories of British rule, more developed economies, and (probably) higher imports were less \"corrupt\". Federal states were more \"corrupt\". While the current degree of democracy was not significant, long exposure to democracy predicted lower corruption.", "title": "" }, { "docid": "9b7ff8a7dec29de5334f3de8d1a70cc3", "text": "The paper introduces a complete offline programming toolbox for remote laser welding (RLW) which provides a semi-automated method for computing close-to-optimal robot programs. A workflow is proposed for the complete planning process, and new models and algorithms are presented for solving the optimization problems related to each step of the workflow: the sequencing of the welding tasks, path planning, workpiece placement, calculation of inverse kinematics and the robot trajectory, as well as for generating the robot program code. The paper summarizes the results of an industrial case study on the assembly of a car door using RLW technology, which illustrates the feasibility and the efficiency of the proposed approach.", "title": "" }, { "docid": "1d29f224933954823228c25e5e99980e", "text": "This study was carried out in a Turkish university with 216 undergraduate students of computer technology as respondents. The study aimed to develop a scale (UECUBS) to determine the unethical computer use behavior. A factor analysis of the related items revealed that the factors were can be divided under five headings; intellectual property, social impact, safety and quality, net integrity and information integrity. 2005 Elsevier Ltd. All rights reserved.", "title": "" } ]
scidocsrr
01eb7e40fc907559056c1c5eb1c04c12
Data Mining Model for Predicting Student Enrolment in STEM Courses in Higher Education Institutions
[ { "docid": "f7a36f939cbe9b1d403625c171491837", "text": "This paper explores the socio-demographic variables (age, gender, ethnicity, education, work status, and disability) and study environment (course programme and course block), that may influence persistence or dropout of students at the Open Polytechnic of New Zealand. We examine to what extent these factors, i.e. enrolment data help us in pre-identifying successful and unsuccessful students. The data stored in the Open Polytechnic student management system from 2006 to 2009, covering over 450 students who enrolled to 71150 Information Systems course was used to perform a quantitative analysis of study outcome. Based on a data mining techniques (such as feature selection and classification trees), the most important factors for student success and a profile of the typical successful and unsuccessful students are identified. The empirical results show the following: (i) the most important factors separating successful from unsuccessful students are: ethnicity, course programme and course block; (ii) among classification tree growing methods Classification and Regression Tree (CART) was the most successful in growing the tree with an overall percentage of correct classification of 60.5%; and (iii) both the risk estimated by the cross-validation and the gain diagram suggests that all trees, based only on enrolment data are not quite good in separating successful from unsuccessful students. The implications of these results for academic and administrative staff are discussed.", "title": "" }, { "docid": "055faaaa14959a204ca19a4962f6e822", "text": "Data mining (also known as knowledge discovery from databases) is the process of extraction of hidden, previously unknown and potentially useful information from databases. The outcome of the extracted data can be analyzed for the future planning and development perspectives. In this paper, we have made an attempt to demonstrate how one can extract the local (district) level census, socio-economic and population related other data for knowledge discovery and their analysis using the powerful data mining tool Weka. I. DATA MINING Data mining has been defined as the nontrivial extraction of implicit, previously unknown, and potentially useful information from databases/data warehouses. It uses machine learning, statistical and visualization techniques to discover and present knowledge in a form, which is easily comprehensive to humans [1]. Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help user focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by retrospective tools typical of decision support systems. Data mining tools can answer business questions that traditionally were too time consuming to resolve. They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations. Data mining techniques can be implemented rapidly on existing software and hardware platforms to enhance the value of existing information resources, and can be integrated with new products and systems as they are brought on-line [2]. Data mining steps in the knowledge discovery process are as follows: 1. 
Data cleaning: the removal of noise and inconsistent data. 2. Data integration: the combination of multiple sources of data. 3. Data selection: the data relevant for analysis is retrieved from the database. 4. Data transformation: the consolidation and transformation of data into forms appropriate for mining. 5. Data mining: the use of intelligent methods to extract patterns from data. 6. Pattern evaluation: the identification of patterns that are interesting. 7. Knowledge presentation: visualization and knowledge representation techniques are used to present the extracted or mined knowledge to the end user [3]. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting are part of the data mining step, but do belong to the overall KDD process as additional steps [7][8]. II. WEKA: Weka (Waikato Environment for Knowledge Analysis) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. Weka is free software available under the GNU General Public License. The Weka workbench contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to this functionality [4]. Weka is a collection of machine learning algorithms for solving real-world data mining problems. It is written in Java and runs on almost any platform. The algorithms can either be applied directly to a dataset or called from your own Java code [5]. The original non-Java version of Weka was a TCL/TK front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a Makefile-based system for running machine learning experiments. This original version was primarily designed as a tool for analyzing data from agricultural domains, but the more recent fully Java-based version (Weka 3), for which development started in 1997, is now used in many different application areas, in particular for educational purposes and research. Advantages of Weka include: I. Free availability under the GNU General Public License; II. Portability, since it is fully implemented in the Java programming language and thus runs on almost any modern computing platform; III. A comprehensive collection of data preprocessing and modeling techniques; IV. Ease of use due to its graphical user interfaces. Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization, and feature selection [10].
All of Weka's techniques are predicated on the assumption that the data is available as a single flat file or relation, where each data point is described by a fixed number of attributes (normally, numeric or nominal attributes, but some other attribute types are also supported). Weka provides access to SQL databases using Java Database Connectivity and can process the result returned by a database query. It is not capable of multi-relational data mining, but there is separate software for converting a collection of linked database tables into a single table that is suitable for processing using Weka. Another important area that is currently not covered by the algorithms included in the Weka distribution is sequence modeling [4]. III. DATA PROCESSING, METHODOLOGY AND RESULTS The primary available data, such as the census (2001) data, socio-economic data, and other basic information about Latur district, are collected from the National Informatics Centre (NIC), Latur, and are mainly required to design and develop the database for Latur district of Maharashtra state, India. The database is designed in the MS-Access 2003 database management system to store the collected data. The data is formed according to the required format and structures. Further, the data is converted to ARFF (Attribute Relation File Format) format to process in WEKA. An ARFF file is an ASCII text file that describes a list of instances sharing a set of attributes. ARFF files were developed by the Machine Learning Project at the Department of Computer Science of The University of Waikato for use with the Weka machine learning software. This document describes the version of ARFF used with Weka versions 3.2 to 3.3; this is an extension of the ARFF format as described in the data mining book written by Ian H. Witten and Eibe Frank [6][9]. After processing the ARFF file in WEKA, the list of all attributes, statistics and other parameters can be utilized as shown in Figure 1. Fig. 1 Processed ARFF file in WEKA. In the file shown above, data for 729 villages is processed with 25 different attributes, like population, health, literacy, village locations, etc. Among all these, a few are preprocessed attributes generated from the census data, like percent_male_literacy, total_percent_literacy, total_percent_illiteracy, sex_ratio, etc.
The new attributes, like male_percent_literacy, female_percent_literacy and sex_ratio, are compared with each other to extract the impact of literacy on gender inequality. Figure 3 and Figure 4 show the extracted results of sex ratio values against female and male literacy, respectively. Fig. 3 Female literacy and sex ratio values. Fig. 4 Male literacy and sex ratio values. On the Y-axis, the female percent literacy values are shown in Figure 3, and the male percent literacy values are shown in Figure 4. Considering both results, female percent literacy is lower than male percent literacy in the district. The sex ratio values are higher for male percent literacy than for female percent literacy. The results clearly show that literacy is very important for managing the gender inequality of any region. ACKNOWLEDGEMENT: The authors are grateful to the department of NIC, Latur for providing all the basic data, and to WEKA for providing such a strong tool to extract and analyze knowledge from databases. CONCLUSION Knowledge extraction from database is becom", "title": "" }, { "docid": "120452d49d476366abcb52b86d8110b5", "text": "Many companies, such as credit card, insurance, banking and retail companies, require direct marketing. Data mining can help those institutions to set marketing goals. Data mining techniques have good prospects for reaching target audiences and improving the likelihood of response. In this work we have investigated two data mining techniques: the Naïve Bayes and the C4.5 decision tree algorithms. The goal of this work is to predict whether a client will subscribe to a term deposit. We also made a comparative study of the performance of those two algorithms. Publicly available UCI data is used to train and test the performance of the algorithms. Besides, we extract actionable knowledge from the decision tree that helps to make interesting and important decisions in the business area.", "title": "" } ]
[ { "docid": "a7317f06cf34e501cb169bdf805e7e34", "text": "It's natural to promote your best and brightest, especially when you think they may leave for greener pastures if you don't continually offer them new challenges and rewards. But promoting smart, ambitious young managers too quickly often robs them of the chance to develop the emotional competencies that come with time and experience--competencies like the ability to negotiate with peers, regulate emotions in times of crisis, and win support for change. Indeed, at some point in a manager's career--usually at the vice president level--raw talent and ambition become less important than the ability to influence and persuade, and that's the point at which the emotionally immature manager will lose his effectiveness. This article argues that delaying a promotion can sometimes be the best thing a senior executive can do for a junior manager. The inexperienced manager who is given time to develop his emotional competencies may be better prepared for the interpersonal demands of top-level leadership. The authors recommend that senior executives employ these strategies to help boost their protégés' people skills: sharpen the 360-degree feedback process, give managers cross-functional assignments to improve their negotiation skills, make the development of emotional competencies mandatory, make emotional competencies a performance measure, and encourage managers to develop informal learning partnerships with peers and mentors. Delaying a promotion can be difficult given the steadfast ambitions of many junior executives and the hectic pace of organizational life. It may mean going against the norm of promoting people almost exclusively on smarts and business results. It may also mean contending with the disappointment of an esteemed subordinate. But taking the time to build people's emotional competencies isn't an extravagance; it's critical to developing effective leaders.", "title": "" }, { "docid": "64139426292bc1744904a0758b6caed1", "text": "The quantity and complexity of available information is rapidly increasing. This potential information overload challenges the standard information retrieval models, as users find it increasingly difficult to find relevant information. We therefore propose a method that can utilize the potentially valuable knowledge contained in concept models such as ontologies, and thereby assist users in querying, using the terminology of the domain. The primary focus of this dissertation is similarity measures for use in ontology-based information retrieval. We aim at incorporating the information contained in ontologies by choosing a representation formalism where queries and objects in the information base are described using a lattice-algebraic concept language containing expressions that can be directly mapped into the ontology. Similarity between the description of the query and descriptions of the objects is calculated based on a nearness principle derived from the structure and relations of the ontology. This measure is then used to perform ontology-based query expansion. By doing so, we can replace semantic matching from direct reasoning over the ontology with numerical similarity calculation by means of a general aggregation principle The choice of the proposed similarity measure is guided by a set of properties aimed at ensuring the measures accordance with a set of distinctive structural qualities derived from the ontology. 
We furthermore empirically evaluate the proposed similarity measure by comparing the similarity ratings for pairs of concepts produced by the proposed measure, with the mean similarity ratings produced by humans for the same pairs.", "title": "" }, { "docid": "710e81da55d50271b55ac9a4f2d7f986", "text": "Although prior research has examined how individual difference factors are related to relationship initiation and formation over the Internet (e.g., online dating sites, social networking sites), little research has examined how dispositional factors are related to other aspects of online dating. The present research therefore sought to examine the relationship between several dispositional factors, such as Big-Five personality traits, self-esteem, rejection sensitivity, and attachment styles, and the use of online dating sites and online dating behaviors. Rejection sensitivity was the only dispositional variable predictive of use of online dating sites whereby those higher in rejection sensitivity are more likely to use online dating sites than those lower in rejection sensitivity. We also found that those higher in rejection sensitivity, those lower in conscientiousness, and men indicated being more likely to engage in potentially risky behaviors related to meeting an online dating partner face-to-face. Further research is needed to further explore the relationships between these dispositional factors and online dating behaviors. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "bb314530c796fbec6679a4a0cc6cd105", "text": "The undergraduate computer science curriculum is generally focused on skills and tools; most students are not exposed to much research in the field, and do not learn how to navigate the research literature. We describe how science fiction reviews were used as a gateway to research reviews. Students learn a little about current or recent research on a topic that stirs their imagination, and learn how to search for, read critically, and compare technical papers on a topic related their chosen science fiction book, movie, or TV show.", "title": "" }, { "docid": "371dad2a860f7106f10fd1f204afd3f2", "text": "Increased neuromuscular excitability with varying clinical and EMG features were also observed during KCl administration in both cases. The findings are discussed on the light of the membrane ionic gradients current theory.", "title": "" }, { "docid": "eaeccd0d398e0985e293d680d2265528", "text": "Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper we present that, even without learning, simple convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we randomly extract a set of normalized patches from the target region as filters, which define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps form together a global representation, which maintains the relative geometric positions of the local intensity patterns, and hence the inner geometric layout of the target is also well preserved. 
A simple and effective online strategy is adopted to update the representation, allowing it to robustly adapt to target appearance variations. Our convolution networks have surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on a large benchmark dataset with 50 challenging videos.", "title": "" }, { "docid": "10e41955aea6710f198744ac1f201d64", "text": "Current research on culture focuses on independence and interdependence and documents numerous East-West psychological differences, with an increasing emphasis placed on cognitive mediating mechanisms. Lost in this literature is a time-honored idea of culture as a collective process composed of cross-generationally transmitted values and associated behavioral patterns (i.e., practices). A new model of neuro-culture interaction proposed here addresses this conceptual gap by hypothesizing that the brain serves as a crucial site that accumulates effects of cultural experience, insofar as neural connectivity is likely modified through sustained engagement in cultural practices. Thus, culture is \"embrained,\" and moreover, this process requires no cognitive mediation. The model is supported in a review of empirical evidence regarding (a) collective-level factors involved in both production and adoption of cultural values and practices and (b) neural changes that result from engagement in cultural practices. Future directions of research on culture, mind, and the brain are discussed.", "title": "" }, { "docid": "d5b986cf02b3f9b01e5307467c1faec2", "text": "Most sentiment analysis approaches use as baseline a support vector machines (SVM) classifier with binary unigram weights. In this paper, we explore whether more sophisticated feature weighting schemes from Information Retrieval can enhance classification accuracy. We show that variants of the classictf.idf scheme adapted to sentiment analysis provide significant increases in accuracy, especially when using a sublinear function for term frequency weights and document frequency smoothing. The techniques are tested on a wide selection of data sets and produce the best accuracy to our knowledge.", "title": "" }, { "docid": "d39843f342646e4d338ab92bb7391d76", "text": "In this paper, a double-axis planar micro-fluxgate magnetic sensor and its front-end circuitry are presented. The ferromagnetic core material, i.e., the Vitrovac 6025 X, has been deposited on top of the coils with the dc-magnetron sputtering technique, which is a new type of procedure with respect to the existing solutions in the field of fluxgate sensors. This procedure allows us to obtain a core with the good magnetic properties of an amorphous ferromagnetic material, which is typical of a core with 25-mum thickness, but with a thickness of only 1 mum, which is typical of an electrodeposited core. The micro-Fluxgate has been realized in a 0.5- mum CMOS process using copper metal lines to realize the excitation coil and aluminum metal lines for the sensing coil, whereas the integrated interface circuitry for exciting and reading out the sensor has been realized in a 0.35-mum CMOS technology. Applying a triangular excitation current of 18 mA peak at 100 kHz, the magnetic sensitivity achieved is about 10 LSB/muT [using a 13-bit analog-to-digital converter (ADC)], which is suitable for detecting the Earth's magnetic field (plusmn60 muT), whereas the linearity error is 3% of the full scale. The maximum angle error of the sensor evaluating the Earth magnetic field is 2deg. 
The power consumption of the sensor is about 13.7 mW. The total power consumption of the system is about 90 mW.", "title": "" }, { "docid": "7d0d68f2dd9e09540cb2ba71646c21d2", "text": "INTRODUCTION: Back in time dentists used to place implants in locations with sufficient bone-dimensions only, with less regard to placement of final definitive restoration but most of the times, the placement of implant is not as accurate as intended and even a minor variation in comparison to ideal placement causes difficulties in fabrication of final prosthesis. The use of bone substitutes and membranes is now one of the standard therapeutic approaches. In order to accelerate healing of bone graft over the bony defect, numerous techniques utilizing platelet and fibrinogen concentrates have been introduced in the literature.. OBJECTIVES: This study was designed to evaluate the efficacy of using Autologous Concentrated Growth Factors (CGF) Enriched Bone Graft Matrix (Sticky Bone) and CGF-Enriched Fibrin Membrane in management of dehiscence defect around dental implant in narrow maxillary anterior ridge. MATERIALS AND METHODS: Eleven DIO implants were inserted in six adult patients presenting an upper alveolar ridge width of less than 4mm determined by cone beam computed tomogeraphy (CBCT). After implant placement, the resultant vertical labial dehiscence defect was augmented utilizing Sticky Bone and CGF-Enriched Fibrin Membrane. Three CBCTs were made, pre-operatively, immediately postoperatively and six-months post-operatively. The change in vertical defect size was calculated radiographically then statistically analyzed. RESULTS: Vertical dehiscence defect was sufficiently recovered in 5 implant-sites while in the other 6 sites it was decreased to mean value of 1.25 mm ± 0.69 SD, i.e the defect coverage in 6 implants occurred with mean value of 4.59 mm ±0.49 SD. Also the results of the present study showed that the mean of average implant stability was 59.89 mm ± 3.92 CONCLUSIONS: The combination of PRF mixed with CGF with bone graft (allograft) can increase the quality (density) of the newly formed bone and enhance the rate of new bone formation.", "title": "" }, { "docid": "c7d23af5ad79d9863e83617cf8bbd1eb", "text": "Insulin resistance has long been associated with obesity. More than 40 years ago, Randle and colleagues postulated that lipids impaired insulin-stimulated glucose use by muscles through inhibition of glycolysis at key points. However, work over the past two decades has shown that lipid-induced insulin resistance in skeletal muscle stems from defects in insulin-stimulated glucose transport activity. The steatotic liver is also resistant to insulin in terms of inhibition of hepatic glucose production and stimulation of glycogen synthesis. In muscle and liver, the intracellular accumulation of lipids-namely, diacylglycerol-triggers activation of novel protein kinases C with subsequent impairments in insulin signalling. This unifying hypothesis accounts for the mechanism of insulin resistance in obesity, type 2 diabetes, lipodystrophy, and ageing; and the insulin-sensitising effects of thiazolidinediones.", "title": "" }, { "docid": "bb8b6d2424ef7709aa1b89bc5d119686", "text": "We have applied a Long Short-Term Memory neural network to model S&P 500 volatility, incorporating Google domestic trends as indicators of the public mood and macroeconomic factors. 
In a held-out test set, our Long Short-Term Memory model gives a mean absolute percentage error of 24.2%, outperforming linear Ridge/Lasso and autoregressive GARCH benchmarks by at least 31%. This evaluation is based on an optimal observation and normalization scheme which maximizes the mutual information between domestic trends and daily volatility in the training set. Our preliminary investigation shows strong promise for better predicting stock behavior via deep learning and neural network models.", "title": "" }, { "docid": "8e8dcbc4eacf7484a44b4b6647fcfdb2", "text": "BACKGROUND\nWith the rapid accumulation of biological datasets, machine learning methods designed to automate data analysis are urgently needed. In recent years, so-called topic models that originated from the field of natural language processing have been receiving much attention in bioinformatics because of their interpretability. Our aim was to review the application and development of topic models for bioinformatics.\n\n\nDESCRIPTION\nThis paper starts with the description of a topic model, with a focus on the understanding of topic modeling. A general outline is provided on how to build an application in a topic model and how to develop a topic model. Meanwhile, the literature on application of topic models to biological data was searched and analyzed in depth. According to the types of models and the analogy between the concept of document-topic-word and a biological object (as well as the tasks of a topic model), we categorized the related studies and provided an outlook on the use of topic models for the development of bioinformatics applications.\n\n\nCONCLUSION\nTopic modeling is a useful method (in contrast to the traditional means of data reduction in bioinformatics) and enhances researchers' ability to interpret biological information. Nevertheless, due to the lack of topic models optimized for specific biological data, the studies on topic modeling in biological data still have a long and challenging road ahead. We believe that topic models are a promising method for various applications in bioinformatics research.", "title": "" }, { "docid": "b5b6fc6ce7690ae8e49e1951b08172ce", "text": "The output voltage derivative term associated with a PID controller injects significant noise in a dc-dc converter. This is mainly due to the parasitic resistance and inductance of the output capacitor. Particularly, during a large-signal transient, noise injection significantly degrades phase margin. Although noise characteristics can be improved by reducing the cutoff frequency of the low-pass filter associated with the voltage derivative, this degrades the closed-loop bandwidth. A formulation of a PID controller is introduced to replace the output voltage derivative with information about the capacitor current, thus reducing noise injection. It is shown that this formulation preserves the fundamental principle of a PID controller and incorporates a load current feedforward, as well as inductor current dynamics. This can be helpful to further improve bandwidth and phase margin. The proposed method is shown to be equivalent to a voltage-mode-controlled buck converter and a current-mode-controlled boost converter with a PID controller in the voltage feedback loop. 
A buck converter prototype is tested, and the proposed algorithm is implemented using a field-programmable gate array.", "title": "" }, { "docid": "77985effa998d08e75eaa117e07fc7a9", "text": "After two successful years of Event Nugget evaluation in the TAC KBP workshop, the third Event Nugget evaluation track for Knowledge Base Population(KBP) still attracts a lot of attention from the field. In addition to the traditional event nugget and coreference tasks, we introduce a new event sequencing task in English. The new task has brought more complex event relation reasoning to the current evaluations. In this paper we try to provide an overview on the task definition, data annotation, evaluation and trending research methods. We further discuss our efforts in creating the new event sequencing task and interesting research problems related to it.", "title": "" }, { "docid": "2269c84a2725605242790cf493425e0c", "text": "Tissue engineering aims to improve the function of diseased or damaged organs by creating biological substitutes. To fabricate a functional tissue, the engineered construct should mimic the physiological environment including its structural, topographical, and mechanical properties. Moreover, the construct should facilitate nutrients and oxygen diffusion as well as removal of metabolic waste during tissue regeneration. In the last decade, fiber-based techniques such as weaving, knitting, braiding, as well as electrospinning, and direct writing have emerged as promising platforms for making 3D tissue constructs that can address the abovementioned challenges. Here, we critically review the techniques used to form cell-free and cell-laden fibers and to assemble them into scaffolds. We compare their mechanical properties, morphological features and biological activity. We discuss current challenges and future opportunities of fiber-based tissue engineering (FBTE) for use in research and clinical practice.", "title": "" }, { "docid": "93f2fb12d61f3acb2eb31f9a2335b9c3", "text": "Cluster identification in large scale information network is a highly attractive issue in the network knowledge mining. Traditionally, community detection algorithms are designed to cluster object population based on minimizing the cutting edge number. Recently, researchers proposed the concept of higher-order clustering framework to segment network objects under the higher-order connectivity patterns. However, the essences of the numerous methodologies are focusing on mining the homogeneous networks to identify groups of objects which are closely related to each other, indicating that they ignore the heterogeneity of different types of objects and links in the networks. In this study, we propose an integrated framework of heterogeneous information network structure and higher-order clustering for mining the hidden relationship, which include three major steps: (1) Construct the heterogeneous network, (2) Convert HIN to Homogeneous network, and (3) Community detection.", "title": "" }, { "docid": "226d474f5d0278f81bcaf7203706486b", "text": "Human pose estimation is a well-known computer vision problem that receives intensive research interest. The reason for such interest is the wide range of applications that the successful estimation of human pose offers. Articulated pose estimation includes real time acquisition, analysis, processing and understanding of high dimensional visual information. Ensemble learning methods operating on hand-engineered features have been commonly used for addressing this task. 
Deep learning exploits representation learning methods to learn multiple levels of representations from raw input data, alleviating the need to hand-crafted features. Deep convolutional neural networks are achieving the state-of-the-art in visual object recognition, localization, detection. In this paper, the pose estimation task is formulated as an offset joint regression problem. The 3D joints positions are accurately detected from a single raw depth image using a deep convolutional neural networks model. The presented method relies on the utilization of the state-of-the-art data generation pipeline to generate large, realistic, and highly varied synthetic set of training images. Analysis and experimental results demonstrate the generalization performance and the real time successful application of the proposed method.", "title": "" }, { "docid": "49d5f6fdc02c777d42830bac36f6e7e2", "text": "Current tools for exploratory data analysis (EDA) require users to manually select data attributes, statistical computations and visual encodings. This can be daunting for large-scale, complex data. We introduce Foresight, a visualization recommender system that helps the user rapidly explore large high-dimensional datasets through “guideposts.” A guidepost is a visualization corresponding to a pronounced instance of a statistical descriptor of the underlying data, such as a strong linear correlation between two attributes, high skewness or concentration about the mean of a single attribute, or a strong clustering of values. For each descriptor, Foresight initially presents visualizations of the “strongest” instances, based on an appropriate ranking metric. Given these initial guideposts, the user can then look at “nearby” guideposts by issuing “guidepost queries” containing constraints on metric type, metric strength, data attributes, and data values. Thus, the user can directly explore the network of guideposts, rather than the overwhelming space of data attributes and visual encodings. Foresight also provides for each descriptor a global visualization of ranking-metric values to both help orient the user and ensure a thorough exploration process. Foresight facilitates interactive exploration of large datasets using fast, approximate sketching to compute ranking metrics. We also contribute insights on EDA practices of data scientists, summarizing results from an interview study we conducted to inform the design of Foresight.", "title": "" }, { "docid": "b261534c045299c1c3a0e0cc37caa618", "text": "Michelangelo (1475-1564) had a life-long interest in anatomy that began with his participation in public dissections in his early teens, when he joined the court of Lorenzo de' Medici and was exposed to its physician-philosopher members. By the age of 18, he began to perform his own dissections. His early anatomic interests were revived later in life when he aspired to publish a book on anatomy for artists and to collaborate in the illustration of a medical anatomy text that was being prepared by the Paduan anatomist Realdo Colombo (1516-1559). His relationship with Colombo likely began when Colombo diagnosed and treated him for nephrolithiasis in 1549. He seems to have developed gouty arthritis in 1555, making the possibility of uric acid stones a distinct probability. Recurrent urinary stones until the end of his life are well documented in his correspondence, and available documents imply that he may have suffered from nephrolithiasis earlier in life. 
His terminal illness with symptoms of fluid overload suggests that he may have sustained obstructive nephropathy. That this may account for his interest in kidney function is evident in his poetry and drawings. Most impressive in this regard is the mantle of the Creator in his painting of the Separation of Land and Water in the Sistine Ceiling, which is in the shape of a bisected right kidney. His use of the renal outline in a scene representing the separation of solids (Land) from liquid (Water) suggests that Michelangelo was likely familiar with the anatomy and function of the kidney as it was understood at the time.", "title": "" } ]
scidocsrr
4f1f89811a3891b2e81d9aae26096368
Compositional Falsification of Cyber-Physical Systems with Machine Learning Components
[ { "docid": "88a1549275846a4fab93f5727b19e740", "text": "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.", "title": "" } ]
[ { "docid": "f9d44eac4e07ed72e59d1aa194105615", "text": "Each human intestine harbours not only hundreds of trillions of bacteria but also bacteriophage particles, viruses, fungi and archaea, which constitute a complex and dynamic ecosystem referred to as the gut microbiota. An increasing number of data obtained during the last 10 years have indicated changes in gut bacterial composition or function in type 2 diabetic patients. Analysis of this ‘dysbiosis’ enables the detection of alterations in specific bacteria, clusters of bacteria or bacterial functions associated with the occurrence or evolution of type 2 diabetes; these bacteria are predominantly involved in the control of inflammation and energy homeostasis. Our review focuses on two key questions: does gut dysbiosis truly play a role in the occurrence of type 2 diabetes, and will recent discoveries linking the gut microbiota to host health be helpful for the development of novel therapeutic approaches for type 2 diabetes? Here we review how pharmacological, surgical and nutritional interventions for type 2 diabetic patients may impact the gut microbiota. Experimental studies in animals are identifying which bacterial metabolites and components act on host immune homeostasis and glucose metabolism, primarily by targeting intestinal cells involved in endocrine and gut barrier functions. We discuss novel approaches (e.g. probiotics, prebiotics and faecal transfer) and the need for research and adequate intervention studies to evaluate the feasibility and relevance of these new therapies for the management of type 2 diabetes.", "title": "" }, { "docid": "8f9f1bdc6f41cb5fd8b285a9c41526c1", "text": "The rivalry between the cathode-ray tube and flat-panel displays (FPDs) has intensified as performance of some FPDs now exceeds that of that entrenched leader in many cases. Besides the wellknown active-matrix-addressed liquid-crystal display, plasma, organic light-emitting diodes, and liquid-crystal-on-silicon displays are now finding new applications as the manufacturing, process engineering, materials, and cost structures become standardized and suitable for large markets.", "title": "" }, { "docid": "2ab2280b7821ae6ad27fff995fd36fe0", "text": "Recent years have seen the development of a satellite communication system called a high-throughput satellite (HTS), which enables large-capacity communication to cope with various communication demands. Current HTSs have a fixed allocation of communication resources and cannot flexibly change this allocation during operation. Thus, effectively allocating communication resources for communication demands with a bias is not possible. Therefore, technology is being developed to add flexibility to satellite communication systems, but there is no system analysis model available to quantitatively evaluate the flexibility performance. In this study, we constructed a system analysis model to quantitatively evaluate the flexibility of a satellite communication system and used it to analyze a satellite communication system equipped with a digital channelizer.", "title": "" }, { "docid": "9fc6244b3d0301a8486d44d58cf95537", "text": "The aim of this paper is to explore some, ways of linking ethnographic studies of work in context with the design of CSCW systems. It uses examples from an interdisciplinary collaborative project on air traffic control. Ethnographic methods are introduced, and applied to identifying the social organization of this cooperative work, and the use of instruments within it. 
On this basis some metaphors for the electronic representation of current manual practices are presented, and their possibilities and limitations are discussed.", "title": "" }, { "docid": "d57072f4ffa05618ebf055824e7ae058", "text": "Online social networks such as Friendster, MySpace, or the Facebook have experienced exponential growth in membership in recent years. These networks offer attractive means for interaction and communication, but also raise privacy and security concerns. In this study we survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution, and compare the survey data to information retrieved from the network itself. We look for underlying demographic or behavioral differences between the communities of the network’s members and non-members; we analyze the impact of privacy concerns on members’ behavior; we compare members’ stated attitudes with actual behavior; and we document the changes in behavior subsequent to privacy-related information exposure. We find that an individual’s privacy concerns are only a weak predictor of his membership to the network. Also privacy concerned individuals join the network and reveal great amounts of personal information. Some manage their privacy concerns by trusting their ability to control the information they provide and the external access to it. However, we also find evidence of members’ misconceptions about the online community’s actual size and composition, and about the visibility of members’ profiles.", "title": "" }, { "docid": "2e16ba9c13525dee6831d0a5c66a0671", "text": "1.1 Equivalent definitions of a stable distribution (p. 2); 1.2 Properties of stable random variables (p. 10); 1.3 Symmetric α-stable random variables (p. 20); 1.4 Series representation (p. 21); 1.5 Series representation of skewed α-stable random variables (p. 30); 1.6 Graphs and tables of α-stable densities and c.d.f.'s (p. 35); 1.7 Simulation (p. 41); 1.8 Exercises (p. 49)", "title": "" }, { "docid": "0eb3d3c33b62c04ed5d34fc3a38b5182", "text": "We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. 
We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.", "title": "" }, { "docid": "d16ec1f4c32267a07b1453d45bc8a6f2", "text": "Knowledge representation learning (KRL), exploited by various applications such as question answering and information retrieval, aims to embed the entities and relations contained by the knowledge graph into points of a vector space such that the semantic and structure information of the graph is well preserved in the representing space. However, the previous works mainly learned the embedding representations by treating each entity and relation equally which tends to ignore the inherent imbalance and heterogeneous properties existing in knowledge graph. By visualizing the representation results obtained from classic algorithm TransE in detail, we reveal the disadvantages caused by this homogeneous learning strategy and gain insight of designing policy for the homogeneous representation learning. In this paper, we propose a novel margin-based pairwise representation learning framework to be incorporated into many KRL approaches, with the method of introducing adaptivity according to the degree of knowledge heterogeneity. More specially, an adaptive margin appropriate to separate the real samples from fake samples in the embedding space is first proposed based on the sample’s distribution density, and then an adaptive weight is suggested to explicitly address the trade-off between the different contributions coming from the real and fake samples respectively. The experiments show that our Adaptive Weighted Margin Learning (AWML) framework can help the previous work achieve a better performance on real-world Knowledge Graphs Freebase and WordNet in the tasks of both link prediction and triplet classification.", "title": "" }, { "docid": "b6fdde5d6baeb546fd55c749af14eec1", "text": "Action recognition is an important research problem of human motion analysis (HMA). In recent years, 3D observation-based action recognition has been receiving increasing interest in the multimedia and computer vision communities, due to the recent advent of cost-effective sensors, such as depth camera Kinect. This work takes this one step further, focusing on early recognition of ongoing 3D human actions, which is beneficial for a large variety of time-critical applications, e.g., gesture-based human machine interaction, somatosensory games, and so forth. Our goal is to infer the class label information of 3D human actions with partial observation of temporally incomplete action executions. By considering 3D action data as multivariate time series (m.t.s.) synchronized to a shared common clock (frames), we propose a stochastic process called dynamic marked point process (DMP) to model the 3D action as temporal dynamic patterns, where both timing and strength information are captured. To achieve even more early and better accuracy of recognition, we also explore the temporal dependency patterns between feature dimensions. 
A probabilistic suffix tree is constructed to represent sequential patterns among features in terms of the variable-order Markov model (VMM). Our approach and several baselines are evaluated on five 3D human action datasets. Extensive results show that our approach achieves superior performance for early recognition of 3D human actions.", "title": "" }, { "docid": "9ea9b364e2123d8917d4a2f25e69e084", "text": "Movement observation and imagery are increasingly propagandized for motor rehabilitation. Both observation and imagery are thought to improve motor function through repeated activation of mental motor representations. However, it is unknown what stimulation parameters or imagery conditions are optimal for rehabilitation purposes. A better understanding of the mechanisms underlying movement observation and imagery is essential for the optimization of functional outcome using these training conditions. This study systematically assessed the corticospinal excitability during rest, observation, imagery and execution of a simple and a complex finger-tapping sequence in healthy controls using transcranial magnetic stimulation (TMS). Observation was conducted passively (without prior instructions) as well as actively (in order to imitate). Imagery was performed visually and kinesthetically. A larger increase in corticospinal excitability was found during active observation in comparison with passive observation and visual or kinesthetic imagery. No significant difference between kinesthetic and visual imagery was found. Overall, the complex task led to a higher corticospinal excitability in comparison with the simple task. In conclusion, the corticospinal excitability was modulated during both movement observation and imagery. Specifically, active observation of a complex motor task resulted in increased corticospinal excitability. Active observation may be more effective than imagery for motor rehabilitation purposes. In addition, the activation of mental motor representations may be optimized by varying task-complexity.", "title": "" }, { "docid": "f03f84bfa290fd3d1df6d9249cd9d8a6", "text": "We suggest a new technique to reduce energy consumption in the processor datapath without sacrificing performance by exploiting operand value locality at run time. Data locality is one of the major characteristics of video streams as well as other commonly used applications. We use a cache-like scheme to store a selective history of computation results, and the resultant reuse leads to power savings. The cache is indexed by the operands. Based on our model, an 8 to 128 entry execution cache reduces power consumption by 20% to 60%.", "title": "" }, { "docid": "647ff27223a27396ffc15c24c5ff7ef1", "text": "Mobile phones are increasingly used for security sensitive activities such as online banking or mobile payments. This usually involves some cryptographic operations, and therefore introduces the problem of securely storing the corresponding keys on the phone. In this paper we evaluate the security provided by various options for secure storage of key material on Android, using either Android's service for key storage or the key storage solution in the Bouncy Castle library. The security provided by the key storage service of the Android OS depends on the actual phone, as it may or may not make use of ARM TrustZone features. Therefore we investigate this for different models of phones.\n We find that the hardware-backed version of the Android OS service does offer device binding -- i.e. 
keys cannot be exported from the device -- though they could be used by any attacker with root access. This last limitation is not surprising, as it is a fundamental limitation of any secure storage service offered from the TrustZone's secure world to the insecure world. Still, some of Android's documentation is a bit misleading here.\n Somewhat to our surprise, we find that in some respects the software-only solution of Bouncy Castle is stronger than the Android OS service using TrustZone's capabilities, in that it can incorporate a user-supplied password to secure access to keys and thus guarantee user consent.", "title": "" }, { "docid": "8c28ec4f3dd42dc9d53fed2e930f7a77", "text": "If a theory of concept composition aspires to psychological plausibility, it may first need to address several preliminary issues associated with naturally occurring human concepts: content variability, multiple representational forms, and pragmatic constraints. Not only do these issues constitute a significant challenge for explaining individual concepts, they pose an even more formidable challenge for explaining concept compositions. How do concepts combine as their content changes, as different representational forms become active, and as pragmatic constraints shape processing? Arguably, concepts are most ubiquitous and important in compositions, relative to when they occur in isolation. Furthermore, entering into compositions may play central roles in producing the changes in content, form, and pragmatic relevance observed for individual concepts. Developing a theory of concept composition that embraces and illuminates these issues would not only constitute a significant contribution to the study of concepts, it would provide insight into the nature of human cognition. The human ability to construct and combine concepts is prolific. On the one hand, people acquire tens of thousands of concepts for diverse categories of settings, agents, objects, actions, mental states, bodily states, properties, relations, and so forth. On the other, people combine these concepts to construct infinite numbers of more complex concepts, as the open-ended phrases, sentences, and texts that humans produce effortlessly and ubiquitously illustrate. Major changes in the brain, the emergence of language, and new capacities for social cognition all probably played central roles in the evolution of these impressive conceptual abilities (e.g., Deacon 1997; Donald 1993; Tomasello 2009). In psychology alone, much research addresses human concepts (e.g., Barsalou 2012;Murphy 2002; Smith andMedin 1981) and concept composition (often referred to as conceptual combination; e.g., Costello and Keane 2000; Gagné and Spalding 2014; Hampton 1997; Hampton and Jönsson 2012;Medin and Shoben 1988;Murphy L.W. Barsalou (✉) Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, Scotland e-mail: [email protected] © The Author(s) 2017 J.A. Hampton and Y. Winter (eds.), Compositionality and Concepts in Linguistics and Psychology, Language, Cognition, and Mind 3, DOI 10.1007/978-3-319-45977-6_2 9 1988;Wisniewski 1997;Wu andBarsalou 2009).More generally across the cognitive sciences, much additional research addresses concepts and the broader construct of compositionality (for a recent collection, see Werning et al. 2012). 1 Background Framework A grounded approach to concepts. 
Here I assume that a concept is a dynamical distributed network in the brain coupled with a category in the environment or experience, with this network guiding situated interactions with the category’s instances (for further detail, see Barsalou 2003b, 2009, 2012, 2016a, 2016b). The concept of bicycle, for example, represents and guides interactions with the category of bicycles in the world. Across interactions with a category’s instances, a concept develops in memory by aggregating information from perception, action, and internal states. Thus, the concept of bicycle develops from aggregating multimodal information related to bicycles across the situations in which they are experienced. As a consequence of using selective attention to extract information relevant to the concept of bicycle from the current situation (e.g., a perceived bicycle), and then using integration mechanisms to integrate it with other bicycle information already in memory, aggregate information for the category develops continually (Barsalou 1999). As described later, however, background situational knowledge is also captured that plays important roles in conceptual processing (Barsalou 2016b, 2003b; Yeh and Barsalou 2006). Although learning plays central roles in establishing concepts, genetic and epigenetic processes constrain the features that can be represented for a concept, and also their integration in the brain’s association areas (e.g., Simmons and Barsalou 2003). For example, biologically-based neural circuits may anticipate the conceptual structure of evolutionarily important concepts, such as agents, minds, animals, foods, and tools. Once the conceptual system is in place, it supports virtually all other forms of cognitive activity, both online in the current situation and offline when representing the world in language, memory, and thought (e.g., Barsalou 2012, 2016a, 2016b). From the perspective developed here, when conceptual knowledge is needed for a task, concepts produce situation-specific simulations of the relevant category dynamically, where a simulation attempts to reenact the kind of neural and bodily states associated with processing the category. On needing conceptual knowledge about bicycles, for example, a small subset of the distributed bicycle network in the brain becomes active to simulate what it would be like to interact with an actual bicycle. This multimodal simulation provides anticipatory inferences about what is likely to be perceived further for the bicycle in the current situation, how to interact with it effectively, and what sorts of internal states might result (Barsalou 2009). The specific bicycle simulation that becomes active is one of infinitely many simulations that could be constructed dynamically from the bicycle network—the entire network never becomes fully active. Typically, simulations remain unconscious, at least to a large extent, while causally influencing cognition, affect, and 10 L.W. Barsalou", "title": "" }, { "docid": "cade9bc367068728bde84df622034b46", "text": "Authentication is an important topic in cloud computing security. That is why various authentication techniques in cloud environment are presented in this paper. This process serves as a protection against different sorts of attacks where the goal is to confirm the identity of a user and the user requests services from cloud servers. Multiple authentication technologies have been put forward so far that confirm user identity before giving the permit to access resources. 
Each of these technologies (username and password, multi-factor authentication, mobile trusted module, public key infrastructure, single sign-on, and biometric authentication) is at first described in here. The different techniques presented will then be compared. Keywords— Cloud computing, security, authentication, access control,", "title": "" }, { "docid": "f022871509e863f6379d76ba80afaa2f", "text": "Neuroeconomics seeks to gain a greater understanding of decision making by combining theoretical and methodological principles from the fields of psychology, economics, and neuroscience. Initial studies using this multidisciplinary approach have found evidence suggesting that the brain may be employing multiple levels of processing when making decisions, and this notion is consistent with dual-processing theories that have received extensive theoretical consideration in the field of cognitive psychology, with these theories arguing for the dissociation between automatic and controlled components of processing. While behavioral studies provide compelling support for the distinction between automatic and controlled processing in judgment and decision making, less is known if these components have a corresponding neural substrate, with some researchers arguing that there is no evidence suggesting a distinct neural basis. This chapter will discuss the behavioral evidence supporting the dissociation between automatic and controlled processing in decision making and review recent literature suggesting potential neural systems that may underlie these processes.", "title": "" }, { "docid": "f08b294c1107372d81c39f13ee2caa34", "text": "The success of deep learning methodologies draws a huge attention to their applications in medical image analysis. One of the applications of deep learning is in segmentation of retinal vessel and severity classification of diabetic retinopathy (DR) from retinal funduscopic image. This paper studies U-Net model performance in segmenting retinal vessel with different settings of dropout and batch normalization and use it to investigate the effect of retina vessel in DR classification. Pre-trained Inception V1 network was used to classify the DR severity. Two sets of retinal images, with and without the presence of vessel, were created from MESSIDOR dataset. The vessel extraction process was done using the best trained U-Net on DRIVE dataset. Final analysis showed that retinal vessel is a good feature in classifying both severe and early cases of DR stage.", "title": "" }, { "docid": "950d7d10b09f5d13e09692b2a4576c00", "text": "Prebiotics, as currently conceived of, are all carbohydrates of relatively short chain length. To be effective they must reach the cecum. Present evidence concerning the 2 most studied prebiotics, fructooligosaccharides and inulin, is consistent with their resisting digestion by gastric acid and pancreatic enzymes in vivo. However, the wide variety of new candidate prebiotics becoming available for human use requires that a manageable set of in vitro tests be agreed on so that their nondigestibility and fermentability can be established without recourse to human studies in every case. In the large intestine, prebiotics, in addition to their selective effects on bifidobacteria and lactobacilli, influence many aspects of bowel function through fermentation. Short-chain fatty acids are a major product of prebiotic breakdown, but as yet, no characteristic pattern of fermentation acids has been identified. 
Through stimulation of bacterial growth and fermentation, prebiotics affect bowel habit and are mildly laxative. Perhaps more importantly, some are a potent source of hydrogen in the gut. Mild flatulence is frequently observed by subjects being fed prebiotics; in a significant number of subjects it is severe enough to be unacceptable and to discourage consumption. Prebiotics are like other carbohydrates that reach the cecum, such as nonstarch polysaccharides, sugar alcohols, and resistant starch, in being substrates for fermentation. They are, however, distinctive in their selective effect on the microflora and their propensity to produce flatulence.", "title": "" }, { "docid": "4922c751dded99ca83e19d51eb5d647e", "text": "The viewpoint consistency constraint requires that the locations of all object features in an image must be consistent with projection from a single viewpoint. The application of this constraint is central to the problem of achieving robust recognition, since it allows the spatial information in an image to be compared with prior knowledge of an object's shape to the full degree of available image resolution. In addition, the constraint greatly reduces the size of the search space during model-based matching by allowing a few initial matches to provide tight constraints for the locations of other model features. Unfortunately, while simple to state, this constraint has seldom been effectively applied in model-based computer vision systems. This paper reviews the history of attempts to make use of the viewpoint consistency constraint and then describes a number of new techniques for applying it to the process of model-based recognition. A method is presented for probabilistically evaluating new potential matches to extend and refine an initial viewpoint estimate. This evaluation allows the model-based verification process to proceed without the expense of backtracking or search. It will be shown that the effective application of the viewpoint consistency constraint, in conjunction with bottom-up image description based upon principles of perceptual organization, can lead to robust three-dimensional object recognition from single gray-scale images.", "title": "" }, { "docid": "7bb17491cb10db67db09bc98aba71391", "text": "This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the optimization problem, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and nonhomogeneous terms.", "title": "" }, { "docid": "e56af4a3a8fbef80493d77b441ee1970", "text": "A new, systematic, simplified design procedure for quasi-Yagi antennas is presented. The design is based on the simple impedance matching among antenna components: i.e., transition, feed, and antenna. This new antenna design is possible due to the newly developed ultra-wideband transition. 
As design examples, wideband quasi- Yagi antennas are successfully designed and implemented in Ku- and Ka-bands with frequency bandwidths of 53.2% and 29.1%, and antenna gains of 4-5 dBi and 5.2-5.8 dBi, respectively. The design method can be applied to other balanced antennas and their arrays.", "title": "" } ]
scidocsrr
75e47e330359d1afc684d4cd17beae29
Depth camera tracking with contour cues
[ { "docid": "1782fc75827937c6b31951bfca997f48", "text": "Registering 2 or more range scans is a fundamental problem, with application to 3D modeling. While this problem is well addressed by existing techniques such as ICP when the views overlap significantly at a good initialization, no satisfactory solution exists for wide baseline registration. We propose here a novel approach which leverages contour coherence and allows us to align two wide baseline range scans with limited overlap from a poor initialization. Inspired by ICP, we maximize the contour coherence by building robust corresponding pairs on apparent contours and minimizing their distances in an iterative fashion. We use the contour coherence under a multi-view rigid registration framework, and this enables the reconstruction of accurate and complete 3D models from as few as 4 frames. We further extend it to handle articulations, and this allows us to model articulated objects such as human body. Experimental results on both synthetic and real data demonstrate the effectiveness and robustness of our contour coherence based registration approach to wide baseline range scans, and to 3D modeling.", "title": "" }, { "docid": "c64d5309c8f1e2254144215377b366b1", "text": "Since the initial comparison of Seitz et al. [48], the accuracy of dense multiview stereovision methods has been increasing steadily. A number of limitations, however, make most of these methods not suitable to outdoor scenes taken under uncontrolled imaging conditions. The present work consists of a complete dense multiview stereo pipeline which circumvents these limitations, being able to handle large-scale scenes without sacrificing accuracy. Highly detailed reconstructions are produced within very reasonable time thanks to two key stages in our pipeline: a minimum s-t cut optimization over an adaptive domain that robustly and efficiently filters a quasidense point cloud from outliers and reconstructs an initial surface by integrating visibility constraints, followed by a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization, and adaptive resolution. The pipeline has been tested over a wide range of scenes: from classic compact objects taken in a laboratory setting, to outdoor architectural scenes, landscapes, and cultural heritage sites. The accuracy of its reconstructions has also been measured on the dense multiview benchmark proposed by Strecha et al. [59], showing the results to compare more than favorably with the current state-of-the-art methods.", "title": "" }, { "docid": "5dac8ef81c7a6c508c603b3fd6a87581", "text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. 
Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.", "title": "" } ]
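The RGB-D benchmark passage above mentions automatic tools for evaluating the global pose error of SLAM systems. A common metric of that kind is the absolute trajectory error (ATE): rigidly align the estimated camera positions to the ground truth (Horn/Umeyama alignment without scale) and report the RMSE of the residuals. The sketch below assumes the two trajectories are already time-associated; it is a minimal illustration, not the benchmark's own evaluation script.

```python
import numpy as np

def ate_rmse(gt, est):
    """gt, est: (N, 3) arrays of time-associated camera positions."""
    gt, est = np.asarray(gt, dtype=float), np.asarray(est, dtype=float)
    mu_g, mu_e = gt.mean(axis=0), est.mean(axis=0)
    G, E = gt - mu_g, est - mu_e
    H = E.T @ G                                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T       # rotation mapping est onto gt
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1))))

# toy check: a rotated and translated copy of the ground truth gives ~0 error
rng = np.random.default_rng(1)
gt = np.cumsum(rng.normal(size=(100, 3)), axis=0)
theta = 0.4
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
est = gt @ Rz.T + np.array([2.0, -1.0, 0.5])
print(ate_rmse(gt, est))                          # ~0 up to numerical precision
```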
[ { "docid": "6bce7698f908721da38a3c6e6916a30e", "text": "For learning in big datasets, the classification performance of ELM might be low due to input samples are not extracted features properly. To address this problem, the hierarchical extreme learning machine (H-ELM) framework was proposed based on the hierarchical learning architecture of multilayer perceptron. H-ELM composes of two parts; the first is the unsupervised multilayer encoding part and the second part is the supervised feature classification part. H-ELM can give higher accuracy rate than of the traditional ELM. However, it still has to enhance its classification performance. Therefore, this paper proposes a new method namely as the extending hierarchical extreme learning machine (EH-ELM). For the extended supervisor part of EH-ELM, we have got an idea from the two-layers extreme learning machine. To evaluate the performance of EH-ELM, three different image datasets; Semeion, MNIST, and NORB, were studied. The experimental results show that EH-ELM achieves better performance than of H-ELM and the other multi-layer framework.", "title": "" }, { "docid": "dfc51ea36992f8afccfbf625e3016054", "text": "Monocular 3D object parsing is highly desirable in various scenarios including occlusion reasoning and holistic scene interpretation. We present a deep convolutional neural network (CNN) architecture to localize semantic parts in 2D image and 3D space while inferring their visibility states, given a single RGB image. Our key insight is to exploit domain knowledge to regularize the network by deeply supervising its hidden layers, in order to sequentially infer intermediate concepts associated with the final task. To acquire training data in desired quantities with ground truth 3D shape and relevant concepts, we render 3D object CAD models to generate large-scale synthetic data and simulate challenging occlusion configurations between objects. We train the network only on synthetic data and demonstrate state-of-the-art performances on real image benchmarks including an extended version of KITTI, PASCAL VOC, PASCAL3D+ and IKEA for 2D and 3D keypoint localization and instance segmentation. The empirical results substantiate the utility of our deep supervision scheme by demonstrating effective transfer of knowledge from synthetic data to real images, resulting in less overfitting compared to standard end-to-end training.", "title": "" }, { "docid": "3f95493016925d4f4a8a0d0a1bc8dc9d", "text": "A consequent pole, dual rotor, axial flux vernier permanent magnet (VPM) machine is developed to reduce magnet usage and increase torque density. Its end winding length is much shorter than that of regular VPM machines due to its toroidal winding configuration. The configurations and features of the proposed machine are discussed. Benefited from its vernier and consequent pole structure, this new machine exhibits much higher back-EMF and torque density than that of a regular dual rotor axial flux machine, while the magnet usage is halved. The influence of main design parameters, such as slot opening, ratio of inner to outer stator diameter, magnet thickness etc., on torque performance is analyzed based on the quasi-3-dimensional (quasi-3D) finite element analysis (FEA). The analyzing results are validated by real 3D FEA.", "title": "" }, { "docid": "43baeb87f1798d52399ba8c78ffa7fef", "text": "ECONOMISTS are frequently asked to measure the effects of an economic event on the value of firms. 
On the surface this seems like a difficult task, but a measure can be constructed easily using an event study. Using financial market data, an event study measures the impact of a specific event on the value of a firm. The usefulness of such a study comes from the fact that, given rationality in the marketplace, the effects of an event will be reflected immediately in security prices. Thus a measure of the event’s economic impact can be constructed using security prices observed over a relatively short time period. In contrast, direct productivity-related measures may require many months or even years of observation. The event study has many applications. In accounting and finance research, event studies have been applied to a variety of firm-specific and economy-wide events. Some examples include mergers and acquisitions, earnings announcements, issues of new debt or equity, and announcements of macroeconomic variables such as the trade deficit. However, applications in other fields are also abundant. For example, event studies are used in the field of law and economics to measure the impact on the value of a firm of a change in the regulatory environment (see G. William Schwert 1981) and in legal liability cases event studies are used to assess damages (see Mark Mitchell and Jeffry Netter 1994). In the majority of applications, the focus is the effect of an event on the price of a particular class of securities of the firm, most often common equity. In this paper the methodology is discussed in terms of applications that use common equity. However, event studies can be applied using debt securities with little modification. Event studies have a long history. Perhaps the first published study is James Dolley (1933). In this work, he examines the price effects of stock splits, studying nominal price changes at the time of the split. Using a sample of 95 splits from 1921 to 1931, he finds that the price in-", "title": "" }, { "docid": "e456ab6399ad84b575737d2a91597fdc", "text": "In the last two decades, the number of Higher Education Institutions (HEIs) has grown rapidly in India. Since most of these institutions are privately run, cut-throat competition has arisen among them to attract students for admission. This is why institutions focus on the strength (number) of students rather than on the quality of education. This paper presents a data mining application to generate predictive models for engineering students’ dropout management. Given new records of incoming students, the predictive model can produce a short, accurate prediction list identifying students who tend to need support from the student dropout program the most. The results show that the machine learning algorithm is able to establish an effective predictive model from the existing student dropout data. Keywords– Data Mining, Machine Learning Algorithms, Dropout Management and Predictive Models", "title": "" }, { "docid": "4ee62d81dcdf6e1dc9b06757668e0fc8", "text": "The frequent and protracted use of video games is no longer just a pleasant pastime: it can have serious personal, family and social consequences and could lead to mental and physical health problems. 
Although there is no official recognition of addiction to video games on the Internet as a mild mental health disorder, further scientific research is needed.", "title": "" }, { "docid": "bc018ef7cbcf7fc032fe8556016d08b1", "text": "This paper presents a simple, efficient, yet robust approach, named joint-scale local binary pattern (JLBP), for texture classification. In the proposed approach, the joint-scale strategy is developed first, and the neighborhoods of different scales are fused together by a simple arithmetic operation. Then, the descriptor is extracted from the mutual integration of the local patches based on the conventional local binary pattern (LBP). The proposed scheme can not only describe the micro-textures of a local structure, but also the macro-textures of a larger area because of the joint use of multiple scales. Further, motivated by the completed local binary pattern (CLBP) scheme, the completed JLBP (CJLBP) is presented to enhance its power. The proposed descriptor is evaluated in relation to other recent LBP-based patterns and non-LBP methods on popular benchmark texture databases, Outex, CUReT and UIUC. Generally, the experimental results show that the new method performs better than the state-of-the-art techniques.", "title": "" }, { "docid": "089808010a2925a7eaca71736fbabcaf", "text": "In this paper we describe two methods for estimating the motion parameters of an image sequence. For a sequence of N images, the global motion can be described by N independent motion models. On the other hand, in a sequence there exist as many as N(N-1)/2 pairwise relative motion constraints that can be solved for efficiently. In this paper we show how to linearly solve for consistent global motion models using this highly redundant set of constraints. In the first case, our method involves estimating all available pairwise relative motions and linearly fitting a global motion model to these estimates. In the second instance, we exploit the fact that algebraic (i.e. epipolar) constraints between various image pairs are all related to each other by the global motion model. This results in an estimation method that directly computes the motion of the sequence by using all possible algebraic constraints. Unlike using reprojection error, our optimisation method does not solve for the structure of points, resulting in a reduction of the dimensionality of the search space. Our algorithms are used for both 3D camera motion estimation and camera calibration. We provide real examples of both applications.", "title": "" }, { "docid": "3327a70849d7331bb1db01d99a3d0000", "text": "Queueing network models have proved to be cost-effective tools for analyzing modern computer systems. This tutorial paper presents the basic results using the operational approach, a framework which allows the analyst to test whether each assumption is met in a given system. The early sections describe the nature of queueing network models and their applications for calculating and predicting performance quantities. The basic performance quantities--such as utilizations, mean queue lengths, and mean response times--are defined, and operational relationships among them are derived. Following this, the concept of job flow balance is introduced and used to study asymptotic throughputs and response times. 
The concepts of state transition balance, one-step behavior, and homogeneity are then used to relate the proportions of time that each system state is occupied to the parameters of job demand and to device characteristics. Efficient methods for computing basic performance quantities are also described. Finally, the concept of decomposition is used to simplify analyses by replacing subsystems with equivalent devices. All concepts are illustrated liberally with examples.", "title": "" }, { "docid": "100c62f22feea14ac54c21408432c371", "text": "A modern approach to the FOREX currency exchange market requires support from computer algorithms to manage huge volumes of transactions and to find opportunities in a vast number of currency pairs traded daily. There are many well known techniques used by market participants on both FOREX and stock-exchange markets (i.e. fundamental and technical analysis), but nowadays AI-based techniques seem to play a key role in automated transaction and decision supporting systems. This paper presents a comprehensive analysis of Feed Forward Multilayer Perceptron (ANN) parameters and their impact on accurately forecasting the FOREX trend of a selected currency pair. The goal of this paper is to provide information on how to construct an ANN with particular respect to its parameters and training method to obtain the best possible forecasting capabilities. The ANN parameters investigated in this paper include the number of hidden layers, the number of neurons in hidden layers, the use of constant/bias neurons, and activation functions; the paper also reviews the impact of the training methods in the process of creating a reliable and valuable ANN, useful to predict the market trends. The experimental part has been performed on the historical data of the EUR/USD pair.", "title": "" }, { "docid": "49575576bc5a0b949c81b0275cbc5f41", "text": "From email to online banking, passwords are an essential component of modern internet use. Yet, users do not always have good password security practices, leaving their accounts vulnerable to attack. We conducted a study which combines self-report survey responses with measures of actual online behavior gathered from 134 participants over the course of six weeks. We find that people do tend to re-use each password on 1.7–3.4 different websites, they reuse passwords that are more complex, and mostly they tend to re-use passwords that they have to enter frequently. We also investigated whether self-report measures are accurate indicators of actual behavior, finding that though people understand password security, their self-reported intentions have only a weak correlation with reality. These findings suggest that users manage the challenge of having many passwords by choosing a complex password on a website where they have to enter it frequently in order to memorize that password, and then re-using that strong password across other websites.", "title": "" }, { "docid": "b21135f6c627d7dfd95ad68c9fc9cc48", "text": "New mothers can experience social exclusion, particularly during the early weeks when infants are solely dependent on their mothers. We used ethnographic methods to investigate whether technology plays a role in supporting new mothers. Our research identified two core themes: (1) the need to improve confidence as a mother; and (2) the need to be more than 'just' a mother. 
We reflect on these findings both in terms of those interested in designing applications and services for motherhood and also the wider CHI community.", "title": "" }, { "docid": "34c9a0b4f4fdf3d4ef0fbb97e750754b", "text": "Plants are affected by complex genome×environment×management interactions which determine phenotypic plasticity as a result of the variability of genetic components. Whereas great advances have been made in the cost-efficient and high-throughput analyses of genetic information and non-invasive phenotyping, the large-scale analyses of the underlying physiological mechanisms lag behind. The external phenotype is determined by the sum of the complex interactions of metabolic pathways and intracellular regulatory networks that is reflected in an internal, physiological, and biochemical phenotype. These various scales of dynamic physiological responses need to be considered, and genotyping and external phenotyping should be linked to the physiology at the cellular and tissue level. A high-dimensional physiological phenotyping across scales is needed that integrates the precise characterization of the internal phenotype into high-throughput phenotyping of whole plants and canopies. By this means, complex traits can be broken down into individual components of physiological traits. Since the higher resolution of physiological phenotyping by 'wet chemistry' is inherently limited in throughput, high-throughput non-invasive phenotyping needs to be validated and verified across scales to be used as proxy for the underlying processes. Armed with this interdisciplinary and multidimensional phenomics approach, plant physiology, non-invasive phenotyping, and functional genomics will complement each other, ultimately enabling the in silico assessment of responses under defined environments with advanced crop models. This will allow generation of robust physiological predictors also for complex traits to bridge the knowledge gap between genotype and phenotype for applications in breeding, precision farming, and basic research.", "title": "" }, { "docid": "fcca051539729b005271e4f96563538d", "text": "This paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. This approach is inspired by non-directive play therapy. The experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under specific conditions in order to guide the child or ask her questions about reasoning or affect related to the robot. This approach has been tested in a long-term study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. The children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and Affect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. 
They also expressed some interest in the robot, including, on occasion, affect.", "title": "" }, { "docid": "26ee1e5770a77d030b6230b8eef7e644", "text": "We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics. We propose a two stage approach for this task that yields good results for the premise selection task on the Mizar corpus while avoiding the handengineered features of existing state-of-the-art models. To our knowledge, this is the first time deep learning has been applied to theorem proving on a large scale.", "title": "" }, { "docid": "75233d6d94fec1f43fa02e8043470d4d", "text": "Out-of-autoclave (OoA) prepreg materials and methods have gained acceptance over the past decade because of the ability to produce autoclave-quality components under vacuum-bag-only (VBO) cure. To achieve low porosity and tight dimensional tolerances, VBO prepregs rely on specific microstructural features and processing techniques. Furthermore, successful cure is contingent upon appropriate material property and process parameter selection. In this article, we review the existing literature on VBO prepreg processing to summarize and synthesize knowledge on these issues. First, the context, development, and defining properties of VBO prepregs are presented. The key processing phenomena and the influence on quality are subsequently described. Finally, cost and environmental performance are considered. Throughout, we highlight key considerations for VBO prepreg processing and identify areas where further study is required.", "title": "" }, { "docid": "3d95e2db34f0b1f999833946a173de3d", "text": "Due to the rapid development of mobile social networks, mobile big data play an important role in providing mobile social users with various mobile services. However, as mobile big data have inherent properties, current MSNs face a challenge to provide mobile social user with a satisfactory quality of experience. Therefore, in this article, we propose a novel framework to deliver mobile big data over content-centric mobile social networks. At first, the characteristics and challenges of mobile big data are studied. Then the content-centric network architecture to deliver mobile big data in MSNs is presented, where each datum consists of interest packets and data packets, respectively. Next, how to select the agent node to forward interest packets and the relay node to transmit data packets are given by defining priorities of interest packets and data packets. Finally, simulation results show the performance of our framework with varied parameters.", "title": "" }, { "docid": "7eed84f959268599e1b724b0752f6aa5", "text": "Using the information systems lifecycle as a unifying framework, we review online communities research and propose a sequence for incorporating success conditions during initiation and development to increase their chances of becoming a successful community, one in which members participate actively and develop lasting relationships. Online communities evolve following distinctive lifecycle stages and recommendations for success are more or less relevant depending on the developmental stage of the online community. In addition, the goal of the online community under study determines the components to include in the development of a successful online community. 
Online community builders and researchers will benefit from this review of the conditions that help online communities succeed.", "title": "" }, { "docid": "c699fc9a25183e998aa5cdebac1c0a43", "text": "DNN-based cross-modal retrieval is a research hotspot to retrieve across different modalities as image and text, but existing methods often face the challenge of insufficient cross-modal training data. In single-modal scenario, similar problem is usually relieved by transferring knowledge from largescale auxiliary datasets (as ImageNet). Knowledge from such single-modal datasets is also very useful for cross-modal retrieval, which can provide rich general semantic information that can be shared across different modalities. However, it is challenging to transfer useful knowledge from single-modal (as image) source domain to cross-modal (as image/text) target domain. Knowledge in source domain cannot be directly transferred to both two different modalities in target domain, and the inherent cross-modal correlation contained in target domain provides key hints for cross-modal retrieval which should be preserved during transfer process. This paper proposes Cross-modal Hybrid Transfer Network (CHTN) with two subnetworks: Modalsharing transfer subnetwork utilizes the modality in both source and target domains as a bridge, for transferring knowledge to both two modalities simultaneously; Layer-sharing correlation subnetwork preserves the inherent cross-modal semantic correlation to further adapt to cross-modal retrieval task. Cross-modal data can be converted to common representation by CHTN for retrieval, and comprehensive experiments on 3 datasets show its effectiveness.", "title": "" }, { "docid": "5ab8a8f4991f7c701c51e32de7f97b36", "text": "Recent breakthroughs in computational capabilities and optimization algorithms have enabled a new class of signal processing approaches based on deep neural networks (DNNs). These algorithms have been extremely successful in the classification of natural images, audio, and text data. In particular, a special type of DNNs, called convolutional neural networks (CNNs) have recently shown superior performance for object recognition in image processing applications. This paper discusses modern training approaches adopted from the image processing literature and shows how those approaches enable significantly improved performance for synthetic aperture radar (SAR) automatic target recognition (ATR). In particular, we show how a set of novel enhancements to the learning algorithm, based on new stochastic gradient descent approaches, generate significant classification improvement over previously published results on a standard dataset called MSTAR.", "title": "" } ]
scidocsrr
e461a00ceb5f8937f05bf68665b57ec8
Rumor Identification and Belief Investigation on Twitter
[ { "docid": "0c886080015642aa5b7c103adcd2a81d", "text": "The problem of gauging information credibility on social networks has received considerable attention in recent years. Most previous work has chosen Twitter, the world's largest micro-blogging platform, as the premise of research. In this work, we shift the premise and study the problem of information credibility on Sina Weibo, China's leading micro-blogging service provider. With eight times more users than Twitter, Sina Weibo is more of a Facebook-Twitter hybrid than a pure Twitter clone, and exhibits several important characteristics that distinguish it from Twitter. We collect an extensive set of microblogs which have been confirmed to be false rumors based on information from the official rumor-busting service provided by Sina Weibo. Unlike previous studies on Twitter where the labeling of rumors is done manually by the participants of the experiments, the official nature of this service ensures the high quality of the dataset. We then examine an extensive set of features that can be extracted from the microblogs, and train a classifier to automatically detect the rumors from a mixed set of true information and false information. The experiments show that some of the new features we propose are indeed effective in the classification, and even the features considered in previous studies have different implications with Sina Weibo than with Twitter. To the best of our knowledge, this is the first study on rumor analysis and detection on Sina Weibo.", "title": "" }, { "docid": "860894abbbafdcb71178cb9ddd173970", "text": "Twitter is useful in a situation of disaster for communication, announcement, request for rescue and so on. On the other hand, it causes a negative by-product, spreading rumors. This paper describe how rumors have spread after a disaster of earthquake, and discuss how can we deal with them. We first investigated actual instances of rumor after the disaster. And then we attempted to disclose characteristics of those rumors. Based on the investigation we developed a system which detects candidates of rumor from twitter and then evaluated it. The result of experiment shows the proposed algorithm can find rumors with acceptable accuracy.", "title": "" } ]
[ { "docid": "45390290974f347d559cd7e28c33c993", "text": "Text ambiguity is one of the most interesting phenomenon in human communication and a difficult problem in Natural Language Processing (NLP). Identification of text ambiguities is an important task for evaluating the quality of text and uncovering its vulnerable points. There exist several types of ambiguity. In the present work we review and compare different approaches to ambiguity identification task. We also propose our own approach to this problem. Moreover, we present the prototype of a tool for ambiguity identification and measurement in natural language text. The tool is intended to support the process of writing high quality documents.", "title": "" }, { "docid": "0c67628fb24c8cbd4a8e49fb30ba625e", "text": "Modeling the evolution of topics with time is of great value in automatic summarization and analysis of large document collections. In this work, we propose a new probabilistic graphical model to address this issue. The new model, which we call the Multiscale Topic Tomography Model (MTTM), employs non-homogeneous Poisson processes to model generation of word-counts. The evolution of topics is modeled through a multi-scale analysis using Haar wavelets. One of the new features of the model is its modeling the evolution of topics at various time-scales of resolution, allowing the user to zoom in and out of the time-scales. Our experiments on Science data using the new model uncovers some interesting patterns in topics. The new model is also comparable to LDA in predicting unseen data as demonstrated by our perplexity experiments.", "title": "" }, { "docid": "fc62e84fc995deb1932b12821dfc0ada", "text": "As these paired Commentaries discuss, neuroscientists and architects are just beginning to collaborate, each bringing what they know about their respective fields to the task of improving the environment of research buildings and laboratories.", "title": "" }, { "docid": "e4405c71336ea13ccbd43aa84651dc60", "text": "Nurses are often asked to think about leadership, particularly in times of rapid change in healthcare, and where questions have been raised about whether leaders and managers have adequate insight into the requirements of care. This article discusses several leadership styles relevant to contemporary healthcare and nursing practice. Nurses who are aware of leadership styles may find this knowledge useful in maintaining a cohesive working environment. Leadership knowledge and skills can be improved through training, where, rather than having to undertake formal leadership roles without adequate preparation, nurses are able to learn, nurture, model and develop effective leadership behaviours, ultimately improving nursing staff retention and enhancing the delivery of safe and effective care.", "title": "" }, { "docid": "bc30cd034185df96d20174b9719f3177", "text": "Toxicity in online environments is a complex and a systemic issue. Esports communities seem to be particularly suffering from toxic behaviors. Especially in competitive esports games, negative behavior, such as harassment, can create barriers to players achieving high performance and can reduce players' enjoyment which may cause them to leave the game. The aim of this study is to review design approaches in six major esports games to deal with toxic behaviors and to investigate how players perceive and deal with toxicity in those games. 
Our preliminary findings from an interview study with 17 participants (3 female) from a university esports club show that players define toxicity as behaviors disrupt their morale and team dynamics, and participants are inclined to normalize negative behaviors and rationalize it as part of the competitive game culture. If they choose to take an action against toxic players, they are likely to ostracize toxic players.", "title": "" }, { "docid": "cbe1dc1b56716f57fca0977383e35482", "text": "This project explores a novel experimental setup towards building spoken, multi-modally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that targets the development of a dialogue system platform to explore verbal and nonverbal tutoring strategies in multiparty spoken interactions with embodied agents. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. With the participants sits a tutor that helps the participants perform the task and organizes and balances their interaction. Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies were coupled with manual annotations to build a situated model of the interaction based on the participants personalities, their temporally-changing state of attention, their conversational engagement and verbal dominance, and the way these are correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. At the end of this chapter we discuss the potential areas of research and developments this work opens and some of the challenges that lie in the road ahead.", "title": "" }, { "docid": "6a8ac2a2786371dcb043d92fa522b726", "text": "We propose a modular reinforcement learning algorithm which decomposes a Markov decision process into independent modules. Each module is trained using Sarsa(λ). We introduce three algorithms for forming global policy from modules policies, and demonstrate our results using a 2D grid world.", "title": "" }, { "docid": "f264d5b90dfb774e9ec2ad055c4ebe62", "text": "Automatic citation recommendation can be very useful for authoring a paper and is an AI-complete problem due to the challenge of bridging the semantic gap between citation context and the cited paper. It is not always easy for knowledgeable researchers to give an accurate citation context for a cited paper or to find the right paper to cite given context. To help with this problem, we propose a novel neural probabilistic model that jointly learns the semantic representations of citation contexts and cited papers. The probability of citing a paper given a citation context is estimated by training a multi-layer neural network. We implement and evaluate our model on the entire CiteSeer dataset, which at the time of this work consists of 10,760,318 citation contexts from 1,017,457 papers. We show that the proposed model significantly outperforms other stateof-the-art models in recall, MAP, MRR, and nDCG.", "title": "" }, { "docid": "57d162c64d93b28f6be1e086b5a1c134", "text": "Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources. 
In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time. FBS introduces small auxiliary connections to existing convolutional layers. In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels. FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs. We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. Experiments show that FBS can respectively provide 5× and 2× savings in compute on VGG-16 and ResNet-18, both with less than 0.6% top-5 accuracy loss.", "title": "" }, { "docid": "2d7892534b0e279a426e3fdbc3849454", "text": "What do we see when we glance at a natural scene and how does it change as the glance becomes longer? We asked naive subjects to report in a free-form format what they saw when looking at briefly presented real-life photographs. Our subjects received no specific information as to the content of each stimulus. Thus, our paradigm differs from previous studies where subjects were cued before a picture was presented and/or were probed with multiple-choice questions. In the first stage, 90 novel grayscale photographs were foveally shown to a group of 22 native-English-speaking subjects. The presentation time was chosen at random from a set of seven possible times (from 27 to 500 ms). A perceptual mask followed each photograph immediately. After each presentation, subjects reported what they had just seen as completely and truthfully as possible. In the second stage, another group of naive individuals was instructed to score each of the descriptions produced by the subjects in the first stage. Individual scores were assigned to more than a hundred different attributes. We show that within a single glance, much object- and scene-level information is perceived by human subjects. The richness of our perception, though, seems asymmetrical. Subjects tend to have a propensity toward perceiving natural scenes as being outdoor rather than indoor. The reporting of sensory- or feature-level information of a scene (such as shading and shape) consistently precedes the reporting of the semantic-level information. But once subjects recognize more semantic-level components of a scene, there is little evidence suggesting any bias toward either scene-level or object-level recognition.", "title": "" }, { "docid": "5cc26542d0f4602b2b257e19443839b3", "text": "Accurate performance evaluation of cloud computing resources is a necessary prerequisite for ensuring that quality of service parameters remain within agreed limits. In this paper, we employ both the analytical and simulation modeling to addresses the complexity of cloud computing systems. Analytical model is comprised of distinct functional submodels, the results of which are combined in an iterative manner to obtain the solution with required accuracy. 
Our models incorporate the important features of cloud centers such as batch arrival of user requests, resource virtualization, and realistic servicing steps, to obtain important performance metrics such as task blocking probability and total waiting time incurred on user requests. Also, our results reveal important insights for capacity planning to control delay of servicing users requests.", "title": "" }, { "docid": "4704f3ed7a5d5d9b244689019025730f", "text": "To address the need for fundamental universally valid definitions of exact bandwidth and quality factor (Q) of tuned antennas, as well as the need for efficient accurate approximate formulas for computing this bandwidth and Q, exact and approximate expressions are found for the bandwidth and Q of a general single-feed (one-port) lossy or lossless linear antenna tuned to resonance or antiresonance. The approximate expression derived for the exact bandwidth of a tuned antenna differs from previous approximate expressions in that it is inversely proportional to the magnitude |Z'/sub 0/(/spl omega//sub 0/)| of the frequency derivative of the input impedance and, for not too large a bandwidth, it is nearly equal to the exact bandwidth of the tuned antenna at every frequency /spl omega//sub 0/, that is, throughout antiresonant as well as resonant frequency bands. It is also shown that an appropriately defined exact Q of a tuned lossy or lossless antenna is approximately proportional to |Z'/sub 0/(/spl omega//sub 0/)| and thus this Q is approximately inversely proportional to the bandwidth (for not too large a bandwidth) of a simply tuned antenna at all frequencies. The exact Q of a tuned antenna is defined in terms of average internal energies that emerge naturally from Maxwell's equations applied to the tuned antenna. These internal energies, which are similar but not identical to previously defined quality-factor energies, and the associated Q are proven to increase without bound as the size of an antenna is decreased. Numerical solutions to thin straight-wire and wire-loop lossy and lossless antennas, as well as to a Yagi antenna and a straight-wire antenna embedded in a lossy dispersive dielectric, confirm the accuracy of the approximate expressions and the inverse relationship between the defined bandwidth and the defined Q over frequency ranges that cover several resonant and antiresonant frequency bands.", "title": "" }, { "docid": "82917c4e6fb56587cc395078c14f3bb7", "text": "We can leverage data and complex systems science to better understand society and human nature on a population scale through language — utilizing tools that include sentiment analysis, machine learning, and data visualization. Data-driven science and the sociotechnical systems that we use every day are enabling a transformation from hypothesis-driven, reductionist methodology to complex systems sciences. Namely, the emergence and global adoption of social media has rendered possible the real-time estimation of population-scale sentiment, with profound implications for our understanding of human behavior. Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture’s evolution through its texts using a “big data” lens. Given the growing assortment of sentiment measuring instruments, it is imperative to understand which aspects of sentiment dictionaries contribute to both their classification accuracy and their ability to provide richer understanding of texts. 
Here, we perform detailed, quantitative tests and qualitative assessments of 6 dictionary-based methods applied to 4 different corpora, and briefly examine a further 20 methods. We show that while inappropriate for sentences, dictionary-based methods are generally robust in their classification accuracy for longer texts. Most importantly they can aid understanding of texts with reliable and meaningful word shift graphs if (1) the dictionary covers a sufficiently large enough portion of a given text’s lexicon when weighted by word usage frequency; and (2) words are scored on a continuous scale. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories, forming patterns that are meaningful to us. By classifying the emotional arcs for a filtered subset of 4,803 stories from Project Gutenberg’s fiction collection, we find a set of six core trajectories which form the building blocks of complex narratives. We strengthen our findings by separately applying optimization, linear decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads. Within stories lie the core values of social behavior, rich with both strategies and proper protocol, which we can begin to study more broadly and systematically as a true reflection of culture. Of profound scientific interest will be the degree to which we can eventually understand the full landscape of human stories, and data driven approaches will play a crucial role. Finally, we utilize web-scale data from Twitter to study the limits of what social data can tell us about public health, mental illness, discourse around the protest movement of #BlackLivesMatter, discourse around climate change, and hidden networks. We conclude with a review of published works in complex systems that separately analyze charitable donations, the happiness of words in 10 languages, 100 years of daily temperature data across the United States, and Australian Rules Football games.", "title": "" }, { "docid": "70d874f2f919c6749c4105f35776532b", "text": "Effective processing of extremely large volumes of spatial data has led to many organizations employing distributed processing frameworks. Hadoop is one such open-source framework that is enjoying widespread adoption. In this paper, we detail an approach to indexing and performing key analytics on spatial data that is persisted in HDFS. Our technique differs from other approaches in that it combines spatial indexing, data load balancing, and data clustering in order to optimize performance across the cluster. In addition, our index supports efficient, random-access queries without requiring a MapReduce job; neither a full table scan, nor any MapReduce overhead is incurred when searching. This facilitates large numbers of concurrent query executions. We will also demonstrate how indexing and clustering positively impacts the performance of range and k-NN queries on large real-world datasets. 
The performance analysis will enable a number of interesting observations to be made on the behavior of spatial indexes and spatial queries in this distributed processing environment.", "title": "" }, { "docid": "b7d13c090e6d61272f45b1e3090f0341", "text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.", "title": "" }, { "docid": "865d7b8fae1cab739570229889177d58", "text": "This paper presents design and implementation of scalar control of induction motor. This method leads to be able to adjust the speed of the motor by control the frequency and amplitude of the stator voltage of induction motor, the ratio of stator voltage to frequency should be kept constant, which is called as V/F or scalar control of induction motor drive. This paper presents a comparative study of open loop and close loop V/F control induction motor. The V/F", "title": "" }, { "docid": "1f7fb5da093f0f0b69b1cc368cea0701", "text": "This tutorial focuses on the sense of touch within the context of a fully active human observer. It is intended for graduate students and researchers outside the discipline who seek an introduction to the rapidly evolving field of human haptics. The tutorial begins with a review of peripheral sensory receptors in skin, muscles, tendons, and joints. We then describe an extensive body of research on \"what\" and \"where\" channels, the former dealing with haptic perception of objects, surfaces, and their properties, and the latter with perception of spatial layout on the skin and in external space relative to the perceiver. We conclude with a brief discussion of other significant issues in the field, including vision-touch interactions, affective touch, neural plasticity, and applications.", "title": "" }, { "docid": "86f25f09b801d28ce32f1257a39ddd44", "text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. 
We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.", "title": "" }, { "docid": "8fc87a5f89792b3ea69833dcae90cd6e", "text": "The Joint Conference on Lexical and Computational Semantics (*SEM) each year hosts a shared task on semantic related topics. In its first edition held in 2012, the shared task was dedicated to resolving the scope and focus of negation. This paper presents the specifications, datasets and evaluation criteria of the task. An overview of participating systems is provided and their results are summarized.", "title": "" }, { "docid": "1c2cc1120129eca44443a637c0f06729", "text": "Direct volume rendering (DVR) is of increasing diagnostic value in the analysis of data sets captured using the latest medical imaging modalities. The deployment of DVR in everyday clinical work, however, has so far been limited. One contributing factor is that current transfer function (TF) models can encode only a small fraction of the user's domain knowledge. In this paper, we use histograms of local neighborhoods to capture tissue characteristics. This allows domain knowledge on spatial relations in the data set to be integrated into the TF. As a first example, we introduce partial range histograms in an automatic tissue detection scheme and present its effectiveness in a clinical evaluation. We then use local histogram analysis to perform a classification where the tissue-type certainty is treated as a second TF dimension. The result is an enhanced rendering where tissues with overlapping intensity ranges can be discerned without requiring the user to explicitly define a complex, multidimensional TF", "title": "" } ]
scidocsrr
aa0535d08ed619450e3cdee0e847b806
Approximate Note Transcription for the Improved Identification of Difficult Chords
[ { "docid": "e8933b0afcd695e492d5ddd9f87aeb81", "text": "This article proposes a method for the automatic transcription of the melody, bass line, and chords in polyphonic pop music. The method uses a frame-wise pitch-salience estimator as a feature extraction front-end. For the melody and bass-line transcription, this is followed by acoustic modeling of note events and musicological modeling of note transitions. The acoustic models include a model for the target notes (i.e., melody or bass notes) and a background model. The musicological model involves key estimation and note bigrams that determine probabilities for transitions between target notes. A transcription of the melody or the bass line is obtained using Viterbi search via the target and the background note models. The performance of the melody and the bass-line transcription is evaluated using approximately 8.5 hours of realistic polyphonic music. The chord transcription maps the pitch salience estimates to a pitch-class representation and uses trained chord models and chord-transition probabilities to produce a transcription consisting of major and minor triads. For chords, the evaluation material consists of the first eight Beatles albums. The method is computationally efficient and allows causal implementation, so it can process streaming audio. Transcription of music refers to the analysis of an acoustic music signal for producing a parametric representation of the signal. The representation may be a music score with a meticulous arrangement for each instrument or an approximate description of melody and chords in the piece, for example. The latter type of transcription is commonly used in commercial songbooks of pop music and is usually sufficient for musicians or music hobbyists to play the piece. On the other hand, more detailed transcriptions are often employed in classical music to preserve the exact arrangement of the composer.", "title": "" } ]
[ { "docid": "1f1c4c69a4c366614f0cc9ecc24365ba", "text": "BACKGROUND\nBurnout is a major issue among medical students. Its general characteristics are loss of interest in study and lack of motivation. A study of the phenomenon must extend beyond the university environment and personality factors to consider whether career choice has a role in the occurrence of burnout.\n\n\nMETHODS\nQuantitative, national survey (n = 733) among medical students, using a 12-item career motivation list compiled from published research results and a pilot study. We measured burnout by the validated Hungarian version of MBI-SS.\n\n\nRESULTS\nThe most significant career choice factor was altruistic motivation, followed by extrinsic motivations: gaining a degree, finding a job, accessing career opportunities. Lack of altruism was found to be a major risk factor, in addition to the traditional risk factors, for cynicism and reduced academic efficacy. Our study confirmed the influence of gender differences on both career choice motivations and burnout.\n\n\nCONCLUSION\nThe structure of career motivation is a major issue in the transformation of the medical profession. Since altruism is a prominent motivation for many women studying medicine, their entry into the profession in increasing numbers may reinforce its traditional character and act against the present trend of deprofessionalization.", "title": "" }, { "docid": "b2cd02622ec0fc29b54e567c7f10a935", "text": "Performance and high availability have become increasingly important drivers, amongst other drivers, for user retention in the context of web services such as social networks, and web search. Exogenic and/or endogenic factors often give rise to anomalies, making it very challenging to maintain high availability, while also delivering high performance. Given that service-oriented architectures (SOA) typically have a large number of services, with each service having a large set of metrics, automatic detection of anomalies is nontrivial. Although there exists a large body of prior research in anomaly detection, existing techniques are not applicable in the context of social network data, owing to the inherent seasonal and trend components in the time series data. To this end, we developed two novel statistical techniques for automatically detecting anomalies in cloud infrastructure data. Specifically, the techniques employ statistical learning to detect anomalies in both application, and system metrics. Seasonal decomposition is employed to filter the trend and seasonal components of the time series, followed by the use of robust statistical metrics – median and median absolute deviation (MAD) – to accurately detect anomalies, even in the presence of seasonal spikes. We demonstrate the efficacy of the proposed techniques from three different perspectives, viz., capacity planning, user behavior, and supervised learning. In particular, we used production data for evaluation, and we report Precision, Recall, and F-measure in each case.", "title": "" }, { "docid": "69d7ec6fe0f847cebe3d1d0ae721c950", "text": "Circularly polarized (CP) dielectric resonator antenna (DRA) subarrays have been numerically studied and experimentally verified. Elliptical CP DRA is used as the antenna element, which is excited by either a narrow slot or a probe. The elements are arranged in a 2 by 2 subarray configuration and are excited sequentially. In order to optimize the CP bandwidth, wideband feeding networks have been designed. 
Three different types of feeding network are studied; they are parallel feeding network, series feeding network and hybrid ring feeding network. For the CP DRA subarray with hybrid ring feeding network, the impedance matching bandwidth (S11<-10 dB) and 3-dB AR bandwidth achieved are 44% and 26% respectively", "title": "" }, { "docid": "7d603d154025f7160c0711bba92e1049", "text": "Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences.", "title": "" }, { "docid": "d131f4f22826a2083d35dfa96bf2206b", "text": "The ranking of n objects based on pairwise comparisons is a core machine learning problem, arising in recommender systems, ad placement, player ranking, biological applications and others. In many practical situations the true pairwise comparisons cannot be actively measured, but a subset of all n(n−1)/2 comparisons is passively and noisily observed. Optimization algorithms (e.g., the SVM) could be used to predict a ranking with fixed expected Kendall tau distance, while achieving an Ω(n) lower bound on the corresponding sample complexity. However, due to their centralized structure they are difficult to extend to online or distributed settings. In this paper we show that much simpler algorithms can match the same Ω(n) lower bound in expectation. Furthermore, if an average of O(n log(n)) binary comparisons are measured, then one algorithm recovers the true ranking in a uniform sense, while the other predicts the ranking more accurately near the top than the bottom. We discuss extensions to online and distributed ranking, with benefits over traditional alternatives.", "title": "" }, { "docid": "d59e21319b9915c2f6d7a8931af5503c", "text": "The effect of directional antenna elements in uniform circular arrays (UCAs) for direction of arrival (DOA) estimation is studied in this paper. While the vast majority of previous work assumes isotropic antenna elements or omnidirectional dipoles, this work demonstrates that improved DOA estimation accuracy and increased bandwidth is achievable with appropriately-designed directional antennas. The Cramer-Rao Lower Bound (CRLB) is derived for UCAs with directional antennas and is compared to isotropic antennas for 4- and 8-element arrays using a theoretical radiation pattern. The directivity that minimizes the CRLB is identified and microstrip patch antennas approximating the optimal theoretical gain pattern are designed to compare the resulting DOA estimation accuracy with a UCA using dipole antenna elements. Simulation results show improved DOA estimation accuracy and robustness using microstrip patch antennas as opposed to conventional dipoles. 
Additionally, it is shown that the bandwidth of a UCA for DOA estimation is limited only by the broadband characteristics of the directional antenna elements and not by the electrical size of the array as is the case with omnidirectional antennas.", "title": "" }, { "docid": "959a43b6b851a4a255466296efac7299", "text": "Technology in football has been debated by pundits, players and fans all over the world for the past decade. FIFA has recently commissioned the use of ‘Hawk-Eye’ and ‘Goal Ref’ goal line technology systems at the 2014 World Cup in Brazil. This paper gives an in depth evaluation of the possible technologies that could be used in football and determines the potential benefits and implications these systems could have on the officiating of football matches. The use of technology in other sports is analyzed to come to a conclusion as to whether officiating technology should be used in football. Will football be damaged by the loss of controversial incidents such as Frank Lampard’s goal against Germany at the 2010 World Cup? Will cost, accuracy and speed continue to prevent the use of officiating technology in football? Time will tell, but for now, any advancement in the use of technology in football will be met by some with discontent, whilst others see it as moving the sport into the 21 century.", "title": "" }, { "docid": "bc3924d12ee9d07a752fce80a67bb438", "text": "Unsupervised semantic segmentation in the time series domain is a much-studied problem due to its potential to detect unexpected regularities and regimes in poorly understood data. However, the current techniques have several shortcomings, which have limited the adoption of time series semantic segmentation beyond academic settings for three primary reasons. First, most methods require setting/learning many parameters and thus may have problems generalizing to novel situations. Second, most methods implicitly assume that all the data is segmentable, and have difficulty when that assumption is unwarranted. Finally, most research efforts have been confined to the batch case, but online segmentation is clearly more useful and actionable. To address these issues, we present an algorithm which is domain agnostic, has only one easily determined parameter, and can handle data streaming at a high rate. In this context, we test our algorithm on the largest and most diverse collection of time series datasets ever considered, and demonstrate our algorithm's superiority over current solutions. Furthermore, we are the first to show that semantic segmentation may be possible at superhuman performance levels.", "title": "" }, { "docid": "5b89c42eb7681aff070448bc22e501ea", "text": "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. 
We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm.", "title": "" }, { "docid": "f0dbe9ad4934ff4d5857cfc5a876bcb6", "text": "Although pricing fraud is an important issue for improving service quality of online shopping malls, research on automatic fraud detection has been limited. In this paper, we propose an unsupervised learning method based on a finite mixture model to identify pricing frauds. We consider two states, normal and fraud, for each item according to whether an item description is relevant to its price by utilizing the known number of item clusters. Two states of an observed item are modeled as hidden variables, and the proposed models estimate the state by using an expectation maximization (EM) algorithm. Subsequently, we suggest a special case of the proposed model, which is applicable when the number of item clusters is unknown. The experiment results show that the proposed models are more effective in identifying pricing frauds than the existing outlier detection methods. Furthermore, it is presented that utilizing the number of clusters is helpful in facilitating the improvement of pricing fraud detection", "title": "" }, { "docid": "49e91d22adb0cdeb014b8330e31f226d", "text": "Ghrelin increases non-REM sleep and decreases REM sleep in young men but does not affect sleep in young women. In both sexes, ghrelin stimulates the activity of the somatotropic and the hypothalamic-pituitary-adrenal (HPA) axis, as indicated by increased growth hormone (GH) and cortisol plasma levels. These two endocrine axes are crucially involved in sleep regulation. As various endocrine effects are age-dependent, aim was to study ghrelin's effect on sleep and secretion of GH and cortisol in elderly humans. Sleep-EEGs (2300-0700 h) and secretion profiles of GH and cortisol (2000-0700 h) were determined in 10 elderly men (64.0+/-2.2 years) and 10 elderly, postmenopausal women (63.0+/-2.9 years) twice, receiving 50 microg ghrelin or placebo at 2200, 2300, 0000, and 0100 h, in this single-blind, randomized, cross-over study. In men, ghrelin compared to placebo was associated with significantly more stage 2 sleep (placebo: 183.3+/-6.1; ghrelin: 221.0+/-12.2 min), slow wave sleep (placebo: 33.4+/-5.1; ghrelin: 44.3+/-7.7 min) and non-REM sleep (placebo: 272.6+/-12.8; ghrelin: 318.2+/-11.0 min). Stage 1 sleep (placebo: 56.9+/-8.7; ghrelin: 50.9+/-7.6 min) and REM sleep (placebo: 71.9+/-9.1; ghrelin: 52.5+/-5.9 min) were significantly reduced. Furthermore, delta power in men was significantly higher and alpha power and beta power were significantly lower after ghrelin than after placebo injection during the first half of night. In women, no effects on sleep were observed. In both sexes, ghrelin caused comparable increases and secretion patterns of GH and cortisol. 
In conclusion, ghrelin affects sleep in elderly men but not women resembling findings in young subjects.", "title": "" }, { "docid": "9f5ab2f666eb801d4839fcf8f0293ceb", "text": "In recent years, Wireless Sensor Networks (WSNs) have emerged as a new powerful technology used in many applications such as military operations, surveillance system, Intelligent Transport Systems (ITS) etc. These networks consist of many Sensor Nodes (SNs), which are not only used for monitoring but also capturing the required data from the environment. Most of the research proposals on WSNs have been developed keeping in view of minimization of energy during the process of extracting the essential data from the environment where SNs are deployed. The primary reason for this is the fact that the SNs are operated on battery which discharges quickly after each operation. It has been found in literature that clustering is the most common technique used for energy aware routing in WSNs. The most popular protocol for clustering in WSNs is Low Energy Adaptive Clustering Hierarchy (LEACH) which is based on adaptive clustering technique. This paper provides the taxonomy of various clustering and routing techniques in WSNs based upon metrics such as power management, energy management, network lifetime, optimal cluster head selection, multihop data transmission etc. A comprehensive discussion is provided in the text highlighting the relative advantages and disadvantages of many of the prominent proposals in this category which helps the designers to select a particular proposal based upon its merits over the others. & 2012 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a7e6a2145b9ae7ca2801a3df01f42f5e", "text": "The aim of this systematic review was to compare the clinical performance and failure modes of teeth restored with intra-radicular retainers. A search was performed on PubMed/Medline, Central and ClinicalTrials databases for randomized clinical trials comparing clinical behavior and failures of at least two types of retainers. From 341 detected papers, 16 were selected for full-text analysis, of which 9 met the eligibility criteria. A manual search added 2 more studies, totalizing 11 studies that were included in this review. Evaluated retainers were fiber (prefabricated and customized) and metal (prefabricated and cast) posts, and follow-up ranged from 6 months to 10 years. Most studies showed good clinical behavior for evaluated intra-radicular retainers. Reported survival rates varied from 71 to 100% for fiber posts and 50 to 97.1% for metal posts. Studies found no difference in the survival among different metal posts and most studies found no difference between fiber and metal posts. Two studies also showed that remaining dentine height, number of walls and ferrule increased the longevity of the restored teeth. Failures of fiber posts were mainly due to post loss of retention, while metal post failures were mostly related to root fracture, post fracture and crown and/or post loss of retention. In conclusion, metal and fiber posts present similar clinical behavior at short to medium term follow-up. Remaining dental structure and ferrule increase the survival of restored pulpless teeth. Studies with longer follow-up are needed.", "title": "" }, { "docid": "c3539090bef61fcfe4a194058a61d381", "text": "Real-time environment monitoring and analysis is an important research area of Internet of Things (IoT). 
Understanding the behavior of the complex ecosystem requires analysis of detailed observations of an environment over a range of different conditions. One such example in urban areas includes the study of tree canopy cover over the microclimate environment using heterogeneous sensor data. There are several challenges that need to be addressed, such as obtaining reliable and detailed observations over monitoring area, detecting unusual events from data, and visualizing events in real-time in a way that is easily understandable by the end users (e.g., city councils). In this regard, we propose an integrated geovisualization framework, built for real-time wireless sensor network data on the synergy of computational intelligence and visual methods, to analyze complex patterns of urban microclimate. A Bayesian maximum entropy-based method and a hyperellipsoidal model-based algorithm have been build in our integrated framework to address above challenges. The proposed integrated framework was verified using the dataset from an indoor and two outdoor network of IoT devices deployed at two strategically selected locations in Melbourne, Australia. The data from these deployments are used for evaluation and demonstration of these components’ functionality along with the designed interactive visualization components.", "title": "" }, { "docid": "e2134985f8067efe41935adff8ef2150", "text": "In this paper, a high efficiency and high power factor single-stage balanced forward-flyback converter merging a foward and flyback converter topologies is proposed. The conventional AC/DC flyback converter can achieve a good power factor but it has a high offset current through the transformer magnetizing inductor, which results in a large core loss and low power conversion efficiency. And, the conventional forward converter can achieve the good power conversion efficiency with the aid of the low core loss but the input current dead zone near zero cross AC input voltage deteriorates the power factor. On the other hand, since the proposed converter can operate as the forward and flyback converters during switch on and off periods, respectively, it cannot only perform the power transfer during an entire switching period but also achieve the high power factor due to the flyback operation. Moreover, since the current balanced capacitor can minimize the offset current through the transformer magnetizing inductor regardless of the AC input voltage, the core loss and volume of the transformer can be minimized. Therefore, the proposed converter features a high efficiency and high power factor. To confirm the validity of the proposed converter, theoretical analysis and experimental results from a prototype of 24W LED driver are presented.", "title": "" }, { "docid": "956be237e0b6e7bafbf774d56a8841d2", "text": "Wireless sensor networks (WSNs) will play an active role in the 21th Century Healthcare IT to reduce the healthcare cost and improve the quality of care. The protection of data confidentiality and patient privacy are the most critical requirements for the ubiquitous use of WSNs in healthcare environments. This requires a secure and lightweight user authentication and access control. Symmetric key based access control is not suitable for WSNs in healthcare due to dynamic network topology, mobility, and stringent resource constraints. In this paper, we propose a secure, lightweight public key based security scheme, Mutual Authentication and Access Control based on Elliptic curve cryptography (MAACE). 
MAACE is a mutual authentication protocol where a healthcare professional can authenticate to an accessed node (a PDA or medical sensor) and vice versa. This is to ensure that medical data is not exposed to an unauthorized person. On the other hand, it ensures that medical data sent to healthcare professionals did not originate from a malicious node. MAACE is more scalable and requires less memory compared to symmetric key-based schemes. Furthermore, it is much more lightweight than other public key-based schemes. Security analysis and performance evaluation results are presented and compared to existing schemes to show advantages of the proposed scheme.", "title": "" }, { "docid": "8b641a8f504b550e1eed0dca54bfbe04", "text": "Overlay architectures are programmable logic systems that are compiled on top of a traditional FPGA. These architectures give designers flexibility, and have a number of benefits, such as being designed or optimized for specific application domains, making it easier or more efficient to implement solutions, being independent of platform, allowing the ability to do partial reconfiguration regardless of the underlying architecture, and allowing compilation without using vendor tools, in some cases with fully open source tool chains. This thesis describes the implementation of two FPGA overlay architectures, ZUMA and CARBON. These overlay implementations include optimizations to reduce area and increase speed which may be applicable to many other FPGAs and also ASIC systems. ZUMA is a fine-grain overlay which resembles a modern commercial FPGA, and is compatible with the VTR open source compilation tools. The implementation includes a number of novel features tailored to efficient FPGA implementation, including the utilization of reprogrammable LUTRAMs, a novel two-stage local routing crossbar, and an area efficient configuration controller. CARBON", "title": "" }, { "docid": "2efb10a430e001acd201a0b16ab74836", "text": "As the cost of human full genome sequencing continues to fall, we will soon witness a prodigious amount of human genomic data in the public cloud. To protect the confidentiality of the genetic information of individuals, the data has to be encrypted at rest. On the other hand, encryption severely hinders the use of this valuable information, such as Genome-wide Range Query (GRQ), in medical/genomic research. While the problem of secure range query on outsourced encrypted data has been extensively studied, the current schemes are far from practical deployment in terms of efficiency and scalability due to the data volume in human genome sequencing. In this paper, we investigate the problem of secure GRQ over human raw aligned genomic data in a third-party outsourcing model. Our solution contains a novel secure range query scheme based on multi-keyword symmetric searchable encryption (MSSE). The proposed scheme incurs minimal ciphertext expansion and computation overhead. We also present a hierarchical GRQ-oriented secure index structure tailored for efficient and large-scale genomic data lookup in the cloud while preserving the query privacy. 
Our experiment on real human genomic data shows that a secure GRQ request with range size 100,000 over more than 300 million encrypted short reads takes less than 3 minutes, which is orders of magnitude faster than existing solutions.", "title": "" }, { "docid": "b7617b5dd2a6f392f282f6a34f5b6751", "text": "In the semiconductor market, the trend of packaging for die stacking technology moves to high density with thinner chips and higher capacity of memory devices. Moreover, the wafer sawing process is becoming more important for thin wafer, because its process speed tends to affect sawn quality and yield. ULK (Ultra low-k) device could require laser grooving application to reduce the stress during wafer sawing. Furthermore under 75um-thick thin low-k wafer is not easy to use the laser grooving application. So, UV laser dicing technology that is very useful tool for Si wafer was selected as full cut application, which has been being used on low-k wafer as laser grooving method.", "title": "" }, { "docid": "c16ff028e77459867eed4c2b9c1f44c6", "text": "Neuroimage analysis usually involves learning thousands or even millions of variables using only a limited number of samples. In this regard, sparse models, e.g. the lasso, are applied to select the optimal features and achieve high diagnosis accuracy. The lasso, however, usually results in independent unstable features. Stability, a manifest of reproducibility of statistical results subject to reasonable perturbations to data and the model (Yu 2013), is an important focus in statistics, especially in the analysis of high dimensional data. In this paper, we explore a nonnegative generalized fused lasso model for stable feature selection in the diagnosis of Alzheimer’s disease. In addition to sparsity, our model incorporates two important pathological priors: the spatial cohesion of lesion voxels and the positive correlation between the features and the disease labels. To optimize the model, we propose an efficient algorithm by proving a novel link between total variation and fast network flow algorithms via conic duality. Experiments show that the proposed nonnegative model performs much better in exploring the intrinsic structure of data via selecting stable features compared with other state-of-the-arts. Introduction Neuroimage analysis is challenging due to its high feature dimensionality and data scarcity. Sparse models such as the lasso (Tibshirani 1996) have gained great reputation in statistics and machine learning, and they have been applied to the analysis of such high dimensional data by exploiting the sparsity property in the absence of abundant data. As a major result, automatic selection of relevant variables/features by such sparse formulation achieves promising performance. For example, in (Liu, Zhang, and Shen 2012), the lasso model was applied to the diagnosis of Alzheimer’s disease (AD) and showed better performance than the support vector machine (SVM), which is one of the state-of-the-arts in brain image classification. However, in statistics, it is known that the lasso does not always provide interpretable results because of its instability (Yu 2013). “Stability” here means the reproducibility of statistical results subject to reasonable perturbations to data and Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. the model. (These perturbations include the often used Jacknife, bootstrap and cross-validation.) 
This unstable behavior of the lasso model is critical in high dimensional data analysis. The resulting irreproducibility of the feature selection is especially undesirable in neuroimage analysis/diagnosis. However, unlike the problems such as registration and classification, the stability issue of feature selection is much less studied in this field. In this paper we propose a model to induce more stable feature selection from high dimensional brain structural Magnetic Resonance Imaging (sMRI) images. Besides sparsity, the proposed model harnesses two important additional pathological priors in brain sMRI: (i) the spatial cohesion of lesion voxels (via inducing fusion terms) and (ii) the positive correlation between the features and the disease labels. The correlation prior is based on the observation that in many brain image analysis problems (such as AD, frontotemporal dementia, corticobasal degeneration, etc), there exist strong correlations between the features and the labels. For example, gray matter of AD is degenerated/atrophied. Therefore, the gray matter values (indicating the volume) are positively correlated with the cognitive scores or disease labels {-1,1}. That is, the less gray matter, the lower the cognitive score. Accordingly, we propose nonnegative constraints on the variables to enforce the prior and name the model as “non-negative Generalized Fused Lasso” (nGFL). It extends the popular generalized fused lasso and enables it to explore the intrinsic structure of data via selecting stable features. To measure feature stability, we introduce the “Estimation Stability” recently proposed in (Yu 2013) and the (multi-set) Dice coefficient (Dice 1945). Experiments demonstrate that compared with existing models, our model selects much more stable (and pathological-prior consistent) voxels. It is worth mentioning that the non-negativeness per se is a very important prior of many practical problems, e.g. (Lee and Seung 1999). Although nGFL is proposed to solve the diagnosis of AD in this work, the model can be applied to more general problems. Incorporating these priors makes the problem novel w.r.t the lasso or generalized fused lasso from an optimization standpoint. Although off-the-shelf convex solvers such as CVX (Grant and Boyd 2013) can be applied to solve the optimization, it hardly scales to high-dimensional problems in feasible time. In this regard, we propose an efficient algorithm.", "title": "" } ]
scidocsrr
59708a44c0315cb1c93bedc32e054a5f
Supporting Drivers in Keeping Safe Speed and Safe Distance: The SASPENCE Subproject Within the European Framework Programme 6 Integrating Project PReVENT
[ { "docid": "c1956e4c6b732fa6a420d4c69cfbe529", "text": "To improve the safety and comfort of a human-machine system, the machine needs to ‘know,’ in a real time manner, the human operator in the system. The machine’s assistance to the human can be fine tuned if the machine is able to sense the human’s state and intent. Related to this point, this paper discusses issues of human trust in automation, automation surprises, responsibility and authority. Examples are given of a driver assistance system for advanced automobile.", "title": "" }, { "docid": "7cef2fac422d9fc3c3ffbc130831b522", "text": "Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations , \" (Received 00 Month 200x; In final form 00 Month 200x) This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out safer, cheaper, and more manageable. In the VEHIL laboratory a full-scale ADAS-equipped vehicle is set up in a hardware-in-the-loop simulation environment, where a chassis dynamometer is used to emulate the road interaction and robot vehicles to represent other traffic. In this controlled environment the performance and dependability of an ADAS is tested to great accuracy and reliability. The working principle and the added value of VEHIL are demonstrated with test results of an adaptive cruise control and a forward collision warning system. Based on the 'V' diagram, the position of VEHIL in the development process of ADASs is illustrated.", "title": "" } ]
[ { "docid": "9f3404d9e2941a1e0987f215e6d82a54", "text": "Augmented reality technologies allow people to view and interact with virtual objects that appear alongside physical objects in the real world. For augmented reality applications to be effective, users must be able to accurately perceive the intended real world location of virtual objects. However, when creating augmented reality applications, developers are faced with a variety of design decisions that may affect user perceptions regarding the real world depth of virtual objects. In this paper, we conducted two experiments using a perceptual matching task to understand how shading, cast shadows, aerial perspective, texture, dimensionality (i.e., 2D vs. 3D shapes) and billboarding affected participant perceptions of virtual object depth relative to real world targets. The results of these studies quantify trade-offs across virtual object designs to inform the development of applications that take advantage of users' visual abilities to better blend the physical and virtual world.", "title": "" }, { "docid": "9de44948e28892190f461199a1d33935", "text": "As more and more data is provided in RDF format, storing huge amounts of RDF data and efficiently processing queries on such data is becoming increasingly important. The first part of the lecture will introduce state-of-the-art techniques for scalably storing and querying RDF with relational systems, including alternatives for storing RDF, efficient index structures, and query optimization techniques. As centralized RDF repositories have limitations in scalability and failure tolerance, decentralized architectures have been proposed. The second part of the lecture will highlight system architectures and strategies for distributed RDF processing. We cover search engines as well as federated query processing, highlight differences to classic federated database systems, and discuss efficient techniques for distributed query processing in general and for RDF data in particular. Moreover, for the last part of this chapter, we argue that extracting knowledge from the Web is an excellent showcase – and potentially one of the biggest challenges – for the scalable management of uncertain data we have seen so far. The third part of the lecture is thus intended to provide a close-up on current approaches and platforms to make reasoning (e.g., in the form of probabilistic inference) with uncertain RDF data scalable to billions of triples. 1 RDF in centralized relational databases The increasing availability and use of RDF-based information in the last decade has led to an increasing need for systems that can store RDF and, more importantly, efficiencly evaluate complex queries over large bodies of RDF data. The database community has developed a large number of systems to satisfy this need, partly reusing and adapting well-established techniques from relational databases [122]. The majority of these systems can be grouped into one of the following three classes: 1. Triple stores that store RDF triples in a single relational table, usually with additional indexes and statistics, 2. vertically partitioned tables that maintain one table for each property, and 3. Schema-specific solutions that store RDF in a number of property tables where several properties are jointly represented. 
In the following sections, we will describe each of these classes in detail, focusing on two important aspects of these systems: storage and indexing, i.e., how are RDF triples mapped to relational tables and which additional support structures are created; and query processing, i.e., how SPARQL queries are mapped to SQL, which additional operators are introduced, and how efficient execution plans for queries are determined. In addition to these purely relational solutions, a number of specialized RDF systems has been proposed that built on nonrelational technologies, we will briefly discuss some of these systems. Note that we will focus on SPARQL processing, which is not aware of underlying RDF/S or OWL schema and cannot exploit any information about subclasses; this is usually done in an additional layer on top. We will explain especially the different storage variants with the running example from Figure 1, some simple RDF facts from a university scenario. Here, each line corresponds to a fact (triple, statement), with a subject (usually a resource), a property (or predicate), and an object (which can be a resource or a constant). Even though resources are represented by URIs in RDF, we use string constants here for simplicity. A collection of RDF facts can also be represented as a graph. Here, resources (and constants) are nodes, and for each fact <s,p,o>, an edge from s to o is added with label p. Figure 2 shows the graph representation for the RDF example from Figure 1. <Katja,teaches,Databases> <Katja,works_for,MPI Informatics> <Katja,PhD_from,TU Ilmenau> <Martin,teaches,Databases> <Martin,works_for,MPI Informatics> <Martin,PhD_from,Saarland University> <Ralf,teaches,Information Retrieval> <Ralf,PhD_from,Saarland University> <Ralf,works_for,Saarland University> <Saarland University,located_in,Germany> <MPI Informatics,located_in,Germany> Fig. 1. Running example for RDF data", "title": "" }, { "docid": "5288f4bbc2c9b8531042ce25b8df05b0", "text": "Existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated Past contents and untranslated Future contents, which are modeled by two additional recurrent layers. The Past and Future contents are fed to both the attention model and the decoder states, which provides Neural Machine Translation (NMT) systems with the knowledge of translated and untranslated contents. Experimental results show that the proposed approach significantly improves the performance in Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate.", "title": "" }, { "docid": "4dbbcaf264cc9beda8644fa926932d2e", "text": "It is relatively stress-free to write about computer games as nothing too much has been said yet, and almost anything goes. The situation is pretty much the same when it comes to writing about games and gaming in general. The sad fact with alarming cumulative consequences is that they are undertheorized; there are Huizinga, Caillois and Ehrmann of course, and libraries full of board game studies,in addition to game theory and bits and pieces of philosophy—most notably those of Wittgenstein— but they won’t get us very far with computer games. 
So if there already is or soon will be a legitimate field for computer game studies, this field is also very open to intrusions and colonisations from the already organized scholarly tribes. Resisting and beating them is the goal of our first survival game in this paper, as what these emerging studies need is independence, or at least relative independence.", "title": "" }, { "docid": "1510979ae461bfff3deb46dec2a81798", "text": "State-of-the-art methods for relation classification are mostly based on statistical machine learning, and the performance heavily depends on the quality of the extracted features. The extracted features are often derived from the output of pre-existing NLP systems, which lead to the error propagation of existing tools and hinder the performance of the system. In this paper, we exploit a convolutional Deep Neural Network (DNN) to extract lexical and sentence level features. Our method takes all the word tokens as input without complicated pre-processing. First, all the word tokens are transformed to vectors by looking up word embeddings1. Then, lexical level features are extracted according to the given nouns. Meanwhile, sentence level features are learned using a convolutional approach. These two level features are concatenated as the final extracted feature vector. Finally, the features are feed into a softmax classifier to predict the relationship between two marked nouns. Experimental results show that our approach significantly outperforms the state-of-the-art methods.", "title": "" }, { "docid": "1196ab65ddfcedb8775835f2e176576f", "text": "Faster R-CNN achieves state-of-the-art performance on generic object detection. However, a simple application of this method to a large vehicle dataset performs unimpressively. In this paper, we take a closer look at this approach as it applies to vehicle detection. We conduct a wide range of experiments and provide a comprehensive analysis of the underlying structure of this model. We show that through suitable parameter tuning and algorithmic modification, we can significantly improve the performance of Faster R-CNN on vehicle detection and achieve competitive results on the KITTI vehicle dataset. We believe our studies are instructive for other researchers investigating the application of Faster R-CNN to their problems and datasets.", "title": "" }, { "docid": "d9ad51299d4afb8075bd911b6655cf16", "text": "To assess whether the passive leg raising test can help in predicting fluid responsiveness. Nonsystematic review of the literature. Passive leg raising has been used as an endogenous fluid challenge and tested for predicting the hemodynamic response to fluid in patients with acute circulatory failure. This is now easy to perform at the bedside using methods that allow a real time measurement of systolic blood flow. A passive leg raising induced increase in descending aortic blood flow of at least 10% or in echocardiographic subaortic flow of at least 12% has been shown to predict fluid responsiveness. Importantly, this prediction remains very valuable in patients with cardiac arrhythmias or spontaneous breathing activity. Passive leg raising allows reliable prediction of fluid responsiveness even in patients with spontaneous breathing activity or arrhythmias. 
This test may come to be used increasingly at the bedside since it is easy to perform and effective, provided that its effects are assessed by a real-time measurement of cardiac output.", "title": "" }, { "docid": "8f9929a21107e6edf9a2aa5f69a3c012", "text": "Rice is one of the most cultivated cereal in Asian countries and Vietnam in particular. Good seed germination is important for rice seed quality, that impacts the rice production and crop yield. Currently, seed germination evaluation is carried out manually by experienced persons. This is a tedious and time-consuming task. In this paper, we present a system for automatic evaluation of rice seed germination rate based on advanced techniques in computer vision and machine learning. We propose to use U-Net - a convolutional neural network - for segmentation and separation of rice seeds. Further processing such as computing distance transform and thresholding will be applied on the segmented regions for rice seed detection. Finally, ResNet is utilized to classify segmented rice seed regions into two classes: germinated and non- germinated seeds. Our contributions in this paper are three-fold. Firstly, we propose a framework which confirms that convolutional neural networks are better than traditional methods for both segmentation and classification tasks (with F1- scores of 93.38\\% and 95.66\\% respectively). Secondly, we deploy successfully the automatic tool in a real application for estimating rice germination rate. Finally, we introduce a new dataset of 1276 images of rice seeds from 7 to 8 seed varieties germinated during 6 to 10 days. This dataset is publicly available for research purpose.", "title": "" }, { "docid": "4d2f03a786f8addf0825b5bc7701c621", "text": "Integrated Design of Agile Missile Guidance and Autopilot Systems By P. K. Menon and E. J. Ohlmeyer Abstract Recent threat assessments by the US Navy have indicated the need for improving the accuracy of defensive missiles. This objective can only be achieved by enhancing the performance of the missile subsystems and by finding methods to exploit the synergism existing between subsystems. As a first step towards the development of integrated design methodologies, this paper develops a technique for integrated design of missile guidance and autopilot systems. Traditional approach for the design of guidance and autopilot systems has been to design these subsystems separately and then to integrate them together before verifying their performance. Such an approach does not exploit any beneficial relationships between these and other subsystems. The application of the feedback linearization technique for integrated guidance-autopilot system design is discussed. Numerical results using a six degree-of-freedom missile simulation are given. Integrated guidance-autopilot systems are expected to result in significant improvements in missile performance, leading to lower weight and enhanced lethality. Both of these factors will lead to a more effective, lower-cost weapon system. Integrated system design methods developed under the present research effort also have extensive applications in high performance aircraft autopilot and guidance systems.", "title": "" }, { "docid": "f5886c4e73fed097e44d6a0e052b143f", "text": "A polynomial filtered Davidson-type algorithm is proposed for symmetric eigenproblems, in which the correction-equation of the Davidson approach is replaced by a polynomial filtering step. 
The new approach has better global convergence and robustness properties when compared with standard Davidson-type methods. The typical filter used in this paper is based on Chebyshev polynomials. The goal of the polynomial filter is to amplify components of the desired eigenvectors in the subspace, which has the effect of reducing both the number of steps required for convergence and the cost in orthogonalizations and restarts. Numerical results are presented to show the effectiveness of the proposed approach.", "title": "" }, { "docid": "115c06a2e366293850d1ef3d60f2a672", "text": "Accurate network traffic identification plays important roles in many areas such as traffic engineering, QoS and intrusion detection etc. The emergence of many new encrypted applications which use dynamic port numbers and masquerading techniques causes the most challenging problem in network traffic identification field. One of the challenging issues for existing traffic identification methods is that they can’t classify online encrypted traffic. To overcome the drawback of the previous identification scheme and to meet the requirements of the encrypted network activities, our work mainly focuses on how to build an online Internet traffic identification based on flow information. We propose real-time encrypted traffic identification based on flow statistical characteristics using machine learning in this paper. We evaluate the effectiveness of our proposed method through the experiments on different real traffic traces. By experiment results and analysis, this method can classify online encrypted network traffic with high accuracy and robustness.", "title": "" }, { "docid": "c6a7c67fa77d2a5341b8e01c04677058", "text": "Human brain imaging studies have shown that greater amygdala activation to emotional relative to neutral events leads to enhanced episodic memory. Other studies have shown that fearful faces also elicit greater amygdala activation relative to neutral faces. To the extent that amygdala recruitment is sufficient to enhance recollection, these separate lines of evidence predict that recognition memory should be greater for fearful relative to neutral faces. Experiment 1 demonstrated enhanced memory for emotionally negative relative to neutral scenes; however, fearful faces were not subject to enhanced recognition across a variety of delays (15 min to 2 wk). Experiment 2 demonstrated that enhanced delayed recognition for emotional scenes was associated with increased sympathetic autonomic arousal, indexed by the galvanic skin response, relative to fearful faces. These results suggest that while amygdala activation may be necessary, it alone is insufficient to enhance episodic memory formation. It is proposed that a sufficient level of systemic arousal is required to alter memory consolidation resulting in enhanced recollection of emotional events.", "title": "" }, { "docid": "fe4a59449e61d47ccf04f4ef70a6ba72", "text": "Bipedal robots are currently either slow, energetically inefficient and/or require a lot of control to maintain their stability. This paper introduces the FastRunner, a bipedal robot based on a new leg architecture. Simulation results of a Planar FastRunner demonstrate that legged robots can run fast, be energy efficient and inherently stable. 
The simulated FastRunner has a cost of transport of 1.4 and requires only a local feedback of the hip position to reach 35.4 kph from stop in simulation.", "title": "" }, { "docid": "c5cc4da2906670c30fc0bac3040217bd", "text": "Many popular problems in robotics and computer vision including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA) can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that while being general g2o offers a performance comparable to implementations of state-of-the-art approaches for the specific problems.", "title": "" }, { "docid": "43d88fd321ad7a6c1bbb2a0054a77959", "text": "We formulate the problem of 3D human pose estimation and tracking as one of inference in a graphical model. Unlike traditional kinematic tree representations, our model of the body is a collection of loosely-connected body-parts. In particular, we model the body using an undirected graphical model in which nodes correspond to parts and edges to kinematic, penetration, and temporal constraints imposed by the joints and the world. These constraints are encoded using pair-wise statistical distributions, that are learned from motion-capture training data. Human pose and motion estimation is formulated as inference in this graphical model and is solved using Particle Message Passing (PaMPas). PaMPas is a form of non-parametric belief propagation that uses a variation of particle filtering that can be applied over a general graphical model with loops. The loose-limbed model and decentralized graph structure allow us to incorporate information from “bottom-up” visual cues, such as limb and head detectors, into the inference process. These detectors enable automatic initialization and aid recovery from transient tracking failures. We illustrate the method by automatically tracking people in multi-view imagery using a set of calibrated cameras and present quantitative evaluation using the HumanEva dataset.", "title": "" }, { "docid": "a6f1d81e6b4a20d892c9292fb86d2c1d", "text": "Research in biomaterials and biomechanics has fueled a large part of the significant revolution associated with osseointegrated implants. Additional key areas that may become even more important--such as guided tissue regeneration, growth factors, and tissue engineering--could not be included in this review because of space limitations. All of this work will no doubt continue unabated; indeed, it is probably even accelerating as more clinical applications are found for implant technology and related therapies. An excellent overall summary of oral biology and dental implants recently appeared in a dedicated issue of Advances in Dental Research. Many advances have been made in the understanding of events at the interface between bone and implants and in developing methods for controlling these events. However, several important questions still remain. 
What is the relationship between tissue structure, matrix composition, and biomechanical properties of the interface? Do surface modifications alter the interfacial tissue structure and composition and the rate at which it forms? If surface modifications change the initial interface structure and composition, are these changes retained? Do surface modifications enhance biomechanical properties of the interface? As current understanding of the bone-implant interface progresses, so will development of proactive implants that can help promote desired outcomes. However, in the midst of the excitement born out of this activity, it is necessary to remember that the needs of the patient must remain paramount. It is also worth noting another as-yet unsatisfied need. With all of the new developments, continuing education of clinicians in the expert use of all of these research advances is needed. For example, in the area of biomechanical treatment planning, there are still no well-accepted biomaterials/biomechanics \"building codes\" that can be passed on to clinicians. Also, there are no readily available treatment-planning tools that clinicians can use to explore \"what-if\" scenarios and other design calculations of the sort done in modern engineering. No doubt such approaches could be developed based on materials already in the literature, but unfortunately much of what is done now by clinicians remains empirical. A worthwhile task for the future is to find ways to more effectively deliver products of research into the hands of clinicians.", "title": "" }, { "docid": "dbd92d8f7fc050229379ebfb87e22a5f", "text": "The presence of third-party tracking on websites has become customary. However, our understanding of the thirdparty ecosystem is still very rudimentary. We examine thirdparty trackers from a geographical perspective, observing the third-party tracking ecosystem from 29 countries across the globe. When examining the data by region (North America, South America, Europe, East Asia, Middle East, and Oceania), we observe significant geographical variation between regions and countries within regions. We find trackers that focus on specific regions and countries, and some that are hosted in countries outside their expected target tracking domain. Given the differences in regulatory regimes between jurisdictions, we believe this analysis sheds light on the geographical properties of this ecosystem and on the problems that these may pose to our ability to track and manage the different data silos that now store personal data about us all.", "title": "" }, { "docid": "745cdbb442c73316f691dc20cc696f31", "text": "Computer-generated texts, whether from Natural Language Generation (NLG) or Machine Translation (MT) systems, are often post-edited by humans before being released to users. The frequency and type of post-edits is a measure of how well the system works, and can be used for evaluation. We describe how we have used post-edit data to evaluate SUMTIME-MOUSAM, an NLG system that produces weather forecasts.", "title": "" }, { "docid": "313c8ba6d61a160786760543658185df", "text": "In this review, we collate information about ticks identified in different parts of the Sudan and South Sudan since 1956 in order to identify gaps in tick prevalence and create a map of tick distribution. This will avail basic data for further research on ticks and policies for the control of tick-borne diseases. In this review, we discuss the situation in the Republic of South Sudan as well as Sudan. 
For this purpose we have divided Sudan into four regions, namely northern Sudan (Northern and River Nile states), central Sudan (Khartoum, Gazera, White Nile, Blue Nile and Sennar states), western Sudan (North and South Kordofan and North, South and West Darfour states) and eastern Sudan (Red Sea, Kassala and Gadarif states).", "title": "" }, { "docid": "c9968b5dbe66ad96605c88df9d92a2fb", "text": "We present an analysis of the population dynamics and demographics of Amazon Mechanical Turk workers based on the results of the survey that we conducted over a period of 28 months, with more than 85K responses from 40K unique participants. The demographics survey is ongoing (as of November 2017), and the results are available at http://demographics.mturk-tracker.com: we provide an API for researchers to download the survey data. We use techniques from the field of ecology, in particular, the capture-recapture technique, to understand the size and dynamics of the underlying population. We also demonstrate how to model and account for the inherent selection biases in such surveys. Our results indicate that there are more than 100K workers available in Amazon»s crowdsourcing platform, the participation of the workers in the platform follows a heavy-tailed distribution, and at any given time there are more than 2K active workers. We also show that the half-life of a worker on the platform is around 12-18 months and that the rate of arrival of new workers balances the rate of departures, keeping the overall worker population relatively stable. Finally, we demonstrate how we can estimate the biases of different demographics to participate in the survey tasks, and show how to correct such biases. Our methodology is generic and can be applied to any platform where we are interested in understanding the dynamics and demographics of the underlying user population.", "title": "" } ]
scidocsrr
84997cf3e49c09bbc79045a6ce4e3810
Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering
[ { "docid": "34a7d306a788ab925db8d0afe4c21c5a", "text": "The Sandbox is a flexible and expressive thinking environment that supports both ad-hoc and more formal analytical tasks. It is the evidence marshalling and sensemaking component for the analytical software environment called nSpace. This paper presents innovative Sandbox human information interaction capabilities and the rationale underlying them including direct observations of analysis work as well as structured interviews. Key capabilities for the Sandbox include “put-this-there” cognition, automatic process model templates, gestures for the fluid expression of thought, assertions with evidence and scalability mechanisms to support larger analysis tasks. The Sandbox integrates advanced computational linguistic functions using a Web Services interface and protocol. An independent third party evaluation experiment with the Sandbox has been completed. The experiment showed that analyst subjects using the Sandbox did higher quality analysis in less time than with standard tools. Usability test results indicated the analysts became proficient in using the Sandbox with three hours of training.", "title": "" }, { "docid": "cff44da2e1038c8e5707cdde37bc5461", "text": "Visual analytics emphasizes sensemaking of large, complex datasets through interactively exploring visualizations generated by statistical models. For example, dimensionality reduction methods use various similarity metrics to visualize textual document collections in a spatial metaphor, where similarities between documents are approximately represented through their relative spatial distances to each other in a 2D layout. This metaphor is designed to mimic analysts' mental models of the document collection and support their analytic processes, such as clustering similar documents together. However, in current methods, users must interact with such visualizations using controls external to the visual metaphor, such as sliders, menus, or text fields, to directly control underlying model parameters that they do not understand and that do not relate to their analytic process occurring within the visual metaphor. In this paper, we present the opportunity for a new design space for visual analytic interaction, called semantic interaction, which seeks to enable analysts to spatially interact with such models directly within the visual metaphor using interactions that derive from their analytic process, such as searching, highlighting, annotating, and repositioning documents. Further, we demonstrate how semantic interactions can be implemented using machine learning techniques in a visual analytic tool, called ForceSPIRE, for interactive analysis of textual data within a spatial visualization. Analysts can express their expert domain knowledge about the documents by simply moving them, which guides the underlying model to improve the overall layout, taking the user's feedback into account.", "title": "" } ]
[ { "docid": "85a93038a98a4744c3b574f664a1199c", "text": "This paper describes our construction of named-entity recognition (NER) systems in two Western Iranian languages, Sorani Kurdish and Tajik, as a part of a pilot study of Linguistic Rapid Response to potential emergency humanitarian relief situations. In the absence of large annotated corpora, parallel corpora, treebanks, bilingual lexica, etc., we found the following to be effective: exploiting distributional regularities in monolingual data, projecting information across closely related languages, and utilizing human linguist judgments. We show promising results on both a four-month exercise in Sorani and a two-day exercise in Tajik, achieved with minimal annotation costs.", "title": "" }, { "docid": "bda0ae59319660987e9d2686d98e4b9a", "text": "Due to the shift from software-as-a-product (SaaP) to software-as-a-service (SaaS), software components that were developed to run in a single address space must increasingly be accessed remotely across the network. Distribution middleware is frequently used to facilitate this transition. Yet a range of middleware platforms exist, and there are few existing guidelines to help the programmer choose an appropriate middleware platform to achieve desired goals for performance, expressiveness, and reliability. To address this limitation, in this paper we describe a case study of transitioning an Open Service Gateway Initiative (OSGi) service from local to remote access. Our case study compares five remote versions of this service, constructed using different distribution middleware platforms. These platforms are implemented by widely-used commercial technologies or have been proposed as improvements on the state of the art. In particular, we implemented a service-oriented version of our own Remote Batch Invocation abstraction. We compare and contrast these implementations in terms of their respective performance, expressiveness, and reliability. Our results can help remote service programmers make informed decisions when choosing middleware platforms for their applications.", "title": "" }, { "docid": "ce1b4c5e15fd1d0777c26ca93a9cadbd", "text": "In early studies on energy metabolism of tumor cells, it was proposed that the enhanced glycolysis was induced by a decreased oxidative phosphorylation. Since then it has been indiscriminately applied to all types of tumor cells that the ATP supply is mainly or only provided by glycolysis, without an appropriate experimental evaluation. In this review, the different genetic and biochemical mechanisms by which tumor cells achieve an enhanced glycolytic flux are analyzed. Furthermore, the proposed mechanisms that arguably lead to a decreased oxidative phosphorylation in tumor cells are discussed. As the O(2) concentration in hypoxic regions of tumors seems not to be limiting for the functioning of oxidative phosphorylation, this pathway is re-evaluated regarding oxidizable substrate utilization and its contribution to ATP supply versus glycolysis. In the tumor cell lines where the oxidative metabolism prevails over the glycolytic metabolism for ATP supply, the flux control distribution of both pathways is described. The effect of glycolytic and mitochondrial drugs on tumor energy metabolism and cellular proliferation is described and discussed. Similarly, the energy metabolic changes associated with inherent and acquired resistance to radiotherapy and chemotherapy of tumor cells, and those determined by positron emission tomography, are revised. 
It is proposed that energy metabolism may be an alternative therapeutic target for both hypoxic (glycolytic) and oxidative tumors.", "title": "" }, { "docid": "ae87441b3ce5fd388101dc85ad25b558", "text": "University of Tampere School of Management Author: MIIA HANNOLA Title: Critical factors in Customer Relationship Management system implementation Master’s thesis: 84 pages, 2 appendices Date: November 2016", "title": "" }, { "docid": "72cff051b5d2bcd8eaf41b6e9ae9eca9", "text": "We propose a new method for detecting patterns of anomalies in categorical datasets. We assume that anomalies are generated by some underlying process which affects only a particular subset of the data. Our method consists of two steps: we first use a \"local anomaly detector\" to identify individual records with anomalous attribute values, and then detect patterns where the number of anomalous records is higher than expected. Given the set of anomalies flagged by the local anomaly detector, we search over all subsets of the data defined by any set of fixed values of a subset of the attributes, in order to detect self-similar patterns of anomalies. We wish to detect any such subset of the test data which displays a significant increase in anomalous activity as compared to the normal behavior of the system (as indicated by the training data). We perform significance testing to determine if the number of anomalies in any subset of the test data is significantly higher than expected, and propose an efficient algorithm to perform this test over all such subsets of the data. We show that this algorithm is able to accurately detect anomalous patterns in real-world hospital, container shipping and network intrusion data.", "title": "" }, { "docid": "57ab94ce902f4a8b0082cc4f42cd3b3f", "text": "In this work, we present a novel counter-fitting method which injects antonymy and synonymy constraints into vector space representations in order to improve the vectors’ capability for judging semantic similarity. Applying this method to publicly available pre-trained word vectors leads to a new state of the art performance on the SimLex-999 dataset. We also show how the method can be used to tailor the word vector space for the downstream task of dialogue state tracking, resulting in robust improvements across different dialogue domains.", "title": "" }, { "docid": "9c4a4a3253e7a279c1c2fd5582838942", "text": "This Preview Edition of Designing Data-Intensive Applications, Chapters 1 and 2, is a work in progress. The final book is currently scheduled for release in July 2015 and will be available at oreilly.com and other retailers once it is published. O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://safaribooksonline.com). For more information, contact our corporate/ institutional sales department: 800-998-9938 or [email protected]. Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and O'Reilly Media, Inc. was aware of a trademark claim, the designations have been printed in caps or initial caps. While every precaution has been taken in the preparation of this book, the publisher and authors assume no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein. 
Thinking About Data Systems 4 Reliability 6 Hardware faults 7 Software errors 8 Human errors 9 How important is reliability? 9 Scalability 10 Describing load 10 Describing performance 13 Approaches for coping with load 16 Maintainability 17 Operability: making life easy for operations 18 Simplicity: managing complexity 19 Plasticity: making change easy 20 Summary 21 Relational Model vs. Document Model 26 The birth of NoSQL 27 The object-relational mismatch 27 Many-to-one and many-to-many relationships 31 Are document databases repeating history? 34 Relational vs. document databases today 36 Query Languages for Data 40 Declarative queries on the web 41 MapReduce querying 43 iii", "title": "" }, { "docid": "7d23d8d233a3fc7ff75edf361acbe642", "text": "The diagnosis and treatment of chronic patellar instability caused by trochlear dysplasia can be challenging. A dysplastic trochlea leads to biomechanical and kinematic changes that often require surgical correction when symptomatic. In the past, trochlear dysplasia was classified using the 4-part Dejour classification system. More recently, new classification systems have been proposed. Future studies are needed to investigate long-term outcomes after trochleoplasty.", "title": "" }, { "docid": "7cf28eb429d724d9213aed8aa1f192ec", "text": "Idiom token classification is the task of deciding for a set of potentially idiomatic phrases whether each occurrence of a phrase is a literal or idiomatic usage of the phrase. In this work we explore the use of Skip-Thought Vectors to create distributed representations that encode features that are predictive with respect to idiom token classification. We show that classifiers using these representations have competitive performance compared with the state of the art in idiom token classification. Importantly, however, our models use only the sentence containing the target phrase as input and are thus less dependent on a potentially inaccurate or incomplete model of discourse context. We further demonstrate the feasibility of using these representations to train a competitive general idiom token classifier.", "title": "" }, { "docid": "8a380625849ba678e8fed0e837510423", "text": "Congestive heart failure (CHF) is a common clinical disorder that results in pulmonary vascular congestion and reduced cardiac output. CHF should be considered in the differential diagnosis of any adult patient who presents with dyspnea and/or respiratory failure. The diagnosis of heart failure is often determined by a careful history and physical examination and characteristic chest-radiograph findings. The measurement of serum brain natriuretic peptide and echocardiography have substantially improved the accuracy of diagnosis. Therapy for CHF is directed at restoring normal cardiopulmonary physiology and reducing the hyperadrenergic state. The cornerstone of treatment is a combination of an angiotensin-converting-enzyme inhibitor and slow titration of a beta blocker. Patients with CHF are prone to pulmonary complications, including obstructive sleep apnea, pulmonary edema, and pleural effusions. Continuous positive airway pressure and noninvasive positive-pressure ventilation benefit patients in CHF exacerbations.", "title": "" }, { "docid": "10d53a05fcfb93231ab100be7eeb6482", "text": "We present a computer audition system that can both annotate novel audio tracks with semantically meaningful words and retrieve relevant tracks from a database of unlabeled audio content given a text-based query. 
We consider the related tasks of content-based audio annotation and retrieval as one supervised multiclass, multilabel problem in which we model the joint probability of acoustic features and words. We collect a data set of 1700 human-generated annotations that describe 500 Western popular music tracks. For each word in a vocabulary, we use this data to train a Gaussian mixture model (GMM) over an audio feature space. We estimate the parameters of the model using the weighted mixture hierarchies expectation maximization algorithm. This algorithm is more scalable to large data sets and produces better density estimates than standard parameter estimation techniques. The quality of the music annotations produced by our system is comparable with the performance of humans on the same task. Our “query-by-text” system can retrieve appropriate songs for a large number of musically relevant words. We also show that our audition system is general by learning a model that can annotate and retrieve sound effects.", "title": "" }, { "docid": "46106f37992ec3089dfe6bfd31e699c8", "text": "For doubly fed induction generator (DFIG)-based wind turbine, the main constraint to ride-through serious grid faults is the limited converter rating. In order to realize controllable low voltage ride through (LVRT) under the typical converter rating, transient control reference usually need to be modified to adapt to the constraint of converter's maximum output voltage. Generally, the generation of such reference relies on observation of stator flux and even sequence separation. This is susceptible to observation errors during the fault transient; moreover, it increases the complexity of control system. For this issue, this paper proposes a scaled current tracking control for rotor-side converter (RSC) to enhance its LVRT capacity without flux observation. In this method, rotor current is controlled to track stator current in a certain scale. Under proper tracking coefficient, both the required rotor current and rotor voltage can be constrained within the permissible ranges of RSC, thus it can maintain DFIG under control to suppress overcurrent and overvoltage. Moreover, during fault transient, electromagnetic torque oscillations can be greatly suppressed. Based on it, certain additional positive-sequence item is injected into rotor current reference to supply dynamic reactive support. Simulation and experimental results demonstrate the feasibility of the proposed method.", "title": "" }, { "docid": "05894f874111fd55bd856d4768c61abe", "text": "Collision detection is of paramount importance for many applications in computer graphics and visualization. Typically, the input to a collision detection algorithm is a large number of geometric objects comprising an environment, together with a set of objects moving within the environment. In addition to determining accurately the contacts that occur between pairs of objects, one needs also to do so at real-time rates. Applications such as haptic force-feedback can require over 1,000 collision queries per second. In this paper, we develop and analyze a method, based on bounding-volume hierarchies, for efficient collision detection for objects moving within highly complex environments. Our choice of bounding volume is to use a “discrete orientation polytope” (“k-dop”), a convex polytope whose facets are determined by halfspaces whose outward normals come from a small fixed set of k orientations. 
We compare a variety of methods for constructing hierarchies (“BV-trees”) of bounding k-dops. Further, we propose algorithms for maintaining an effective BV-tree of k-dops for moving objects, as they rotate, and for performing fast collision detection using BV-trees of the moving objects and of the environment. Our algorithms have been implemented and tested. We provide experimental evidence showing that our approach yields substantially faster collision detection than previous methods.", "title": "" }, { "docid": "00639757a1a60fe8e56b868bd6e2ff62", "text": "Giant congenital melanocytic nevus is usually defined as a melanocytic lesion present at birth that will reach a diameter ≥ 20 cm in adulthood. Its incidence is estimated in <1:20,000 newborns. Despite its rarity, this lesion is important because it may associate with severe complications such as malignant melanoma, affect the central nervous system (neurocutaneous melanosis), and have major psychosocial impact on the patient and his family due to its unsightly appearance. Giant congenital melanocytic nevus generally presents as a brown lesion, with flat or mammilated surface, well-demarcated borders and hypertrichosis. Congenital melanocytic nevus is primarily a clinical diagnosis. However, congenital nevi are histologically distinguished from acquired nevi mainly by their larger size, the spread of the nevus cells to the deep layers of the skin and by their more varied architecture and morphology. Although giant congenital melanocytic nevus is recognized as a risk factor for the development of melanoma, the precise magnitude of this risk is still controversial. The estimated lifetime risk of developing melanoma varies from 5 to 10%. On account of these uncertainties and the size of the lesions, the management of giant congenital melanocytic nevus needs individualization. Treatment may include surgical and non-surgical procedures, psychological intervention and/or clinical follow-up, with special attention to changes in color, texture or on the surface of the lesion. The only absolute indication for surgery in giant congenital melanocytic nevus is the development of a malignant neoplasm on the lesion.", "title": "" }, { "docid": "5d8f33b7f28e6a8d25d7a02c1f081af1", "text": "Background The life sciences, biomedicine and health care are increasingly turning into a data intensive science [2-4]. Particularly in bioinformatics and computational biology we face not only increased volume and a diversity of highly complex, multi-dimensional and often weakly-structured and noisy data [5-8], but also the growing need for integrative analysis and modeling [9-14]. Due to the increasing trend towards personalized and precision medicine (P4 medicine: Predictive, Preventive, Participatory, Personalized [15]), biomedical data today results from various sources in different structural dimensions, ranging from the microscopic world, and in particular from the omics world (e.g., from genomics, proteomics, metabolomics, lipidomics, transcriptomics, epigenetics, microbiomics, fluxomics, phenomics, etc.) to the macroscopic world (e.g., disease spreading data of populations in public health informatics), see Figure 1[16]. Just for rapid orientation in terms of size: the Glucose molecule has a size of 900 pm = 900 × 10^-12 m and the Carbon atom approx. 300 pm. A hepatitis virus is relatively large with 45 nm = 45 × 10^-9 m and the X-Chromosome much bigger with 7 μm = 7 × 10^-6 m. 
We produce most of the “Big Data” in the omics world, we estimate many Terabytes (1 TB = 1 × 10^12 Byte = 1000 GByte) of genomics data in each individual, consequently, the fusion of these with Petabytes of proteomics data for personalized medicine results in Exabytes of data (1 EB = 1 × 10^18 Byte). Last but not least, this “natural” data is then fused together with “produced” data, e.g., the unstructured information (text) in the patient records, wellness data, the data from physiological sensors, laboratory data etc. these data are also rapidly increasing in size and complexity. Besides the problem of heterogeneous and distributed data, we are confronted with noisy, missing and inconsistent data. This leaves a large gap between the available “dirty” data [17] and the machinery to effectively process the data for the application purposes; moreover, the procedures of data integration and information extraction may themselves introduce errors and artifacts in the data [18]. Although, one may argue that “Big Data” is a buzz word, systematic and comprehensive exploration of all these data is often seen as the fourth paradigm in the investigation of nature after empiricism, theory and computation [19], and provides a mechanism for data driven hypotheses generation, optimized experiment planning, precision medicine and evidence-based medicine. The challenge is not only to extract meaningful information from this data, but to gain knowledge, to discover previously unknown insight, look for patterns, and to make sense of the data [20], [21]. Many different approaches, including statistical and graph theoretical methods, data mining, and machine learning methods, have been applied in the past however with partly unsatisfactory success [22,23] especially in terms of performance [24]. The grand challenge is to make data useful to and useable by the end user [25]. Maybe, the key challenge is interaction, due to the fact that it is the human end user who possesses the problem solving intelligence [26], hence the ability to ask intelligent questions about the data. The problem in the life sciences is that (biomedical) data models are characterized by significant complexity [27], [28], making manual analysis by the end users difficult and often impossible [29]. At the same time, human", "title": "" }, { "docid": "b23230f0386f185b7d5eb191034d58ec", "text": "Risk management in global information technology (IT) projects is becoming a critical area of concern for practitioners. Global IT projects usually span multiple locations involving various culturally diverse groups that use multiple standards and technologies. These multiplicities cause dynamic risks through interactions among internal (i.e., people, process, and technology) and external elements (i.e., business and natural environments) of global IT projects. This study proposes an agile risk-management framework for global IT project settings. 
By analyzing the dynamic interactions among multiplicities (e.g., multi-locations, multi-cultures, multi-groups, and multi-interests) embedded in the project elements, we identify the dynamic risks threatening the success of a global IT project. Adopting the principles of service-oriented architecture (SOA), we further propose a set of agile management strategies for mitigating the dynamic risks. The mitigation strategies are conceptually validated. The proposed framework will help practitioners understand the potential risks in their global IT projects and resolve their complex situations when certain types of dynamic risks arise.", "title": "" }, { "docid": "6e9ed92dc37e2d7e7ed956ed7b880ff2", "text": "Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available and that the constraint gradients are sparse. We discuss an SQP algorithm that uses a smooth augmented Lagrangian merit function and makes explicit provision for infeasibility in the original problem and the QP subproblems. SNOPT is a particular implementation that makes use of a semidefinite QP solver. It is based on a limited-memory quasi-Newton approximation to the Hessian of the Lagrangian and uses a reduced-Hessian algorithm (SQOPT) for solving the QP subproblems. It is designed for problems with many thousands of constraints and variables but a moderate number of degrees of freedom (say, up to 2000). An important application is to trajectory optimization in the aerospace industry. Numerical results are given for most problems in the CUTE and COPS test collections (about 900 examples).", "title": "" }, { "docid": "824bcc0f9f4e71eb749a04f441891200", "text": "We characterize the singular values of the linear transformation associated with a convolution applied to a two-dimensional feature map with multiple channels. Our characterization enables efficient computation of the singular values of convolutional layers used in popular deep neural network architectures. It also leads to an algorithm for projecting a convolutional layer onto the set of layers obeying a bound on the operator norm of the layer. We show that this is an effective regularizer; periodically applying these projections during training improves the test error of a residual network on CIFAR-10 from 6.2% to 5.3%.", "title": "" }, { "docid": "e68da0df82ade1ef0ff2e0b26da4cb4e", "text": "What service-quality attributes must Internet banks offer to induce consumers to switch to online transactions and keep using them?", "title": "" }, { "docid": "f466b3200413fc5b06101a19741ca395", "text": "This paper present a circular patch microstrip array antenna operate in KU-band (10.9GHz – 17.25GHz). The proposed circular patch array antenna will be in light weight, flexible, slim and compact unit compare with current antenna used in KU-band. The paper also presents the detail steps of designing the circular patch microstrip array antenna. An Advance Design System (ADS) software is used to compute the gain, power, radiation pattern, and S11 of the antenna. The proposed Circular patch microstrip array antenna basically is a phased array consisting of ‘n’ elements (circular patch antennas) arranged in a rectangular grid. The size of each element is determined by the operating frequency. 
The incident wave from satellite arrives at the plane of the antenna with equal phase across the surface of the array. Each ‘n’ element receives a small amount of power in phase with the others. A feed network connects each element to the microstrip lines with an equal length, thus the signals reaching the circular patches are all combined in phase and the voltages add up. The significant difference of the circular patch array antenna lies not in the phase across the surface but in the magnitude distribution. Keywords—Circular patch microstrip array antenna, gain, radiation pattern, S-Parameter.", "title": "" } ]
scidocsrr
ce039a9a63bbaf7898379e83b597090f
On brewing fresh espresso: LinkedIn's distributed data serving platform
[ { "docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0", "text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.", "title": "" } ]
[ { "docid": "7974d8e70775f1b7ef4d8c9aefae870e", "text": "Low-rank decomposition plays a central role in accelerating convolutional neural network (CNN), and the rank of decomposed kernel-tensor is a key parameter that determines the complexity and accuracy of a neural network. In this paper, we define rank selection as a combinatorial optimization problem and propose a methodology to minimize network complexity while maintaining the desired accuracy. Combinatorial optimization is not feasible due to search space limitations. To restrict the search space and obtain the optimal rank, we define the space constraint parameters with a boundary condition. We also propose a linearly-approximated accuracy function to predict the fine-tuned accuracy of the optimized CNN model during the cost reduction. Experimental results on AlexNet and VGG-16 show that the proposed rank selection algorithm satisfies the accuracy constraint. Our method combined with truncated-SVD outperforms state-of-the-art methods in terms of inference and training time at almost the same accuracy.", "title": "" }, { "docid": "80c21770ada160225e17cb9673fff3b3", "text": "This paper describes a model to address the task of named-entity recognition on Indonesian microblog messages due to its usefulness for higher-level tasks or text mining applications on Indonesian microblogs. We view our task as a sequence labeling problem using machine learning approach. We also propose various word-level and orthographic features, including the ones that are specific to the Indonesian language. Finally, in our experiment, we compared our model with a baseline model previously proposed for Indonesian formal documents, instead of microblog messages. Our contribution is two-fold: (1) we developed NER tool for Indonesian microblog messages, which was never addressed before, (2) we developed NER corpus containing around 600 Indonesian microblog messages available for future development.", "title": "" }, { "docid": "8b515e03e551d120db9ce670d930adeb", "text": "In this letter, a broadband planar substrate truncated tapered microstrip line-to-dielectric image line transition on a single substrate is proposed. The design uses substrate truncated microstrip line which helps to minimize the losses due to the surface wave generation on thick microstrip line. Generalized empirical equations are proposed for the transition design and validated for different dielectric constants in millimeter-wave frequency band. Full-wave simulations are carried out using high frequency structural simulator. A back-to-back transition prototype of Ku-band is fabricated and measured. The measured return loss for 80-mm-long structure is better than 10 dB and the insertion loss is better than 2.5 dB in entire Ku-band (40% impedance bandwidth).", "title": "" }, { "docid": "a93361b09b4aaf1385569a9efce7087e", "text": "Cortical surface mapping has been widely used to compensate for individual variability of cortical shape and topology in anatomical and functional studies. While many surface mapping methods were proposed based on landmarks, curves, spherical or native cortical coordinates, few studies have extensively and quantitatively evaluated surface mapping methods across different methodologies. 
In this study we compared five cortical surface mapping algorithms, including large deformation diffeomorphic metric mapping (LDDMM) for curves (LDDMM-curve), for surfaces (LDDMM-surface), multi-manifold LDDMM (MM-LDDMM), FreeSurfer, and CARET, using 40 MRI scans and 10 simulated datasets. We computed curve variation errors and surface alignment consistency for assessing the mapping accuracy of local cortical features (e.g., gyral/sulcal curves and sulcal regions) and the curvature correlation for measuring the mapping accuracy in terms of overall cortical shape. In addition, the simulated datasets facilitated the investigation of mapping error distribution over the cortical surface when the MM-LDDMM, FreeSurfer, and CARET mapping algorithms were applied. Our results revealed that the LDDMM-curve, MM-LDDMM, and CARET approaches best aligned the local curve features with their own curves. The MM-LDDMM approach was also found to be the best in aligning the local regions and cortical folding patterns (e.g., curvature) as compared to the other mapping approaches. The simulation experiment showed that the MM-LDDMM mapping yielded less local and global deformation errors than the CARET and FreeSurfer mappings.", "title": "" }, { "docid": "33b281b2f3509a6fdc3fd5f17f219820", "text": "Personal robots will contribute mobile manipulation capabilities to our future smart homes. In this paper, we propose a low-cost object localization system that uses static devices with Bluetooth capabilities, which are distributed in an environment, to detect and localize active Bluetooth beacons and mobile devices. This system can be used by a robot to coarsely localize objects in retrieval tasks. We attach small Bluetooth low energy tags to objects and require at least four static Bluetooth receivers. While commodity Bluetooth devices could be used, we have built low-cost receivers from Raspberry Pi computers. The location of a tag is estimated by lateration of its received signal strengths. In experiments, we evaluate accuracy and timing of our approach, and report on the successful demonstration at the RoboCup German Open 2014 competition in Magdeburg.", "title": "" }, { "docid": "7ce147a433a376dd1cc0f7f09576e1bd", "text": "Introduction Dissolution testing is routinely carried out in the pharmaceutical industry to determine the rate of dissolution of solid dosage forms. In addition to being a regulatory requirement, in-vitro dissolution testing is used to assist with formulation design, process development, and the demonstration of batch-to-batch reproducibility in production. The most common of such dissolution test apparatuses is the USP Dissolution Test Apparatus II, consisting of an unbaffled vessel stirred by a paddle, whose dimensions, characteristics, and operating conditions are detailed by the USP (Cohen et al., 1990; The United States Pharmacopeia & The National Formulary, 2004).", "title": "" }, { "docid": "eb962e14f34ea53dec660dfe304756b0", "text": "It is difficult to train a personalized task-oriented dialogue system because the data collected from each individual is often insufficient. Personalized dialogue systems trained on a small dataset can overfit and make it difficult to adapt to different user needs. One way to solve this problem is to consider a collection of multiple users’ data as a source domain and an individual user’s data as a target domain, and to perform a transfer learning from the source to the target domain. 
By following this idea, we propose the “PETAL” (PErsonalized Task-oriented diALogue), a transfer learning framework based on POMDP to learn a personalized dialogue system. The system first learns common dialogue knowledge from the source domain and then adapts this knowledge to the target user. This framework can avoid the negative transfer problem by considering differences between source and target users. The policy in the personalized POMDP can learn to choose different actions appropriately for different users. Experimental results on a real-world coffee-shopping data and simulation data show that our personalized dialogue system can choose different optimal actions for different users, and thus effectively improve the dialogue quality under the personalized setting.", "title": "" }, { "docid": "6cd301f1b6ffe64f95b7d63eb0356a87", "text": "The purpose of this study is to analyze factors affecting on online shopping behavior of consumers that might be one of the most important issues of e-commerce and marketing field. However, there is very limited knowledge about online consumer behavior because it is a complicated socio-technical phenomenon and involves too many factors. One of the objectives of this study is covering the shortcomings of previous studies that didn't examine main factors that influence on online shopping behavior. This goal has been followed by using a model examining the impact of perceived risks, infrastructural variables and return policy on attitude toward online shopping behavior and subjective norms, perceived behavioral control, domain specific innovativeness and attitude on online shopping behavior as the hypotheses of study. To investigate these hypotheses 200 questionnaires dispersed among online stores of Iran. Respondents to the questionnaire were consumers of online stores in Iran which randomly selected. Finally regression analysis was used on data in order to test hypothesizes of study. This study can be considered as an applied research from purpose perspective and descriptive-survey with regard to the nature and method (type of correlation). The study identified that financial risks and non-delivery risk negatively affected attitude toward online shopping. Results also indicated that domain specific innovativeness and subjective norms positively affect online shopping behavior. Furthermore, attitude toward online shopping positively affected online shopping behavior of consumers.", "title": "" }, { "docid": "7a8ded6daecbee4492f19ef85c92b0fd", "text": "Sleep problems have become epidemic and traditional research has discovered many causes of poor sleep. The purpose of this study was to complement existing research by using a salutogenic or health origins framework to investigate the correlates of good sleep. The analysis for this study used the National College Health Assessment data that included 54,111 participants at 71 institutions. Participants were randomly selected or were in randomly selected classrooms. Results of these analyses indicated that males and females who reported \"good sleep\" were more likely to have engaged regularly in physical activity, felt less exhausted, were more likely to have a healthy Body Mass Index (BMI), and also performed better academically. In addition, good male sleepers experienced less anxiety and had less back pain. Good female sleepers also had fewer abusive relationships and fewer broken bones, were more likely to have been nonsmokers and were not binge drinkers. 
Despite the limitations of this exploratory study, these results are compelling, however they suggest the need for future research to clarify the identified relationships.", "title": "" }, { "docid": "a6f534f6d6a27b076cee44a8a188bb72", "text": "Managing models requires extracting information from them and modifying them, and this is performed through queries. Queries can be executed at the model or at the persistence-level. Both are complementary but while model-level queries are closer to modelling engineers, persistence-level queries are specific to the persistence technology and leverage its capabilities. This paper presents MQT, an approach that translates EOL (model-level queries) to SQL (persistence-level queries) at runtime. Runtime translation provides several benefits: (i) queries are executed only when the information is required; (ii) context and metamodel information is used to get more performant translated queries; and (iii) supports translating query programs using variables and dependant queries. Translation process used by MQT is described through two examples and we also evaluate performance of the approach.", "title": "" }, { "docid": "87b5c0021e513898693e575ca5479757", "text": "We present a statistical mechanics model of deep feed forward neural networks (FFN). Our energy-based approach naturally explains several known results and heuristics, providing a solid theoretical framework and new instruments for a systematic development of FFN. We infer that FFN can be understood as performing three basic steps: encoding, representation validation and propagation. We obtain a set of natural activations – such as sigmoid, tanh and ReLu – together with a state-of-the-art one, recently obtained by Ramachandran et al. [1] using an extensive search algorithm. We term this activation ESP (Expected Signal Propagation), explain its probabilistic meaning, and study the eigenvalue spectrum of the associated Hessian on classification tasks. We find that ESP allows for faster training and more consistent performances over a wide range of network architectures.", "title": "" }, { "docid": "058a128a15c7d0e343adb3ada80e18d3", "text": "PURPOSE OF REVIEW\nOdontogenic causes of sinusitis are frequently missed; clinicians often overlook odontogenic disease whenever examining individuals with symptomatic rhinosinusitis. Conventional treatments for chronic rhinosinusitis (CRS) will often fail in odontogenic sinusitis. There have been several recent developments in the understanding of mechanisms, diagnosis, and treatment of odontogenic sinusitis, and clinicians should be aware of these advances to best treat this patient population.\n\n\nRECENT FINDINGS\nThe majority of odontogenic disease is caused by periodontitis and iatrogenesis. Notably, dental pain or dental hypersensitivity is very commonly absent in odontogenic sinusitis, and symptoms are very similar to those seen in CRS overall. Unilaterality of nasal obstruction and foul nasal drainage are most suggestive of odontogenic sinusitis, but computed tomography is the gold standard for diagnosis. Conventional panoramic radiographs are very poorly suited to rule out odontogenic sinusitis, and cannot be relied on to identify disease. 
There does not appear to be an optimal sequence of treatment for odontogenic sinusitis; the dental source should be addressed and ESS is frequently also necessary to alleviate symptoms.\n\n\nSUMMARY\nOdontogenic sinusitis has distinct pathophysiology, diagnostic considerations, microbiology, and treatment strategies whenever compared with chronic rhinosinusitis. Clinicians who can accurately identify odontogenic sources can increase efficacy of medical and surgical treatments and improve patient outcomes.", "title": "" }, { "docid": "e2e640c34a9c30a24b068afa23f916d4", "text": "BACKGROUND\nMicrosurgical resection of arteriovenous malformations (AVMs) located in the language and motor cortex is associated with the risk of neurological deterioration, yet electrocortical stimulation mapping has not been widely used.\n\n\nOBJECTIVE\nTo demonstrate the usefulness of intraoperative mapping with language/motor AVMs.\n\n\nMETHODS\nDuring an 11-year period, mapping was used in 12 of 431 patients (2.8%) undergoing AVM resection (5 patients with language and 7 patients with motor AVMs). Language mapping was performed under awake anesthesia and motor mapping under general anesthesia.\n\n\nRESULTS\nIdentification of a functional cortex enabled its preservation in 11 patients (92%), guided dissection through overlying sulci down to the nidus in 3 patients (25%), and influenced the extent of resection in 4 patients (33%). Eight patients (67%) had complete resections. Four patients (33%) had incomplete resections, with circumferentially dissected and subtotally disconnected AVMs left in situ, attached to areas of eloquence and with preserved venous drainage. All were subsequently treated with radiosurgery. At follow-up, 6 patients recovered completely, 3 patients were neurologically improved, and 3 patients had new neurological deficits.\n\n\nCONCLUSION\nIndications for intraoperative mapping include preoperative functional imaging that identifies the language/motor cortex adjacent to the AVM; larger AVMs with higher Spetzler-Martin grades; and patients presenting with unruptured AVMs without deficits. Mapping identified the functional cortex, promoted careful tissue handling, and preserved function. Mapping may guide dissection to AVMs beneath the cortical surface, and it may impact the decision to resect the AVM completely. More conservative, subtotal circumdissections followed by radiosurgery may be an alternative to observation or radiosurgery alone in patients with larger language/motor cortex AVMs.", "title": "" }, { "docid": "aee5eb38d6cbcb67de709a30dd37c29a", "text": "Correct disassembly of the HIV-1 capsid shell, called uncoating, is increasingly recognised as central for multiple steps during retroviral replication. However, the timing, localisation and mechanism of uncoating are poorly understood and progress in this area is hampered by difficulties in measuring the process. Previous work suggested that uncoating occurs soon after entry of the viral core into the cell, but recent studies report later uncoating, at or in the nucleus. Furthermore, inhibiting reverse transcription delays uncoating, linking these processes. Here, we have used a combined approach of experimental interrogation of viral mutants and mathematical modelling to investigate the timing of uncoating with respect to reverse transcription. 
By developing a minimal, testable, model and employing multiple uncoating assays to overcome the disadvantages of each single assay, we find that uncoating is not concomitant with the initiation of reverse transcription. Instead, uncoating appears to be triggered once reverse transcription reaches a certain stage, namely shortly after first strand transfer. Using multiple approaches, we have identified a point during reverse transcription that induces uncoating of the HIV-1 CA shell. We propose that uncoating initiates after the first strand transfer of reverse transcription.", "title": "" }, { "docid": "554a0628270978757eda989c67ac3416", "text": "An accurate rainfall forecasting is very important for agriculture dependent countries like India. For analyzing the crop productivity, use of water resources and pre-planning of water resources, rainfall prediction is important. Statistical techniques for rainfall forecasting cannot perform well for long-term rainfall forecasting due to the dynamic nature of climate phenomena. Artificial Neural Networks (ANNs) have become very popular, and prediction using ANN is one of the most widely used techniques for rainfall forecasting. This paper provides a detailed survey and comparison of different neural network architectures used by researchers for rainfall forecasting. The paper also discusses the issues while applying different neural networks for yearly/monthly/daily rainfall forecasting. Moreover, the paper also presents different accuracy measures used by researchers for evaluating performance of ANN.", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "3b000325d8324942fc192c3df319c21d", "text": "The proposed automatic bone age estimation system was based on the phalanx geometric characteristics and carpals fuzzy information. The system could do automatic calibration by analyzing the geometric properties of hand images. Physiological and morphological features are extracted from medius image in segmentation stage. Back-propagation, radial basis function, and support vector machine neural networks were applied to classify the phalanx bone age. In addition, the proposed fuzzy bone age (BA) assessment was based on normalized bone area ratio of carpals. The result reveals that the carpal features can effectively reduce classification errors when age is less than 9 years old. Meanwhile, carpal features will become less influential to assess BA when children grow up to 10 years old. On the other hand, phalanx features become the significant parameters to depict the bone maturity from 10 years old to adult stage. 
Owing to these properties, the proposed novel BA assessment system combined the phalanxes and carpals assessment. Furthermore, the system adopted not only neural network classifiers but fuzzy bone age confinement and got a result nearly to be practical clinically.", "title": "" }, { "docid": "61a9bc06d96eb213ed5142bfa47920b9", "text": "This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.", "title": "" }, { "docid": "e4179fd890a55f829e398a6f80f1d26a", "text": "This paper presents a soft-start circuit that adopts a pulse-skipping control to prevent inrush current and output voltage overshoot during the start-up period of dc-dc converters. The purpose of the pulse-skipping control is to significantly restrain the increasing rate of the reference voltage of the error amplifier. Thanks to the pulse-skipping mechanism and the duty cycle minimization, the soft-start-up time can be extended and the restriction of the charging current and the capacitance can be relaxed. The proposed soft-start circuit is fully integrated on chip without external components, leading to a reduction in PCB area and cost. A current-mode buck converter is implemented with TSMC 0.35-μm 2P4M CMOS process. Simulation results show the output voltage of the buck converter increases smoothly and inrush current is less than 300 mA.", "title": "" }, { "docid": "69504625b05c735dd80135ef106a8677", "text": "The amount of videos available on the Web is growing explosively. While some videos are very interesting and receive high rating from viewers, many of them are less interesting or even boring. This paper conducts a pilot study on the understanding of human perception of video interestingness, and demonstrates a simple computational method to identify more interesting videos. To this end we first construct two datasets of Flickr and YouTube videos respectively. Human judgements of interestingness are collected and used as the groundtruth for training computational models. We evaluate several off-the-shelf visual and audio features that are potentially useful for predicting interestingness on both datasets. Results indicate that audio and visual features are equally important and the combination of both modalities shows very promising results.", "title": "" } ]
scidocsrr
ce005239bc1f2180ad8508470e4a168d
Agent-based decision-making process in airport ground handling management
[ { "docid": "b20aa2222759644b4b60b5b450424c9e", "text": "Manufacturing has faced significant changes during the last years, namely the move from a local economy towards a global and competitive economy, with markets demanding for highly customized products of high quality at lower costs, and with short life cycles. In this environment, manufacturing enterprises, to remain competitive, must respond closely to customer demands by improving their flexibility and agility, while maintaining their productivity and quality. Dynamic response to emergence is becoming a key issue in manufacturing field because traditional manufacturing control systems are built upon rigid control architectures, which cannot respond efficiently and effectively to dynamic change. In these circumstances, the current challenge is to develop manufacturing control systems that exhibit intelligence, robustness and adaptation to the environment changes and disturbances. The introduction of multi-agent systems and holonic manufacturing systems paradigms addresses these requirements, bringing the advantages of modularity, decentralization, autonomy, scalability and reusability. This paper surveys the literature in manufacturing control systems using distributed artificial intelligence techniques, namely multi-agent systems and holonic manufacturing systems principles. The paper also discusses the reasons for the weak adoption of these approaches by industry and points out the challenges and research opportunities for the future. & 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "36b609f1c748154f0f6193c6578acec9", "text": "Effective supply chain design calls for robust analytical models and design tools. Previous works in this area are mostly Operation Research oriented without considering manufacturing aspects. Recently, researchers have begun to realize that the decision and integration effort in supply chain design should be driven by the manufactured product, specifically, product characteristics and product life cycle. In addition, decision-making processes should be guided by a comprehensive set of performance metrics. In this paper, we relate product characteristics to supply chain strategy and adopt supply chain operations reference (SCOR) model level I performance metrics as the decision criteria. An integrated analytic hierarchy process (AHP) and preemptive goal programming (PGP) based multi-criteria decision-making methodology is then developed to take into account both qualitative and quantitative factors in supplier selection. While the AHP process matches product characteristics with supplier characteristics (using supplier ratings derived from pairwise comparisons) to qualitatively determine supply chain strategy, PGP mathematically determines the optimal order quantity from the chosen suppliers. Since PGP uses AHP ratings as input, the variations of pairwise comparisons in AHP will influence the final order quantity. Therefore, users of this methodology should put greater emphasis on the AHP progress to ensure the accuracy of supplier ratings. r 2003 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "df7a68ebb9bc03d8a73a54ab3474373f", "text": "We report on the implementation of a color-capable sub-pixel resolving optofluidic microscope based on the pixel super-resolution algorithm and sequential RGB illumination, for low-cost on-chip color imaging of biological samples with sub-cellular resolution.", "title": "" }, { "docid": "720a3d65af4905cbffe74ab21d21dd3f", "text": "Fluorescent carbon nanoparticles or carbon quantum dots (CQDs) are a new class of carbon nanomaterials that have emerged recently and have garnered much interest as potential competitors to conventional semiconductor quantum dots. In addition to their comparable optical properties, CQDs have the desired advantages of low toxicity, environmental friendliness low cost and simple synthetic routes. Moreover, surface passivation and functionalization of CQDs allow for the control of their physicochemical properties. Since their discovery, CQDs have found many applications in the fields of chemical sensing, biosensing, bioimaging, nanomedicine, photocatalysis and electrocatalysis. This article reviews the progress in the research and development of CQDs with an emphasis on their synthesis, functionalization and technical applications along with some discussion on challenges and perspectives in this exciting and promising field.", "title": "" }, { "docid": "6f1e71399e5786eb9c3923a1e967cd8f", "text": "This Working Paper should not be reported as representing the views of the IMF. The views expressed in this Working Paper are those of the author(s) and do not necessarily represent those of the IMF or IMF policy. Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Using a dataset which breaks down FDI flows into primary, secondary and tertiary sector investments and a GMM dynamic approach to address concerns about endogeneity, the paper analyzes various macroeconomic, developmental, and institutional/qualitative determinants of FDI in a sample of emerging market and developed economies. While FDI flows into the primary sector show little dependence on any of these variables, secondary and tertiary sector investments are affected in different ways by countries’ income levels and exchange rate valuation, as well as development indicators such as financial depth and school enrollment, and institutional factors such as judicial independence and labor market flexibility. Finally, we find that the effect of these factors often differs between advanced and emerging economies. JEL Classification Numbers: F21, F23", "title": "" }, { "docid": "39d15901cd5fbd1629d64a165a94c5f5", "text": "This paper shows how to use modular Marx multilevel converter diode (M3CD) modules to apply unipolar or bipolar high-voltage pulses for pulsed power applications. The M3CD cells allow the assembly of a multilevel converter without needing complex algorithms and parameter measurement to balance the capacitor voltages. This paper also explains how to supply all the modular cells in order to ensure galvanic isolation between control circuits and power circuits. The experimental results for a generator with seven levels, and unipolar and bipolar pulses into resistive, inductive, and capacitive loads are presented.", "title": "" }, { "docid": "01e064e0f2267de5a26765f945114a6e", "text": "In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. 
In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples", "title": "" }, { "docid": "4d445832d38c288b1b59a3df7b38eb1b", "text": "UNLABELLED\nThe aim of this prospective study was to assess the predictive value of (18)F-FDG PET/CT imaging for pathologic response to neoadjuvant chemotherapy (NACT) and outcome in inflammatory breast cancer (IBC) patients.\n\n\nMETHODS\nTwenty-three consecutive patients (51 y ± 12.7) with newly diagnosed IBC, assessed by PET/CT at baseline (PET1), after the third course of NACT (PET2), and before surgery (PET3), were included. The patients were divided into 2 groups according to pathologic response as assessed by the Sataloff classification: pathologic complete response for complete responders (stage TA and NA or NB) and non-pathologic complete response for noncomplete responders (not stage A for tumor or not stage NA or NB for lymph nodes). In addition to maximum standardized uptake value (SUVmax) measurements, a global breast metabolic tumor volume (MTV) was delineated using a semiautomatic segmentation method. Changes in SUVmax and MTV between PET1 and PET2 (ΔSUV1-2; ΔMTV1-2) and PET1 and PET3 (ΔSUV1-3; ΔMTV1-3) were measured.\n\n\nRESULTS\nMean SUVmax on PET1, PET2, and PET3 did not statistically differ between the 2 pathologic response groups. On receiver-operating-characteristic analysis, a 72% cutoff for ΔSUV1-3 provided the best performance to predict residual disease, with sensitivity, specificity, and accuracy of 61%, 80%, and 65%, respectively. On univariate analysis, the 72% cutoff for ΔSUV1-3 was the best predictor of distant metastasis-free survival (P = 0.05). On multivariate analysis, the 72% cutoff for ΔSUV1-3 was an independent predictor of distant metastasis-free survival (P = 0.01).\n\n\nCONCLUSION\nOur results emphasize the good predictive value of change in SUVmax between baseline and before surgery to assess pathologic response and survival in IBC patients undergoing NACT.", "title": "" }, { "docid": "53a67740e444b5951bc6ab257236996e", "text": "Although human perception appears to be automatic and unconscious, complex sensory mechanisms exist that form the preattentive component of understanding and lead to awareness. Considerable research has been carried out into these preattentive mechanisms and computational models have been developed for similar problems in the fields of computer vision and speech analysis. The focus here is to explore aural and visual information in video streams for modeling attention and detecting salient events. The separate aural and visual modules may convey explicit, complementary or mutually exclusive information around the detected audiovisual events. Based on recent studies on perceptual and computational attention modeling, we formulate measures of attention using features of saliency for the audiovisual stream. Audio saliency is captured by signal modulations and related multifrequency band features, extracted through nonlinear operators and energy tracking. Visual saliency is measured by means of a spatiotemporal attention model driven by various feature cues (intensity, color, motion). 
Features from both modules mapped to one-dimensional, time-varying saliency curves, from which statistics of salient segments can be extracted and important audio or visual events can be detected through adaptive, threshold-based mechanisms. Audio and video curves are integrated in a single attention curve, where events may be enhanced, suppressed or vanished. Salient events from the audiovisual curve are detected through geometrical features such as local extrema, sharp transitions and level sets. The potential of inter-module fusion and audiovisual event detection is demonstrated in applications such as video key-frame selection, video skimming and video annotation.", "title": "" }, { "docid": "c7160e93c9cce017adc1200dc7d597f2", "text": "The transcription factor, nuclear factor erythroid 2 p45-related factor 2 (Nrf2), acts as a sensor of oxidative or electrophilic stresses and plays a pivotal role in redox homeostasis. Oxidative or electrophilic agents cause a conformational change in the Nrf2 inhibitory protein Keap1 inducing the nuclear translocation of the transcription factor which, through its binding to the antioxidant/electrophilic response element (ARE/EpRE), regulates the expression of antioxidant and detoxifying genes such as heme oxygenase 1 (HO-1). Nrf2 and HO-1 are frequently upregulated in different types of tumours and correlate with tumour progression, aggressiveness, resistance to therapy, and poor prognosis. This review focuses on the Nrf2/HO-1 stress response mechanism as a promising target for anticancer treatment which is able to overcome resistance to therapies.", "title": "" }, { "docid": "f2c203e9364fee062747468dc7995429", "text": "Microinverters are module-level power electronic (MLPE) systems that are expected to have a service life more than 25 years. The general practice for providing assurance in long-term reliability under humid climatic conditions is to subject the microinverters to ‘damp heat test’ at 85°C/85%RH for 1000hrs as recommended in lEC 61215 standard. However, there is limited understanding on the correlation between the said ‘damp heat’ test and field conditions for microinverters. In this paper, a physics-of-failure (PoF)-based approach is used to correlate damp heat test to field conditions. Results of the PoF approach indicates that even 3000hrs at 85°C/85%RH may not be sufficient to guarantee 25-years' service life in certain places in the world. Furthermore, we also demonstrate that use of Miami, FL weathering data as benchmark for defining damp heat test durations will not be sufficient to guarantee 25 years' service life. Finally, when tests were conducted at 85°C/85%RH for more than 3000hrs, it was found that the PV connectors are likely to fail before the actual power electronics could fail.", "title": "" }, { "docid": "bd8f4d5181d0b0bcaacfccd6fb0edd8b", "text": "Mass deployment of RF identification (RFID) is hindered by its cost per tag. The main cost comes from the application-specific integrated circuit (ASIC) chip set in a tag. A chipless tag costs less than a cent, and these have the potential for mass deployment for low-cost, item-level tagging as the replacement technology for optical barcodes. Chipless RFID tags can be directly printed on paper or plastic packets just like barcodes. They are highly useful for automatic identification and authentication, supply-chain automation, and medical applications. 
Among their potential industrial applications are authenticating of polymer bank notes; scanning of credit cards, library cards, and the like; tracking of inventory in retail settings; and identification of pathology and other medical test samples.", "title": "" }, { "docid": "f44b5199f93d4b441c125ac55e4e0497", "text": "A modified method for better superpixel generation based on simple linear iterative clustering (SLIC) is presented and named BSLIC in this paper. By initializing cluster centers in hexagon distribution and performing k-means clustering in a limited region, the generated superpixels are shaped into regular and compact hexagons. The additional cluster centers are initialized as edge pixels to improve boundary adherence, which is further promoted by incorporating the boundary term into the distance calculation of the k-means clustering. Berkeley Segmentation Dataset BSDS500 is used to qualitatively and quantitatively evaluate the proposed BSLIC method. Experimental results show that BSLIC achieves an excellent compromise between boundary adherence and regularity of size and shape. In comparison with SLIC, the boundary adherence of BSLIC is increased by at most 12.43% for boundary recall and 3.51% for under segmentation error.", "title": "" }, { "docid": "54bee01d53b8bcb6ca067493993b4ff3", "text": "Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems. Because unrolling the structure necessary to represent distributions over all hypotheses has exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these sampling-based methods suffer from local minima—the system must transition through lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL) to model delayed reward with a log-linear function approximation of residual future score improvement. Our method provides dramatic empirical success, producing new state-of-the-art results on a complex joint model of ontology alignment, with a 48% reduction in error over state-of-the-art in that domain.", "title": "" }, { "docid": "6f1fc6a07d0beb235f5279e17a46447f", "text": "Nowadays, automatic multidocument text summarization systems can successfully retrieve the summary sentences from the input documents. But, it has many limitations such as inaccurate extraction to essential sentences, low coverage, poor coherence among the sentences, and redundancy. This paper introduces a new concept of timestamp approach with Naïve Bayesian Classification approach for multidocument text summarization. The timestamp provides the summary an ordered look, which achieves the coherent looking summary. It extracts the more relevant information from the multiple documents. Here, scoring strategy is also used to calculate the score for the words to obtain the word frequency. The higher linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents the comparison between the proposed methods with the existing MEAD algorithm. The timestamp procedure is also applied on the MEAD algorithm and the results are examined with the proposed method. 
The results show that the proposed method takes less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method results in better precision, recall, and F-score than the existing clustering with lexical chaining approach.", "title": "" }, { "docid": "fad164e21c7ec013450a8b96d75d9457", "text": "Pinterest is a visual discovery tool for collecting and organizing content on the Web with over 70 million users. Users “pin” images, videos, articles, products, and other objects they find on the Web, and organize them into boards by topic. Other users can repin these and also follow other users or boards. Each user organizes things differently, and this produces a vast amount of human-curated content. For example, someone looking to decorate their home might pin many images of furniture that fits their taste. These curated collections produce a large number of associations between pins, and we investigate how to leverage these associations to surface personalized content to users. Little work has been done on the Pinterest network before due to lack of availability of data. We first performed an analysis on a representative sample of the Pinterest network. After analyzing the network, we created recommendation systems, suggesting pins that users would be likely to repin or like based on their previous interactions on Pinterest. We created recommendation systems using four approaches: a baseline recommendation system using the power law distribution of the images; a content-based filtering algorithm; and two collaborative filtering algorithms, one based on one-mode projection of a bipartite graph, and the second using a label propagation approach.", "title": "" }, { "docid": "05477664471a71eebc26d59aed9b0350", "text": "This article serves as a quick reference for respiratory alkalosis. Guidelines for analysis and causes, signs, and a stepwise approach are presented.", "title": "" }, { "docid": "9078698db240725e1eb9d1f088fb05f4", "text": "Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, to which we call the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate timing of rebroadcasts to alleviate this problem. Simulation results are presented, which show different levels of improvement over the basic flooding approach.", "title": "" }, { "docid": "cb641fc639b86abadec4f85efc226c14", "text": "The modernization of the US electric power infrastructure, especially in light of its aging, overstressed networks; shifts in social, energy and environmental policies, and also new vulnerabilities, is a national concern. Our systems are required to be more adaptive and secure than ever before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. 
The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF with advanced DMS capabilities to achieve the design goals is discussed herein. This reference paper also outlines a research focus for developing the next generation of advanced tools for efficient and flexible power systems operation and control.", "title": "" }, { "docid": "d775cdc31c84d94d95dc132b88a37fae", "text": "Image guided filtering has been widely used in many image processing applications. However, it is a local filtering method and has limited propagation ability. In this paper, we propose a new image filtering method: nonlocal image guided averaging (NLGA). Derived from a nonlocal linear model, the proposed method can utilize the nonlocal similarity of the guidance image, so that it can propagate nonlocal information reliably. Consequently, NLGA can obtain sharper filtering results in the edge regions and smoother results in the smooth regions. It shows superiority over image guided filtering in different applications, such as image dehazing, depth map super-resolution and image denoising.", "title": "" }, { "docid": "4630ade03760cb8ec1da11b16703b3f1", "text": "Dengue infection is a major cause of morbidity and mortality in Malaysia. To date, much research on dengue infection conducted in Malaysia has been published. One hundred and sixty-six articles related to dengue in Malaysia were found from a search through a database dedicated to indexing all original data relevant to medicine published between the years 2000-2013. Ninety articles with clinical relevance and future research implications were selected and reviewed. These papers showed evidence of an exponential increase in the disease epidemic and a varying pattern of prevalent dengue serotypes at different times. The early febrile phase of dengue infection consists of an undifferentiated fever. Clinical suspicion and the ability to identify patients at risk of severe dengue infection are important. Treatment of dengue infection involves judicious use of volume expanders and supportive care. Potential future research areas are discussed to narrow our current knowledge gaps on dengue infection.", "title": "" }, { "docid": "6cb480efca7138e26ce484eb28f0caec", "text": "Given the demand for authentic personal interactions over social media, it is unclear how much firms should actively manage their social media presence. We study this question empirically in a healthcare setting. We show empirically that active social media management drives more user-generated content. However, we find that this is due to an increase in incremental user postings from an organization’s employees rather than from its clients. This result holds when we explore exogenous variation in social media policies, employees and clients that are explained by medical marketing laws, medical malpractice laws and distortions in Medicare incentives. Further examination suggests that content being generated mainly by employees can be avoided if a firm’s postings are entirely client-focused. 
However, empirically the majority of firm postings seem not to be specifically targeted to clients’ interests, instead highlighting more general observations or achievements of the firm itself. We show that untargeted postings like this provoke activity by employees rather than clients. This may not be a bad thing, as employee-generated content may help with employee motivation, recruitment or retention, but it does suggest that social media should not be funded or managed exclusively as a marketing function of the firm.", "title": "" } ]
scidocsrr
53cdbbf8e5d99570f01d5a6de645d932
Microstrip high-pass filter with attenuation poles using cross-coupling
[ { "docid": "7e61b5f63d325505209c3284c8a444a1", "text": "A method to design low-pass filters (LPF) having a defected ground structure (DGS) and broadened transmission-line elements is proposed. The previously presented technique for obtaining a three-stage LPF using DGS by Lim et al. is generalized to propose a method that can be applied in design N-pole LPFs for N/spl les/5. As an example, a five-pole LPF having a DGS is designed and measured. Accurate curve-fitting results and the successive design process to determine the required size of the DGS corresponding to the LPF prototype elements are described. The proposed LPF having a DGS, called a DGS-LPF, includes transmission-line elements with very low impedance instead of open stubs in realizing the required shunt capacitance. Therefore, open stubs, teeor cross-junction elements, and high-impedance line sections are not required for the proposed LPF, while they all have been essential in conventional LPFs. Due to the widely broadened transmission-line elements, the size of the DGS-LPF is compact.", "title": "" } ]
[ { "docid": "9b8b91bbade21813b16dfa40e70c2b91", "text": "to name a few. Because of its importance to the study of emotion, a number of observer-based systems of facial expression measurement have been developed Using FACS and viewing video-recorded facial behavior at frame rate and slow motion, coders can manually code nearly all possible facial expressions, which are decomposed into action units (AUs). Action units, with some qualifications , are the smallest visually discriminable facial movements. By comparison, other systems are less thorough (Malatesta et al., 1989), fail to differentiate between some anatomically distinct movements (Oster, Hegley, & Nagel, 1992), consider movements that are not anatomically distinct as separable (Oster et al., 1992), and often assume a one-to-one mapping between facial expression and emotion (for a review of these systems, see Cohn & Ekman, in press). Unlike systems that use emotion labels to describe expression , FACS explicitly distinguishes between facial actions and inferences about what they mean. FACS itself is descriptive and includes no emotion-specified descriptors. Hypotheses and inferences about the emotional meaning of facial actions are extrinsic to FACS. If one wishes to make emotion based inferences from FACS codes, a variety of related resources exist. These include the FACS Investigators' Guide These resources suggest combination rules for defining emotion-specified expressions from FACS action units, but this inferential step remains extrinsic to FACS. Because of its descriptive power, FACS is regarded by many as the standard measure for facial behavior and is used widely in diverse fields. Beyond emotion science, these include facial neuromuscular disorders", "title": "" }, { "docid": "ab2e5ec6e48c87b3e4814840ad29afe7", "text": "This article describes a number of log-linear parsing models for an automatically extracted lexicalized grammar. The models are full parsing models in the sense that probabilities are defined for complete parses, rather than for independent events derived by decomposing the parse tree. Discriminative training is used to estimate the models, which requires incorrect parses for each sentence in the training data as well as the correct parse. The lexicalized grammar formalism used is Combinatory Categorial Grammar (CCG), and the grammar is automatically extracted from CCGbank, a CCG version of the Penn Treebank. The combination of discriminative training and an automatically extracted grammar leads to a significant memory requirement (up to 25 GB), which is satisfied using a parallel implementation of the BFGS optimization algorithm running on a Beowulf cluster. Dynamic programming over a packed chart, in combination with the parallel implementation, allows us to solve one of the largest-scale estimation problems in the statistical parsing literature in under three hours. A key component of the parsing system, for both training and testing, is a Maximum Entropy supertagger which assigns CCG lexical categories to words in a sentence. The supertagger makes the discriminative training feasible, and also leads to a highly efficient parser. Surprisingly, given CCG's spurious ambiguity, the parsing speeds are significantly higher than those reported for comparable parsers in the literature. We also extend the existing parsing techniques for CCG by developing a new model and efficient parsing algorithm which exploits all derivations, including CCG's nonstandard derivations. 
This model and parsing algorithm, when combined with normal-form constraints, give state-of-the-art accuracy for the recovery of predicate-argument dependencies from CCGbank. The parser is also evaluated on DepBank and compared against the RASP parser, outperforming RASP overall and on the majority of relation types. The evaluation on DepBank raises a number of issues regarding parser evaluation. This article provides a comprehensive blueprint for building a wide-coverage CCG parser. We demonstrate that both accurate and highly efficient parsing is possible with CCG.", "title": "" }, { "docid": "98df90734e276e0cf020acfdcaa9b4b4", "text": "High parallel framework has been proved to be very suitable for graph processing. There are various work to optimize the implementation in FPGAs, a pipeline parallel device. The key to make use of the parallel performance of FPGAs is to process graph data in pipeline model and take advantage of on-chip memory to realize necessary locality process. This paper proposes a modularize graph processing framework, which focus on the whole executing procedure with the extremely different degree of parallelism. The framework has three contributions. First, the combination of vertex-centric and edge-centric processing framework can been adjusting in the executing procedure to accommodate top-down algorithm and bottom-up algorithm. Second, owing to the pipeline parallel and finite on-chip memory accelerator, the novel edge-block, a block consist of edges vertex, achieve optimizing the way to utilize the on-chip memory to group the edges and stream the edges in a block to realize the stream pattern to pipeline parallel processing. Third, depending to the analysis of the block structure of nature graph and the executing characteristics during graph processing, we design a novel conversion dispatcher to change processing module, to match the corresponding exchange point. Our evaluation with four graph applications on five diverse scale graph shows that .", "title": "" }, { "docid": "afd32dd6a9b076ed976ecd612c1cc14f", "text": "Many digital images contain blurred regions which are caused by motion or defocus. Automatic detection and classification of blurred image regions are very important for different multimedia analyzing tasks. This paper presents a simple and effective automatic image blurred region detection and classification technique. In the proposed technique, blurred image regions are first detected by examining singular value information for each image pixels. The blur types (i.e. motion blur or defocus blur) are then determined based on certain alpha channel constraint that requires neither image deblurring nor blur kernel estimation. Extensive experiments have been conducted over a dataset that consists of 200 blurred image regions and 200 image regions with no blur that are extracted from 100 digital images. Experimental results show that the proposed technique detects and classifies the two types of image blurs accurately. The proposed technique can be used in many different multimedia analysis applications such as image segmentation, depth estimation and information retrieval.", "title": "" }, { "docid": "d1fa477646e636a3062312d6f6444081", "text": "This paper proposes a novel attention model for semantic segmentation, which aggregates multi-scale and context features to refine prediction. 
Specifically, the skeleton convolutional neural network framework takes in multiple different scales inputs, by which means the CNN can get representations in different scales. The proposed attention model will handle the features from different scale streams respectively and integrate them. Then location attention branch of the model learns to softly weight the multi-scale features at each pixel location. Moreover, we add an recalibrating branch, parallel to where location attention comes out, to recalibrate the score map per class. We achieve quite competitive results on PASCAL VOC 2012 and ADE20K datasets, which surpass baseline and related works.", "title": "" }, { "docid": "75e9253b7c6333db1aa3cef2ab364f99", "text": "We used single-pulse transcranial magnetic stimulation of the left primary hand motor cortex and motor evoked potentials of the contralateral right abductor pollicis brevis to probe motor cortex excitability during a standard mental rotation task. Based on previous findings we tested the following hypotheses. (i) Is the hand motor cortex activated more strongly during mental rotation than during reading aloud or reading silently? The latter tasks have been shown to increase motor cortex excitability substantially in recent studies. (ii) Is the recruitment of the motor cortex for mental rotation specific for the judgement of rotated but not for nonrotated Shepard & Metzler figures? Surprisingly, motor cortex activation was higher during mental rotation than during verbal tasks. Moreover, we found strong motor cortex excitability during the mental rotation task but significantly weaker excitability during judgements of nonrotated figures. Hence, this study shows that the primary hand motor area is generally involved in mental rotation processes. These findings are discussed in the context of current theories of mental rotation, and a likely mechanism for the global excitability increase in the primary motor cortex during mental rotation is proposed.", "title": "" }, { "docid": "ff947ccb7efdd5517f9b60f9c11ade6a", "text": "Several messages express opinions about events, products, and services, political views or even their author's emotional state and mood. Sentiment analysis has been used in several applications including analysis of the repercussions of events in social networks, analysis of opinions about products and services, and simply to better understand aspects of social communication in Online Social Networks (OSNs). There are multiple methods for measuring sentiments, including lexical-based approaches and supervised machine learning methods. Despite the wide use and popularity of some methods, it is unclear which method is better for identifying the polarity (i.e., positive or negative) of a message as the current literature does not provide a method of comparison among existing methods. Such a comparison is crucial for understanding the potential limitations, advantages, and disadvantages of popular methods in analyzing the content of OSNs messages. Our study aims at filling this gap by presenting comparisons of eight popular sentiment analysis methods in terms of coverage (i.e., the fraction of messages whose sentiment is identified) and agreement (i.e., the fraction of identified sentiments that are in tune with ground truth). We develop a new method that combines existing approaches, providing the best coverage results and competitive agreement. 
We also present a free Web service called iFeel, which provides an open API for accessing and comparing results across different sentiment methods for a given text.", "title": "" }, { "docid": "68f38ad22fe2c9c24d329b181d1761d2", "text": "Data mining approach can be used to discover knowledge by analyzing the patterns or correlations among of fields in large databases. Data mining approach was used to find the patterns of the data from Tanzania Ministry of Water. It is used to predict current and future status of water pumps in Tanzania. The data mining method proposed is XGBoost (eXtreme Gradient Boosting). XGBoost implement the concept of Gradient Tree Boosting which designed to be highly fast, accurate, efficient, flexible, and portable. In addition, Recursive Feature Elimination (RFE) is also proposed to select the important features of the data to obtain an accurate model. The best accuracy achieved with using 27 input factors selected by RFE and XGBoost as a learning model. The achieved result show 80.38% in accuracy. The information or knowledge which is discovered from data mining approach can be used by the government to improve the inspection planning, maintenance, and identify which factor that can cause damage to the water pumps to ensure the availability of potable water in Tanzania. Using data mining approach is cost-effective, less time consuming and faster than manual inspection.", "title": "" }, { "docid": "997993e389cdb1e40714e20b96927890", "text": "Developer support forums are becoming more popular than ever. Crowdsourced knowledge is an essential resource for many developers yet it can raise concerns about the quality of the shared content. Most existing research efforts address the quality of answers posted by Q&A community members. In this paper, we explore the quality of questions and propose a method of predicting the score of questions on Stack Overflow based on sixteen factors related to questions' format, content and interactions that occur in the post. We performed an extensive investigation to understand the relationship between the factors and the scores of questions. The multiple regression analysis shows that the question's length of the code, accepted answer score, number of tags and the count of views, comments and answers are statistically significantly associated with the scores of questions. Our findings can offer insights to community-based Q&A sites for improving the content of the shared knowledge.", "title": "" }, { "docid": "f8e4db50272d14f026d0956ac25d39d6", "text": "Automated estimation of the allocation of a driver's visual attention could be a critical component of future advanced driver assistance systems. In theory, vision-based tracking of the eye can provide a good estimate of gaze location. But in practice, eye tracking from video is challenging because of sunglasses, eyeglass reflections, lighting conditions, occlusions, motion blur, and other factors. Estimation of head pose, on the other hand, is robust to many of these effects but can't provide as fine-grained of a resolution in localizing the gaze. For the purpose of keeping the driver safe, it's sufficient to partition gaze into regions. In this effort, a proposed system extracts facial features and classifies their spatial configuration into six regions in real time. 
The proposed method achieves an average accuracy of 91.4 percent at an average decision rate of 11 Hz on a dataset of 50 drivers from an on-road study.", "title": "" }, { "docid": "18bbb75b46f6397a6abab7e0d4af4735", "text": "This paper is concerned with the problem of estimating the relative translation and orientation of an inertial measurement unit and a camera, which are rigidly connected. The key is to realize that this problem is in fact an instance of a standard problem within the area of system identification, referred to as a gray-box problem. We propose a new algorithm for estimating the relative translation and orientation, which does not require any additional hardware, except a piece of paper with a checkerboard pattern on it. The method is based on a physical model which can also be used in solving for example sensor fusion problems. The experimental results show that the method works well in practice, both for perspective and spherical cameras.", "title": "" }, { "docid": "e69ecf0d4d04a956b53f34673e353de3", "text": "Over the past decade, the advent of new technology has brought about the emergence of smart cities aiming to provide their stakeholders with technology-based solutions that are effective and efficient. Insofar as the objective of smart cities is to improve outcomes that are connected to people, systems and processes of businesses, government and other publicand private-sector entities, its main goal is to improve the quality of life of all residents. Accordingly, smart tourism has emerged over the past few years as a subset of the smart city concept, aiming to provide tourists with solutions that address specific travel related needs. Dubai is an emerging tourism destination that has implemented smart city and smart tourism platforms to engage various stakeholders. The objective of this study is to identify best practices related to Dubai’s smart city and smart tourism. In so doing, Dubai’s mission and vision along with key dimensions and pillars are identified in relation to the advancements in the literature while highlighting key resources and challenges. A Smart Tourism Dynamic Responsive System (STDRS) framework is proposed while suggesting how Dubai may able to enhance users’ involvement and their overall experience.", "title": "" }, { "docid": "fada1434ec6e060eee9a2431688f82f3", "text": "Neural language models (NLMs) have been able to improve machine translation (MT) thanks to their ability to generalize well to long contexts. Despite recent successes of deep neural networks in speech and vision, the general practice in MT is to incorporate NLMs with only one or two hidden layers and there have not been clear results on whether having more layers helps. In this paper, we demonstrate that deep NLMs with three or four layers outperform those with fewer layers in terms of both the perplexity and the translation quality. We combine various techniques to successfully train deep NLMs that jointly condition on both the source and target contexts. When reranking nbest lists of a strong web-forum baseline, our deep models yield an average boost of 0.5 TER / 0.5 BLEU points compared to using a shallow NLM. Additionally, we adapt our models to a new sms-chat domain and obtain a similar gain of 1.0 TER / 0.5 BLEU points.", "title": "" }, { "docid": "e649c3a48eccb6165320356e94f5ed7d", "text": "There have been several attempts to create scalable and hardware independent software architectures for Unmanned Aerial Vehicles (UAV). 
In this work, we propose an onboard architecture for UAVs where hardware abstraction, data storage and communication between modules are efficiently maintained. All processing and software development is done on the UAV while state and mission status of the UAV is monitored from a ground station. The architecture also allows rapid development of mission-specific third party applications on the vehicle with the help of the core module.", "title": "" }, { "docid": "f03a96d81f7eeaf8b9befa73c2b6fbd5", "text": "This research provided the first empirical investigation of how approach and avoidance motives for sacrifice in intimate relationships are associated with personal well-being and relationship quality. In Study 1, the nature of everyday sacrifices made by dating partners was examined, and a measure of approach and avoidance motives for sacrifice was developed. In Study 2, which was a 2-week daily experience study of college students in dating relationships, specific predictions from the theoretical model were tested and both longitudinal and dyadic components were included. Whereas approach motives for sacrifice were positively associated with personal well-being and relationship quality, avoidance motives for sacrifice were negatively associated with personal well-being and relationship quality. Sacrificing for avoidance motives was particularly detrimental to the maintenance of relationships over time. Perceptions of a partner's motives for sacrifice were also associated with well-being and relationship quality. Implications for the conceptualization of relationship maintenance processes along these 2 dimensions are discussed.", "title": "" }, { "docid": "d2aebe4f8d8d90427bee7c8b71b1361f", "text": "Automated vehicles are complex systems with a high degree of interdependencies between its components. This complexity sets increasing demands for the underlying software framework. This paper firstly analyzes the requirements for software frameworks. Afterwards an overview on existing software frameworks, that have been used for automated driving projects, is provided with an in-depth introduction into an emerging open-source software framework, the Robot Operating System (ROS). After discussing the main features, advantages and disadvantages of ROS, the communication overhead of ROS is analyzed quantitatively in various configurations showing its applicability for systems with a high data load.", "title": "" }, { "docid": "c8bd7e1e70ac2dbe613c6eb8efe3bd5f", "text": "This work aims at constructing a semiotic framework for an expanded evolutionary synthesis grounded on Peirce's universal categories and the six space/time/function relations [Taborsky, E., 2004. The nature of the sign as a WFF--a well-formed formula, SEED J. (Semiosis Evol. Energy Dev.) 4 (4), 5-14] that integrate the Lamarckian (internal/external) and Darwinian (individual/population) cuts. According to these guide lines, it is proposed an attempt to formalize developmental systems theory by using the notion of evolving developing agents (EDA) that provides an internalist model of a general transformative tendency driven by organism's need to cope with environmental uncertainty. Development and evolution are conceived as non-programmed open-ended processes of information increase where EDA reach a functional compromise between: (a) increments of phenotype's uniqueness (stability and specificity) and (b) anticipation to environmental changes. 
Accordingly, changes in mutual information content between the phenotype/environment drag subsequent changes in mutual information content between genotype/phenotype and genotype/environment at two interwoven scales: individual life cycle (ontogeny) and species time (phylogeny), respectively. Developmental terminal additions along with increment minimization of developmental steps must be positively selected.", "title": "" }, { "docid": "207d3e95d3f04cafa417478ed9133fcc", "text": "Urban growth is a worldwide phenomenon but the rate of urbanization is very fast in developing country like Egypt. It is mainly driven by unorganized expansion, increased immigration, rapidly increasing population. In this context, land use and land cover change are considered one of the central components in current strategies for managing natural resources and monitoring environmental changes. In Egypt, urban growth has brought serious losses of agricultural land and water bodies. Urban growth is responsible for a variety of urban environmental issues like decreased air quality, increased runoff and subsequent flooding, increased local temperature, deterioration of water quality, etc. Egypt possessed a number of fast growing cities. Mansoura and Talkha cities in Daqahlia governorate are expanding rapidly with varying growth rates and patterns. In this context, geospatial technologies and remote sensing methodology provide essential tools which can be applied in the analysis of land use change detection. This paper is an attempt to assess the land use change detection by using GIS in Mansoura and Talkha from 1985 to 2010. Change detection analysis shows that built-up area has been increased from 28 to 255 km by more than 30% and agricultural land reduced by 33%. Future prediction is done by using the Markov chain analysis. Information on urban growth, land use and land cover change study is very useful to local government and urban planners for the betterment of future plans of sustainable development of the city. 2015 The Gulf Organisation for Research and Development. Production and hosting by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "9e6bfc7b5cc87f687a699c62da013083", "text": "In order to establish low-cost and strongly-immersive desktop virtual experiment system, a solution based on Kinect and Unity3D engine technology was herein proposed, with a view to applying Kinect gesture recognition and triggering more spontaneous human-computer interactions in three-dimensional virtual environment. A kind of algorithm tailored to the detection of concave-convex points of fingers is put forward to identify various gestures and interaction semantics. In the context of Unity3D, Finite-State Machine (FSM) programming was applied in intelligent management for experimental logic tasks. A “Virtual Experiment System for Electrician Training” was designed and put into practice by these methods. The applications of “Lighting Circuit” module prove that these methods can be satisfyingly helpful to complete virtual experimental tasks and improve user experience. Compared with traditional WIMP interaction, Kinect somatosensory interaction is combined with Unity3D so that three-dimensional virtual system with strong immersion can be established.", "title": "" } ]
scidocsrr
a49cedfb08b746c108e496b0c9f8fa5e
An Ensemble Approach for Incremental Learning in Nonstationary Environments
[ { "docid": "101af2d0539fa1470e8acfcf7c728891", "text": "OnlineEnsembleLearning", "title": "" }, { "docid": "fc5782aa3152ca914c6ca5cf1aef84eb", "text": "We introduce Learn++, an algorithm for incremental training of neural network (NN) pattern classifiers. The proposed algorithm enables supervised NN paradigms, such as the multilayer perceptron (MLP), to accommodate new data, including examples that correspond to previously unseen classes. Furthermore, the algorithm does not require access to previously used data during subsequent incremental learning sessions, yet at the same time, it does not forget previously acquired knowledge. Learn++ utilizes ensemble of classifiers by generating multiple hypotheses using training data sampled according to carefully tailored distributions. The outputs of the resulting classifiers are combined using a weighted majority voting procedure. We present simulation results on several benchmark datasets as well as a real-world classification task. Initial results indicate that the proposed algorithm works rather well in practice. A theoretical upper bound on the error of the classifiers constructed by Learn++ is also provided.", "title": "" } ]
[ { "docid": "8f2be7a7f6b5f5ba1412e8635a6aa755", "text": "In this paper, we propose to infer music genre embeddings from audio datasets carrying semantic information about genres. We show that such embeddings can be used for disambiguating genre tags (identification of different labels for the same genre, tag translation from a tag system to another, inference of hierarchical taxonomies on these genre tags). These embeddings are built by training a deep convolutional neural network genre classifier with large audio datasets annotated with a flat tag system. We show empirically that they makes it possible to retrieve the original taxonomy of a tag system, spot duplicates tags and translate tags from a tag system to another.", "title": "" }, { "docid": "16a30db315374b42d721a91bb5549763", "text": "The display units integrated in todays head-mounted displays (HMDs) provide only a limited field of view (FOV) to the virtual world. In order to present an undistorted view to the virtual environment (VE), the perspective projection used to render the VE has to be adjusted to the limitations caused by the HMD characteristics. In particular, the geometric field of view (GFOV), which defines the virtual aperture angle used for rendering of the 3D scene, is set up according to the display's field of view. A discrepancy between these two fields of view distorts the geometry of the VE in a way that either minifies or magnifies the imagery displayed to the user. Discrepancies between the geometric and physical FOV causes the imagery to be minified or magnified. This distortion has the potential to negatively or positively affect a user's perception of the virtual space, sense of presence, and performance on visual search tasks.\n In this paper we analyze if a user is consciously aware of perspective distortions of the VE displayed in the HMD. We introduce a psychophysical calibration method to determine the HMD's actual field of view, which may vary from the nominal values specified by the manufacturer. Furthermore, we conducted an experiment to identify perspective projections for HMDs which are identified as natural by subjects---even if these perspectives deviate from the perspectives that are inherently defined by the display's field of view. We found that subjects evaluate a field of view as natural when it is larger than the actual field of view of the HMD---in some cases up to 50%.", "title": "" }, { "docid": "325d6c44ef7f4d4e642e882a56f439b7", "text": "In announcing the news that “post-truth” is the Oxford Dictionaries’ 2016 word of the year, the Chicago Tribune declared that “Truth is dead. Facts are passé.”1 Politicians have shoveled this mantra our direction for centuries, but during this past presidential election, they really rubbed our collective faces in it. To be fair, the word “post” isn’t to be taken to mean “after,” as in its normal sense, but rather as “irrelevant.” Careful observers of the recent US political campaigns came to appreciate this difference. Candidates spewed streams of rhetorical effluent that didn’t even pretend to pass the most perfunctory fact-checking smell test. As the Tribune noted, far too many voters either didn’t notice or didn’t care. That said, recognizing an unwelcome phenomenon isn’t the same as legitimizing it, and now the Oxford Dictionaries group has gone too far toward the latter. They say “post-truth” captures the “ethos, mood or preoccupations of [2016] to have lasting potential as a word of cultural significance.”1 I emphatically disagree. 
I don’t know what post-truth did capture, but it didn’t capture that. We need a phrase for the 2016 mood that’s a better fit. I propose the term “gaudy facts,” for it emphasizes the garish and tawdry nature of the recent political dialog. Further, “gaudy facts” has the advantage of avoiding the word truth altogether, since there’s precious little of that in political discourse anyway. I think our new term best captures the ethos and mood of today’s political delusionists. There’s no ground truth data in sight, all claims are imaginary and unsupported without pretense of facts, and distortion is reality. This seems to fit our present experience well. The only tangible remnant of reality that isn’t subsumed under our new term is the speakers’ underlying narcissism, but at least we’re closer than we were with “post-truth.” We need to forever banish the association of the word “truth” with “politics”; these two terms just don’t play well with each other. Lies, Damn Lies, and Fake News", "title": "" }, { "docid": "55658c75bcc3a12c1b3f276050f28355", "text": "Sensing systems such as biomedical implants, infrastructure monitoring systems, and military surveillance units are constrained to consume only picowatts to nanowatts in standby and active mode, respectively. This tight power budget places ultra-low power demands on all building blocks in the systems. This work proposes a voltage reference for use in such ultra-low power systems, referred to as the 2T voltage reference, which has been demonstrated in silicon across three CMOS technologies. Prototype chips in 0.13 μm show a temperature coefficient of 16.9 ppm/°C (best) and line sensitivity of 0.033%/V, while consuming 2.22 pW in 1350 μm². The lowest functional Vdd is 0.5 V. The proposed design improves energy efficiency by 2 to 3 orders of magnitude while exhibiting better line sensitivity and temperature coefficient in less area, compared to other nanowatt voltage references. For process spread analysis, 49 dies are measured across two runs, showing the design exhibits comparable spreads in TC and output voltage to existing voltage references in the literature. Digital trimming is demonstrated, and assisted one-temperature-point digital trimming, guided by initial samples with two-temperature-point trimming, enables TC < 50 ppm/°C and ±0.35% output precision across all 25 dies. Ease of technology portability is demonstrated with silicon measurement results in 65 nm, 0.13 μm, and 0.18 μm CMOS technologies.", "title": "" }, { "docid": "4edb9dea1e949148598279c0111c4531", "text": "This paper presents a design of a highly effective triple band microstrip antenna for wireless communication applications. The triple band design is a metamaterial-based design for WLAN and WiMAX (2.4/3.5/5.6 GHz) applications. The triple band response is obtained by etching two circular and one rectangular split ring resonator (SRR) unit cells on the ground plane of a conventional patch operating at 3.56 GHz. The circular cells are introduced to resonate at 5.3 GHz for the upper WiMAX band, while the rectangular cell is designed to resonate at 2.45 GHz for the lower WLAN band. Furthermore, a novel complementary H-shaped unit cell oriented above the triple band antenna is proposed. The proposed H-shaped cell is used as a lens to significantly increase the antenna gain. 
To investigate the left-handed behavior of the proposed H-shaped, extensive parametric study for the placement of each unit cell including the metamaterial lens, which is the main parameter affecting the antenna performance, is presented and discussed comprehensively. Good consistency between the measured and simulated results is achieved. The proposed antenna meets the requirements of WiMAX and WLAN standards with high peak realized gain.", "title": "" }, { "docid": "c93c690ecb038a87c351d9674f0a881a", "text": "Foot-operated computer interfaces have been studied since the inception of human--computer interaction. Thanks to the miniaturisation and decreasing cost of sensing technology, there is an increasing interest exploring this alternative input modality, but no comprehensive overview of its research landscape. In this survey, we review the literature on interfaces operated by the lower limbs. We investigate the characteristics of users and how they affect the design of such interfaces. Next, we describe and analyse foot-based research prototypes and commercial systems in how they capture input and provide feedback. We then analyse the interactions between users and systems from the perspective of the actions performed in these interactions. Finally, we discuss our findings and use them to identify open questions and directions for future research.", "title": "" }, { "docid": "3ec603c63166167c88dc6d578a7c652f", "text": "Peer-to-peer (P2P) lending or crowdlending, is a recent innovation allows a group of individual or institutional lenders to lend funds to individuals or businesses in return for interest payment on top of capital repayments. The rapid growth of P2P lending marketplaces has heightened the need to develop a support system to help lenders make sound lending decisions. But realizing such system is challenging in the absence of formal credit data used by the banking sector. In this paper, we attempt to explore the possible connections between user credit risk and how users behave in the lending sites. We present the first analysis of user detailed clickstream data from a large P2P lending provider. Our analysis reveals that the users’ sequences of repayment histories and financial activities in the lending site, have significant predictive value for their future loan repayments. In the light of this, we propose a deep architecture named DeepCredit, to automatically acquire the knowledge of credit risk from the sequences of activities that users conduct on the site. Experiments on our large-scale real-world dataset show that our model generates a high accuracy in predicting both loan delinquency and default, and significantly outperforms a number of baselines and competitive alternatives.", "title": "" }, { "docid": "5dba3258382d9781287cdcb6b227153c", "text": "Mobile sensing systems employ various sensors in smartphones to extract human-related information. As the demand for sensing systems increases, a more effective mechanism is required to sense information about human life. In this paper, we present a systematic study on the feasibility and gaining properties of a crowdsensing system that primarily concerns sensing WiFi packets in the air. We propose that this method is effective for estimating urban mobility by using only a small number of participants. During a seven-week deployment, we collected smartphone sensor data, including approximately four million WiFi packets from more than 130,000 unique devices in a city. 
Our analysis of this dataset examines core issues in urban mobility monitoring, including feasibility, spatio-temporal coverage, scalability, and threats to privacy. Collectively, our findings provide valuable insights to guide the development of new mobile sensing systems for urban life monitoring.", "title": "" }, { "docid": "bfc8a36a8b3f1d74bad5f2e25ad3aae5", "text": "This paper presents a novel ac-dc power factor correction (PFC) power conversion architecture for a single-phase grid interface. The proposed architecture has significant advantages for achieving high efficiency, good power factor, and converter miniaturization, especially in low-to-medium power applications. The architecture enables twice-line-frequency energy to be buffered at high voltage with a large voltage swing, enabling reduction in the energy buffer capacitor size and the elimination of electrolytic capacitors. While this architecture can be beneficial with a variety of converter topologies, it is especially suited for the system miniaturization by enabling designs that operate at high frequency (HF, 3-30 MHz). Moreover, we introduce circuit implementations that provide efficient operation in this range. The proposed approach is demonstrated for an LED driver converter operating at a (variable) HF switching frequency (3-10 MHz) from 120 Vac, and supplying a 35 Vdc output at up to 30 W. The prototype converter achieves high efficiency (92%) and power factor (0.89), and maintains a good performance over a wide load range. Owing to the architecture and HF operation, the prototype achieves a high “box” power density of 50 W/in3 (“displacement” power density of 130 W/in3), with miniaturized inductors, ceramic energy buffer capacitors, and a small-volume EMI filter.", "title": "" }, { "docid": "fe5377214840549fbbb6ad520592191d", "text": "The ability to exert an appropriate amount of force on brain tissue during surgery is an important component of instrument handling. It allows surgeons to achieve the surgical objective effectively while maintaining a safe level of force in tool-tissue interaction. At the present time, this knowledge, and hence skill, is acquired through experience and is qualitatively conveyed from an expert surgeon to trainees. These forces can be assessed quantitatively by retrofitting surgical tools with sensors, thus providing a mechanism for improved performance and safety of surgery, and enhanced surgical training. This paper presents the development of a force-sensing bipolar forceps, with installation of a sensory system, that is able to measure and record interaction forces between the forceps tips and brain tissue in real time. This research is an extension of a previous research where a bipolar forceps was instrumented to measure dissection and coagulation forces applied in a single direction. Here, a planar forceps with two sets of strain gauges in two orthogonal directions was developed to enable measuring the forces with a higher accuracy. Implementation of two strain gauges allowed compensation of strain values due to deformations of the forceps in other directions (axial stiffening) and provided more accurate forces during microsurgery. An experienced neurosurgeon performed five neurosurgical tasks using the axial setup and repeated the same tasks using the planar device. The experiments were performed on cadaveric brains. Both setups were shown to be capable of measuring real-time interaction forces. 
Comparing the two setups, under the same experimental condition, indicated that the peak and mean forces quantified by planar forceps were at least 7% and 10% less than those of axial tool, respectively; therefore, utilizing readings of all strain gauges in planar forceps provides more accurate values of both peak and mean forces than axial forceps. Cross-correlation analysis between the two force signals obtained, one from each cadaveric practice, showed a high similarity between the two force signals.", "title": "" }, { "docid": "a6d4b6a0cd71a8e64c9a2429b95cd7da", "text": "Creativity research has traditionally focused on human creativity, and even more specifically, on the psychology of individual creative people. In contrast, computational creativity research involves the development and evaluation of creativity in a computational system. As we study the effect of scaling up from the creativity of a computational system and individual people to large numbers of diverse computational agents and people, we have a new perspective: creativity can ascribed to a computational agent, an individual person, collectives of people and agents and/or their interaction. By asking “Who is being creative?” this paper examines the source of creativity in computational and collective creativity. A framework based on ideation and interaction provides a way of characterizing existing research in computational and collective creativity and identifying directions for future research. Human and Computational Creativity Creativity is a topic of philosophical and scientific study considering the scenarios and human characteristics that facilitate creativity as well as the properties of computational systems that exhibit creative behavior. “The four Ps of creativity”, as introduced in Rhodes (1987) and more recently summarized by Runco (2011), decompose the complexity of creativity into separate but related influences: • Person: characteristics of the individual, • Product: an outcome focus on ideas, • Press: the environmental and contextual factors, • Process: cognitive process and thinking techniques. While the four Ps are presented in the context of the psychology of human creativity, they can be modified for computational creativity if process includes a computational process. The study of human creativity has a focus on the characteristics and cognitive behavior of creative people and the environments in which creativity is facilitated. The study of computational creativity, while inspired by concepts of human creativity, is often expressed in the formal language of search spaces and algorithms. Why do we ask who is being creative? Firstly, there is an increasing interest in understanding computational systems that can formalize or model creative processes and therefore exhibit creative behaviors or acts. Yet there are still skeptics that claim computers aren’t creative, the computer is just following instructions. Second and in contrast, there is increasing interest in computational systems that encourage and enhance human creativity that make no claims about whether the computer is being or could be creative. Finally, as we develop more capable socially intelligent computational systems and systems that enable collective intelligence among humans and computers, the boundary between human creativity and computer creativity blurs. 
As the boundary blurs, we need to develop ways of recognizing creativity that make no assumptions about whether the creative entity is a person, a computer, a potentially large group of people, or the collective intelligence of human and computational entities. This paper presents a framework that characterizes the source of creativity from two perspectives, ideation and interaction, as a guide to current and future research in computational and collective creativity. Creativity: Process and Product Understanding the nature of creativity as process and product is critical in computational creativity if we want to avoid any bias that only humans are creative and computers are not. While process and product in creativity are tightly coupled in practice, a distinction between the two provides two ways of recognizing computational creativity by describing the characteristics of a creative process and, separately, the characteristics of a creative product. Studying and describing the processes that generate creative products focuses on the cognitive behavior of a creative person or the properties of a computational system, and describing ways of recognizing a creative product focuses on the characteristics of the result of a creative process. When describing creative processes there is an assumption that there is a space of possibilities. Boden (2003) refers to this as conceptual spaces and describes these spaces as structured styles of thought. In computational systems such a space is called a state space. How such spaces are changed, or the relationship between the set of known products, the space of possibilities, and the potentially creative product, is the basis for describing processes that can generate potentially creative artifacts. There are many accounts of the processes for generating creative products. Two sources are described here: Boden (2003) from the philosophical and artificial intelligence perspective and Gero (2000) from the design science perspective. Boden (2003) describes three ways in which creative products can be generated: combination, exploration, and transformation: each one describes the way in which the conceptual space of known products provides a basis for generating a creative product and how the conceptual space changes as a result of the creative artifact. Combination brings together two or more concepts in ways that have not occurred in existing products. Exploration finds concepts in parts of the space that have not been considered in existing products. Transformation modifies concepts in the space to generate products that change the boundaries of the space. Gero (2000) describes computational processes for creative design as combination, transformation, analogy, emergence, and first principles. Combination and transformation are similar to Boden’s processes. Analogy transfers concepts from a source product that may be in a different conceptual space to a target product to generate a novel product in the target’s space. Emergence is a process that finds new underlying structures in a concept that give rise to a new product, effectively a re-representation process. First principles as a process generates new products without relying on concepts as defined in existing products. While these processes provide insight into the nature of creativity and provide a basis for computational creativity, they have little to say about how we recognize a creative product. 
As we move towards computational systems that enhance or contribute to human creativity, the articulation of process models for generating creative artifacts does not provide an evaluation of the product. Computational systems that generate creative products need evaluation criteria that are independent of the process by which the product was generated. There are also numerous approaches to defining characteristics of creative products as the basis for evaluating or assessing creativity. Boden (2003) claims that novelty and value are the essential criteria and that other aspects, such as surprise, are kinds of novelty or value. Wiggins (2006) often uses value to indicate all valuable aspects of a creative products, yet provides definitions for novelty and value as different features that are relevant to creativity. Oman and Tumer (2009) combine novelty and quality to evaluate individual ideas in engineering design as a relative measure of creativity. Shah, Smith, and Vargas-Hernandez (2003) associate creative design with ideation and develop metrics for novelty, variety, quality, and quantity of ideas. Wiggins (2006) argues that surprise is a property of the receiver of a creative artifact, that is, it is an emotional response. Cropley and Cropley (2005) propose four broad properties of products that can be used to describe the level and kind of creativity they possess: effectiveness, novelty, elegance, genesis. Besemer and O'Quin (1987) describe a Creative Product Semantic Scale which defines the creativity of products in three dimensions: novelty (the product is original, surprising and germinal), resolution (the product is valuable, logical, useful, and understandable), and elaboration and synthesis (the product is organic, elegant, complex, and well-crafted). Horn and Salvendy (2006) after doing an analysis of many properties of creative products, report on consumer perception of creativity in three critical perceptions: affect (our emotional response to the product), importance, and novelty. Goldenberg and Mazursky (2002) report on research that has found the observable characteristics of creativity in products to include \"original, of value, novel, interesting, elegant, unique, surprising.\" Amabile (1982) says it most clearly when she summarizes the social psychology literature on the assessment of creativity: While most definitions of creativity refer to novelty, appropriateness, and surprise, current creativity tests or assessment techniques are not closely linked to these criteria. She further argues that “There is no clear, explicit statement of the criteria that conceptually underlie the assessment procedures.” In response to an inability to establish and define criteria for evaluating creativity that is acceptable to all domains, Amabile (1982, 1996) introduced a Consensual Assessment Technique (CAT) in which creativity is assessed by a group of judges that are knowledgeable of the field. Since then, several scales for assisting human evaluators have been developed to guide human evaluators, for example, Besemer and O'Quin's (1999) Creative Product Semantic Scale, Reis and Renzulli's (1991) Student Product Assessment Form, and Cropley et al’s (2011) Creative Solution Diagnosis Scale. Maher (2010) presents an AI approach to evaluating creativity of a product by measuring novelty, value and surprise that provides a formal model for evaluating creative products. 
Novelty is a measure of how different the product is from existing products and is measured as a distance from clusters of other products in a conceptual space, characterizing the artifact as similar but different. Value is a measure of how the creative product co", "title": "" }, { "docid": "c8453255bf200ed841229f5e637b2074", "text": "One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a ‘‘model discrepancy’’ term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty. r 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c93c0966ef744722d58bbc9170e9a8ab", "text": "Past research has generated mixed support among social scientists for the utility of social norms in accounting for human behavior. We argue that norms do have a substantial impact on human action; however, the impact can only be properly recognized when researchers (a) separate 2 types of norms that at times act antagonistically in a situation—injunctive norms (what most others approve or disapprove) and descriptive norms (what most others do)—and (b) focus Ss' attention principally on the type of norm being studied. In 5 natural settings, focusing Ss on either the descriptive norms or the injunctive norms regarding littering caused the Ss* littering decisions to change only in accord with the dictates of the then more salient type of norm.", "title": "" }, { "docid": "1b30c14536db1161b77258b1ce213fbb", "text": "Click-through rate (CTR) prediction and relevance ranking are two fundamental problems in web advertising. In this study, we address the problem of modeling the relationship between CTR and relevance for sponsored search. We used normalized relevance scores comparable across all queries to represent relevance when modeling with CTR, instead of directly using human judgment labels or relevance scores valid only within same query. We classified clicks by identifying their relevance quality using dwell time and session information, and compared all clicks versus selective clicks effects when modeling relevance.\n Our results showed that the cleaned click signal outperforms raw click signal and others we explored, in terms of relevance score fitting. The cleaned clicks include clicks with dwell time greater than 5 seconds and last clicks in session. 
Besides traditional thoughts that there is no linear relation between click and relevance, we showed that the cleaned click based CTR can be fitted well with the normalized relevance scores using a quadratic regression model. This relevance-click model could help to train ranking models using processed click feedback to complement expensive human editorial relevance labels, or better leverage relevance signals in CTR prediction.", "title": "" }, { "docid": "2bdf2abea3e137645f53d8a9b36327ad", "text": "The use of a general-purpose code, COLSYS, is described. The code is capable of solving mixed-order systems of boundary-value problems in ordinary differential equations. The method of spline collocation at Gaussian points is implemented using a B-spline basis. Approximate solutions are computed on a sequence of automatically selected meshes until a user-specified set of tolerances is satisfied. A damped Newton's method is used for the nonlinear iteration. The code has been found to be particularly effective for difficult problems. It is intended that a user be able to use COLSYS easily after reading its algorithm description. The use of the code is then illustrated by examples demonstrating its effectiveness and capabilities.", "title": "" }, { "docid": "2df35b05a40a646ba6f826503955601a", "text": "This paper describes a new prototype system for detecting the demeanor of patients in emergency situations using the Intel RealSense camera system [1]. It describes how machine learning, a support vector machine (SVM) and the RealSense facial detection system can be used to track patient demeanour for pain monitoring. In a lab setting, the application has been trained to detect four different intensities of pain and provide demeanour information about the patient's eyes, mouth, and agitation state. Its utility as a basis for evaluating the condition of patients in situations using video, machine learning and 5G technology is discussed.", "title": "" }, { "docid": "57d40d18977bc332ba16fce1c3cf5a66", "text": "Deep neural networks are now rivaling human accuracy in several pattern recognition problems. Compared to traditional classifiers, where features are handcrafted, neural networks learn increasingly complex features directly from the data. Instead of handcrafting the features, it is now the network architecture that is manually engineered. The network architecture parameters such as the number of layers or the number of filters per layer and their interconnections are essential for good performance. Even though basic design guidelines exist, designing a neural network is an iterative trial-and-error process that takes days or even weeks to perform due to the large datasets used for training. In this paper, we present DeepEyes, a Progressive Visual Analytics system that supports the design of neural networks during training. We present novel visualizations, supporting the identification of layers that learned a stable set of patterns and, therefore, are of interest for a detailed analysis. The system facilitates the identification of problems, such as superfluous filters or layers, and information that is not being captured by the network. 
We demonstrate the effectiveness of our system through multiple use cases, showing how a trained network can be compressed, reshaped and adapted to different problems.", "title": "" }, { "docid": "d741b6f33ccfae0fc8f4a79c5c8aa9cb", "text": "A nonlinear optimal controller with a fuzzy gain scheduler has been designed and applied to a Line-Of-Sight (LOS) stabilization system. Use of Linear Quadratic Regulator (LQR) theory is an optimal and simple manner of solving many control engineering problems. However, this method cannot be utilized directly for multigimbal LOS systems since they are nonlinear in nature. To adapt LQ controllers to nonlinear systems at least a linearization of the model plant is required. When the linearized model is only valid within the vicinity of an operating point a gain scheduler is required. Therefore, a Takagi-Sugeno Fuzzy Inference System gain scheduler has been implemented, which keeps the asymptotic stability performance provided by the optimal feedback gain approach. The simulation results illustrate that the proposed controller is capable of overcoming disturbances and maintaining a satisfactory tracking performance. Keywords—Fuzzy Gain-Scheduling, Gimbal, Line-Of-Sight Stabilization, LQR, Optimal Control", "title": "" }, { "docid": "2950e3c1347c4adeeb2582046cbea4b8", "text": "We present Mime, a compact, low-power 3D sensor for unencumbered free-form, single-handed gestural interaction with head-mounted displays (HMDs). Mime introduces a real-time signal processing framework that combines a novel three-pixel time-of-flight (TOF) module with a standard RGB camera. The TOF module achieves accurate 3D hand localization and tracking, and it thus enables motion-controlled gestures. The joint processing of 3D information with RGB image data enables finer, shape-based gestural interaction.\n Our Mime hardware prototype achieves fast and precise 3D gestural control. Compared with state-of-the-art 3D sensors like TOF cameras, the Microsoft Kinect and the Leap Motion Controller, Mime offers several key advantages for mobile applications and HMD use cases: very small size, daylight insensitivity, and low power consumption. Mime is built using standard, low-cost optoelectronic components and promises to be an inexpensive technology that can either be a peripheral component or be embedded within the HMD unit. We demonstrate the utility of the Mime sensor for HMD interaction with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.", "title": "" }, { "docid": "3fd6a5960d40fa98051f7178b1abb8bd", "text": "On average, resource-abundant countries have experienced lower growth over the last four decades than their resource-poor counterparts. But the most interesting aspect of the paradox of plenty is not the average effect of natural resources, but its variation. For every Nigeria or Venezuela there is a Norway or a Botswana. Why do natural resources induce prosperity in some countries but stagnation in others? This paper gives an overview of the dimensions along which resource-abundant winners and losers differ. In light of this, it then discusses different theory models of the resource curse, with a particular emphasis on recent developments in political economy.", "title": "" } ]
scidocsrr
f13aed0918913cda0bc7bd425da0422e
CAML: Fast Context Adaptation via Meta-Learning
[ { "docid": "e28ab50c2d03402686cc9a465e1231e7", "text": "Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.", "title": "" } ]
[ { "docid": "e8cf458c60dc7b4a8f71df2fabf1558d", "text": "We propose a vision-based method that localizes a ground vehicle using publicly available satellite imagery as the only prior knowledge of the environment. Our approach takes as input a sequence of ground-level images acquired by the vehicle as it navigates, and outputs an estimate of the vehicle's pose relative to a georeferenced satellite image. We overcome the significant viewpoint and appearance variations between the images through a neural multi-view model that learns location-discriminative embeddings in which ground-level images are matched with their corresponding satellite view of the scene. We use this learned function as an observation model in a filtering framework to maintain a distribution over the vehicle's pose. We evaluate our method on different benchmark datasets and demonstrate its ability localize ground-level images in environments novel relative to training, despite the challenges of significant viewpoint and appearance variations.", "title": "" }, { "docid": "577e5f82a0a195b092d7a15df110bd96", "text": "We propose a powerful new tool for conducting research on computational intelligence and games. `PyVGDL' is a simple, high-level description language for 2D video games, and the accompanying software library permits parsing and instantly playing those games. The streamlined design of the language is based on defining locations and dynamics for simple building blocks, and the interaction effects when such objects collide, all of which are provided in a rich ontology. It can be used to quickly design games, without needing to deal with control structures, and the concise language is also accessible to generative approaches. We show how the dynamics of many classical games can be generated from a few lines of PyVGDL. The main objective of these generated games is to serve as diverse benchmark problems for learning and planning algorithms; so we provide a collection of interfaces for different types of learning agents, with visual or abstract observations, from a global or first-person viewpoint. To demonstrate the library's usefulness in a broad range of learning scenarios, we show how to learn competent behaviors when a model of the game dynamics is available or when it is not, when full state information is given to the agent or just subjective observations, when learning is interactive or in batch-mode, and for a number of different learning algorithms, including reinforcement learning and evolutionary search.", "title": "" }, { "docid": "39d6a07bc7065499eb4cb0d8adb8338a", "text": "This paper proposes a DNS Name Autoconfiguration (called DNSNA) for not only the global DNS names, but also the local DNS names of Internet of Things (IoT) devices. Since there exist so many devices in the IoT environments, it is inefficient to manually configure the Domain Name System (DNS) names of such IoT devices. By this scheme, the DNS names of IoT devices can be autoconfigured with the device's category and model in IPv6-based IoT environments. This DNS name lets user easily identify each IoT device for monitoring and remote-controlling in IoT environments. In the procedure to generate and register an IoT device's DNS name, the standard protocols of Internet Engineering Task Force (IETF) are used. 
Since the proposed scheme resolves an IoT device's DNS name into an IPv6 address in unicast through an authoritative DNS server, it generates less traffic than Multicast DNS (mDNS), which is a legacy DNS application for the DNS name service in IoT environments. Thus, the proposed scheme is more appropriate in global IoT networks than mDNS. This paper explains the design of the proposed scheme and its service scenario, such as smart road and smart home. The results of the simulation prove that our proposal outperforms the legacy scheme in terms of energy consumption.", "title": "" }, { "docid": "2e89bc59f85b14cf40a868399a3ce351", "text": "CONTEXT\nYouth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression.\n\n\nOBJECTIVE\nWe tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness.\n\n\nDESIGN\nIn 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months.\n\n\nPARTICIPANTS\nOne sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. RESULTS. Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structure equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth.\n\n\nCONCLUSIONS\nThese longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.", "title": "" }, { "docid": "6981598efd4a70f669b5abdca47b7ea1", "text": "The in-flight alignment is a critical stage for airborne inertial navigation system/Global Positioning System (INS/GPS) applications. The alignment task is usually carried out by the Kalman filtering technique that necessitates a good initial attitude to obtain a satisfying performance. Due to the airborne dynamics, the in-flight alignment is much more difficult than the alignment on the ground. An optimization-based coarse alignment approach that uses GPS position/velocity as input, founded on the newly-derived velocity/position integration formulae is proposed. Simulation and flight test results show that, with the GPS lever arm well handled, it is potentially able to yield the initial heading up to 1 deg accuracy in 10 s. 
It can serve as a nice coarse in-flight alignment without any prior attitude information for the subsequent fine Kalman alignment. The approach can also be applied to other applications that require aligning the INS on the run.", "title": "" }, { "docid": "05b4df16c35a89ee2a5b9ac482e0a297", "text": "Intensity-based classification of MR images has proven problematic, even when advanced techniques are used. Intrascan and interscan intensity inhomogeneities are a common source of difficulty. While reported methods have had some success in correcting intrascan inhomogeneities, such methods require supervision for the individual scan. This paper describes a new method called adaptive segmentation that uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images. Use of the expectation-maximization (EM) algorithm leads to a method that allows for more accurate segmentation of tissue types as well as better visualization of magnetic resonance imaging (MRI) data, that has proven to be effective in a study that includes more than 1000 brain scans. Implementation and results are described for segmenting the brain in the following types of images: axial (dual-echo spin-echo), coronal [three dimensional Fourier transform (3-DFT) gradient-echo T1-weighted] all using a conventional head coil, and a sagittal section acquired using a surface coil. The accuracy of adaptive segmentation was found to be comparable with manual segmentation, and closer to manual segmentation than supervised multivariant classification while segmenting gray and white matter.", "title": "" }, { "docid": "83224037f402a44cf7f819acbb91d69f", "text": "Chinese word segmentation (CWS) is an important task for Chinese NLP. Recently, many neural network based methods have been proposed for CWS. However, these methods require a large number of labeled sentences for model training, and usually cannot utilize the useful information in Chinese dictionary. In this paper, we propose two methods to exploit the dictionary information for CWS. The first one is based on pseudo labeled data generation, and the second one is based on multi-task learning. The experimental results on two benchmark datasets validate that our approach can effectively improve the performance of Chinese word segmentation, especially when training data is insufficient.", "title": "" }, { "docid": "7fdc12cbaa29b1f59d2a850a348317b7", "text": "Arhinia is a rare condition characterised by the congenital absence of nasal structures, with different patterns of presentation, and often associated with other craniofacial or somatic anomalies. To date, about 30 surviving cases have been reported. We report the case of a female patient aged 6 years, who underwent internal and external nose reconstruction using a staged procedure: a nasal airway was obtained through maxillary osteotomy and ostectomy, and lined with a local skin flap and split-thickness skin grafts; then the external nose was reconstructed with an expanded frontal flap, armed with an autogenous rib framework.", "title": "" }, { "docid": "c10829be320a9be6ecbc9ca751e8b56e", "text": "This article analyzes two decades of research regarding the mass media's role in shaping, perpetuating, and reducing the stigma of mental illness. It concentrates on three broad areas common in media inquiry: production, representation, and audiences. 
The analysis reveals that descriptions of mental illness and the mentally ill are distorted due to inaccuracies, exaggerations, or misinformation. The ill are presented not only as peculiar and different, but also as dangerous. Thus, the media perpetuate misconceptions and stigma. Especially prominent is the absence of agreed-upon definitions of \"mental illness,\" as well as the lack of research on the inter-relationships in audience studies between portrayals in the media and social perceptions. The analysis concludes with suggestions for further research on mass media's inter-relationships with mental illness.", "title": "" }, { "docid": "6ef52ad99498d944e9479252d22be9c8", "text": "The problem of detecting rectangular structures in images arises in many applications, from building extraction in aerial images to particle detection in cryo-electron microscopy. This paper proposes a new technique for rectangle detection using a windowed Hough transform. Every pixel of the image is scanned, and a sliding window is used to compute the Hough transform of small regions of the image. Peaks of the Hough image (which correspond to line segments) are then extracted, and a rectangle is detected when four extracted peaks satisfy certain geometric conditions. Experimental results indicate that the proposed technique produced promising results for both synthetic and natural images.", "title": "" }, { "docid": "0c891acac99279cff995a7471ea9aaff", "text": "The mainstay of diagnosis for Treponema pallidum infections is based on nontreponemal and treponemal serologic tests. Many new diagnostic methods for syphilis have been developed, using specific treponemal antigens and novel formats, including rapid point-of-care tests, enzyme immunoassays, and chemiluminescence assays. Although most of these newer tests are not yet cleared for use in the United States by the Food and Drug Administration, their performance and ease of automation have promoted their application for syphilis screening. Both sensitive and specific, new screening tests detect antitreponemal IgM and IgG antibodies by use of wild-type or recombinant T. pallidum antigens. However, these tests cannot distinguish between recent and remote or treated versus untreated infections. In addition, the screening tests require confirmation with nontreponemal tests. This use of treponemal tests for screening and nontreponemal serologic tests as confirmatory tests is a reversal of long-held practice. Clinicians need to understand the science behind these tests to use them properly in syphilis management.", "title": "" }, { "docid": "34a21bf5241d8cc3a7a83e78f8e37c96", "text": "A current-biased voltage-programmed (CBVP) pixel circuit for active-matrix organic light-emitting diode (AMOLED) displays is proposed. The pixel circuit can not only ensure an accurate and fast compensation for the threshold voltage variation and degeneration of the driving TFT and the OLED, but also provide the OLED with a negative bias during the programming period. The negative bias prevents the OLED from a possible light emitting during the programming period and potentially suppresses the degradation of the OLED.", "title": "" }, { "docid": "a0ff157e543d7944a4a83c95dd0da7b3", "text": "This paper provides a review on some of the significant research work done on abstractive text summarization. The process of generating the summary from one or more text corpus, by keeping the key points in the corpus is called text summarization. 
The most prominent technique in text summarization is an abstractive and extractive method. The extractive summarization is purely based on the algorithm and it just copies the most relevant sentence/words from the input text corpus and creating the summary. An abstractive method generates new sentences/words that may/may not be in the input corpus. This paper focuses on the abstractive text summarization. This paper explains the overview of the various processes in abstractive text summarization. It includes data processing, word embedding, basic model architecture, training, and validation process and the paper narrates the current research in this field. It includes different types of architectures, attention mechanism, supervised and reinforcement learning, the pros and cons of different architecture. Systematic comparison of different text summarization models will provide the future direction of text summarization.", "title": "" }, { "docid": "4318041c3cf82ce72da5983f20c6d6c4", "text": "In line with cloud computing emergence as the dominant enterprise computing paradigm, our conceptualization of the cloud computing reference architecture and service construction has also evolved. For example, to address the need for cost reduction and rapid provisioning, virtualization has moved beyond hardware to containers. More recently, serverless computing or Function-as-a-Service has been presented as a means to introduce further cost-efficiencies, reduce configuration and management overheads, and rapidly increase an application's ability to speed up, scale up and scale down in the cloud. The potential of this new computation model is reflected in the introduction of serverless computing platforms by the main hyperscale cloud service providers. This paper provides an overview and multi-level feature analysis of seven enterprise serverless computing platforms. It reviews extant research on these platforms and identifies the emergence of AWS Lambda as a de facto base platform for research on enterprise serverless cloud computing. The paper concludes with a summary of avenues for further research.", "title": "" }, { "docid": "5691ca09e609aea46b9fd5e7a83d165a", "text": "View-based 3-D object retrieval and recognition has become popular in practice, e.g., in computer aided design. It is difficult to precisely estimate the distance between two objects represented by multiple views. Thus, current view-based 3-D object retrieval and recognition methods may not perform well. In this paper, we propose a hypergraph analysis approach to address this problem by avoiding the estimation of the distance between objects. In particular, we construct multiple hypergraphs for a set of 3-D objects based on their 2-D views. In these hypergraphs, each vertex is an object, and each edge is a cluster of views. Therefore, an edge connects multiple vertices. We define the weight of each edge based on the similarities between any two views within the cluster. Retrieval and recognition are performed based on the hypergraphs. Therefore, our method can explore the higher order relationship among objects and does not use the distance between objects. We conduct experiments on the National Taiwan University 3-D model dataset and the ETH 3-D object collection. 
Experimental results demonstrate the effectiveness of the proposed method by comparing with the state-of-the-art methods.", "title": "" }, { "docid": "370b416dd51cfc08dc9b97f87c500eba", "text": "Apollonian circle packings arise by repeatedly filling the interstices between mutually tangent circles with further tangent circles. It is possible for every circle in such a packing to have integer radius of curvature, and we call such a packing an integral Apollonian circle packing. This paper studies number-theoretic properties of the set of integer curvatures appearing in such packings. Each Descartes quadruple of four tangent circles in the packing gives an integer solution to the Descartes equation, which relates the radii of curvature of four mutually tangent circles: x þ y þ z þ w 1⁄4 1 2 ðx þ y þ z þ wÞ: Each integral Apollonian circle packing is classified by a certain root quadruple of integers that satisfies the Descartes equation, and that corresponds to a particular quadruple of circles appearing in the packing. We express the number of root quadruples with fixed minimal element n as a class number, and give an exact formula for it. We study which integers occur in a given integer packing, and determine congruence restrictions which sometimes apply. We present evidence suggesting that the set of integer radii of curvatures that appear in an integral Apollonian circle packing has positive density, and in fact represents all sufficiently large integers not excluded by Corresponding author. E-mail addresses: [email protected] (R.L. Graham), [email protected] (J.C. Lagarias), colinm@ research.avayalabs.com (C.L. Mallows), [email protected] (A.R. Wilks), catherine.yan@math. tamu.edu (C.H. Yan). 1 Current address: Department of Computer Science, University of California at San Diego, La Jolla, CA 92093, USA. 2 Work partly done during a visit to the Institute for Advanced Study. 3 Current address: Avaya Labs, Basking Ridge, NJ 07920, USA. 0022-314X/03/$ see front matter r 2003 Elsevier Science (USA). All rights reserved. doi:10.1016/S0022-314X(03)00015-5 congruence conditions. Finally, we discuss asymptotic properties of the set of curvatures obtained as the packing is recursively constructed from a root quadruple. r 2003 Elsevier Science (USA). All rights reserved.", "title": "" }, { "docid": "5988ef7f9c5b8dd125c78c39f26d5a70", "text": "Diagnosis Related Group (DRG) upcoding is an anomaly in healthcare data that costs hundreds of millions of dollars in many developed countries. DRG upcoding is typically detected through resource intensive auditing. As supervised modeling of DRG upcoding is severely constrained by scope and timeliness of past audit data, we propose in this paper an unsupervised algorithm to filter data for potential identification of DRG upcoding. The algorithm has been applied to a hip replacement/revision dataset and a heart-attack dataset. The results are consistent with the assumptions held by domain experts.", "title": "" }, { "docid": "e4b02298a2ff6361c0a914250f956911", "text": "This paper studies efficient means in dealing with intracategory diversity in object detection. Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. 
The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.", "title": "" }, { "docid": "16eff9f2b7626f53baa95463f18d518a", "text": "The need for fine-grained power management in digital ICs has led to the design and implementation of compact, scalable low-drop out regulators (LDOs) embedded deep within logic blocks. While analog LDOs have traditionally been used in digital ICs, the need for digitally implementable LDOs embedded in digital functional units for ultrafine grained power management is paramount. This paper presents a fully-digital, phase locked LDO implemented in 32 nm CMOS. The control model of the proposed design has been provided and limits of stability have been shown. Measurement results with a resistive load as well as a digital load exhibit peak current efficiency of 98%.", "title": "" }, { "docid": "0e8efa2e84888547a1a4502883316a7a", "text": "Conservation and sustainable management of wetlands requires participation of local stakeholders, including communities. The Bigodi Wetland is unusual because it is situated in a common property landscape but the local community has been running a successful community-based natural resource management programme (CBNRM) for the wetland for over a decade. Whilst external visitors to the wetland provide ecotourism revenues we sought to quantify community benefits through the use of wetland goods such as firewood, plant fibres, and the like, and costs associated with wild animals damaging farming activities. We interviewed 68 households living close to the wetland and valued their cash and non-cash incomes from farming and collection of non-timber forest products (NTFPs) and water. The majority of households collected a wide variety of plant and fish resources and water from the wetland for household use and livestock. Overall, 53% of total household cash and non-cash income was from collected products, mostly the wetland, 28% from arable agriculture, 12% from livestock and 7% from employment and cash transfers. Female-headed households had lower incomes than male-headed ones, and with a greater reliance on NTFPs. Annual losses due to wildlife damage were estimated at 4.2% of total gross income. Most respondents felt that the wetland was important for their livelihoods, with more than 80% identifying health, education, craft materials and firewood as key benefits. Ninety-five percent felt that the wetland was in a good condition and that most residents observed the agreed CBNRM rules regarding use of the wetland. This study confirms the success of the locally run CBNRM processes underlying the significant role that the wetland plays in local livelihoods.", "title": "" } ]
scidocsrr
fb3ec739ae67416aa9f0feacf4d301c9
Computational Technique for an Efficient Classification of Protein Sequences With Distance-Based Sequence Encoding Algorithm
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" } ]
[ { "docid": "d8042183e064ffba69b54246b17b9ff4", "text": "Offshore software development is a new trend in the information technology (IT) outsourcing field, fueled by the globalization of IT and the improvement of telecommunication facilities. Countries such as India, Ireland, and Israel have established a significant presence in this market. In this article, we discuss how software processes affect offshore development projects. We use data from projects in India, and focus on three measures of project performance: effort, elapsed time, and software rework.", "title": "" }, { "docid": "69d3c943755734903b9266ca2bd2fad1", "text": "This paper describes experiments in Machine Learning for text classification using a new representation of text based on WordNet hypernyms. Six binary classification tasks of varying diff iculty are defined, and the Ripper system is used to produce discrimination rules for each task using the new hypernym density representation. Rules are also produced with the commonly used bag-of-words representation, incorporating no knowledge from WordNet. Experiments show that for some of the more diff icult tasks the hypernym density representation leads to significantly more accurate and more comprehensible rules.", "title": "" }, { "docid": "a2cf369a67507d38ac1a645e84525497", "text": "Development of a cystic mass on the nasal dorsum is a very rare complication of aesthetic rhinoplasty. Most reported cases are of mucous cyst and entrapment of the nasal mucosa in the subcutaneous space due to traumatic surgical technique has been suggested as a presumptive pathogenesis. Here, we report a case of dorsal nasal cyst that had a different pathogenesis for cyst formation. A 58-yr-old woman developed a large cystic mass on the nasal radix 30 yr after augmentation rhinoplasty with silicone material. The mass was removed via a direct open approach and the pathology findings revealed a foreign body inclusion cyst associated with silicone. Successful nasal reconstruction was performed with autologous cartilages. Discussion and a brief review of the literature will be focused on the pathophysiology of and treatment options for a postrhinoplasty dorsal cyst.", "title": "" }, { "docid": "60ac1fa826816d39562104849fff8f46", "text": "The increased attention to environmentalism in western societies has been accompanied by a rise in ecotourism, i.e. ecologically sensitive travel to remote areas to learn about ecosystems, as well as in cultural tourism, focusing on the people who are a part of ecosystems. Increasingly, the internet has partnered with ecotourism companies to provide information about destinations and facilitate travel arrangements. This study reviews the literature linking ecotourism and sustainable development, as well as prior research showing that cultures have been historically commodified in tourism advertising for developing countries destinations. We examine seven websites advertising ecotourism and cultural tourism and conclude that: (1) advertisements for natural and cultural spaces are not always consistent with the discourse of sustainability; and (2) earlier critiques of the commodification of culture in print advertising extend to internet advertising also.", "title": "" }, { "docid": "46170fe683c78a767cb15c0ac3437e83", "text": "Recently, efforts in the development of speech recognition systems and robots have come to fruition with an overflow of applications in our daily lives. 
However, we are still far from achieving natural interaction between humans and robots, given that robots do not take into account the emotional state of speakers. The objective of this research is to create an automatic emotion classifier integrated with a robot, such that the robot can understand the emotional state of a human user by analyzing the speech signals from the user. This becomes particularly relevant in the realm of using assistive robotics to tailor therapeutic techniques towards assisting children with autism spectrum disorder (ASD). Over the past two decades, the number of children being diagnosed with ASD has been rapidly increasing, yet the clinical and societal support have not been enough to cope with the needs. Therefore, finding alternative, affordable, and accessible means of therapy and assistance has become more of a concern. Improving audio-based emotion prediction for children with ASD will allow for the robotic system to properly assess the engagement level of the child and modify its responses to maximize the quality of interaction between the robot and the child and sustain an interactive learning environment.", "title": "" }, { "docid": "3a58c1a2e4428c0b875e1202055e5b13", "text": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.", "title": "" }, { "docid": "918bf13ef0289eb9b78309c83e963b26", "text": "For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.", "title": "" }, { "docid": "640fd96e02d8aa69be488323f77b40ba", "text": "Low Power Wide Area (LPWA) connectivity, a wireless wide area technology that is characterized for interconnecting devices with low bandwidth connectivity and focusing on range and power efficiency, is seen as one of the fastest-growing components of Internet-of-Things (IoT). The LPWA connectivity is used to serve a diverse range of vertical applications, including agriculture, consumer, industrial, logistic, smart building, smart city and utilities. 3GPP has defined the maiden Narrowband IoT (NB-IoT) specification in Release 13 (Rel-13) to accommodate the LPWA demand. 
Several major cellular operators, such as China Mobile, Deutsch Telekom and Vodafone, have announced their NB-IoT trials or commercial network in year 2017. In Telekom Malaysia, we have setup a NB-IoT trial network for End-to-End (E2E) integration study. Our experimental assessment showed that the battery lifetime target for NB-IoT devices as stated by 3GPP utilizing latest-to-date Commercial Off-The-Shelf (COTS) NB-IoT modules is yet to be realized. Finally, several recommendations on how to optimize the battery lifetime while designing firmware for NB-IoT device are also provided.", "title": "" }, { "docid": "aa3c0d7d023e1f9795df048ee44d92ec", "text": "Correspondence Institute of Computer Science, University of Tartu, Juhan Liivi 2, 50409 Tartu, Estonia Email: [email protected] Summary Blockchain platforms, such as Ethereum, allow a set of actors to maintain a ledger of transactions without relying on a central authority and to deploy scripts, called smart contracts, that are executedwhenever certain transactions occur. These features can be used as basic building blocks for executing collaborative business processes between mutually untrusting parties. However, implementing business processes using the low-level primitives provided by blockchain platforms is cumbersome and error-prone. In contrast, established business process management systems, such as those based on the standard Business Process Model and Notation (BPMN), provide convenient abstractions for rapid development of process-oriented applications. This article demonstrates how to combine the advantages of a business process management system with those of a blockchain platform. The article introduces a blockchain-based BPMN execution engine, namely Caterpillar. Like any BPMN execution engine, Caterpillar supports the creation of instances of a process model and allows users to monitor the state of process instances and to execute tasks thereof. The specificity of Caterpillar is that the state of each process instance is maintained on the (Ethereum) blockchain and the workflow routing is performed by smart contracts generated by a BPMN-to-Solidity compiler. The Caterpillar compiler supports a large array of BPMN constructs, including subprocesses, multi-instances activities and event handlers. The paper describes the architecture of Caterpillar, and the interfaces it provides to support the monitoring of process instances, the allocation and execution of work items, and the execution of service tasks.", "title": "" }, { "docid": "8e082f030aa5c5372fe327d4291f1864", "text": "The Internet of Things (IoT) describes the interconnection of objects (or Things) for various purposes including identification, communication, sensing, and data collection. “Things” in this context range from traditional computing devices like Personal Computers (PC) to general household objects embedded with capabilities for sensing and/or communication through the use of technologies such as Radio Frequency Identification (RFID). This conceptual paper, from a philosophical viewpoint, introduces an initial set of guiding principles also referred to in the paper as commandments that can be applied by all the stakeholders involved in the IoT during its introduction, deployment and thereafter. © 2011 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of [name organizer]", "title": "" }, { "docid": "f376948c1b8952b0b19efad3c5ca0471", "text": "This essay grew out of an examination of one-tailed significance testing. 
One-tailed tests were little advocated by the founders of modern statistics but are widely used and recommended nowadays in the biological, behavioral and social sciences. The high frequency of their use in ecology and animal behavior and their logical indefensibil-ity have been documented in a companion review paper. In the present one, we trace the roots of this problem and counter some attacks on significance testing in general. Roots include: the early but irrational dichotomization of the P scale and adoption of the 'significant/non-significant' terminology; the mistaken notion that a high P value is evidence favoring the null hypothesis over the alternative hypothesis; and confusion over the distinction between statistical and research hypotheses. Resultant widespread misuse and misinterpretation of significance tests have also led to other problems, such as unjustifiable demands that reporting of P values be disallowed or greatly reduced and that reporting of confidence intervals and standardized effect sizes be required in their place. Our analysis of these matters thus leads us to a recommendation that for standard types of significance assessment the paleoFisherian and Neyman-Pearsonian paradigms be replaced by a neoFisherian one. The essence of the latter is that a critical α (probability of type I error) is not specified, the terms 'significant' and 'non-significant' are abandoned, that high P values lead only to suspended judgments, and that the so-called \" three-valued logic \" of Cox, Kaiser, Tukey, Tryon and Harris is adopted explicitly. Confidence intervals and bands, power analyses, and severity curves remain useful adjuncts in particular situations. Analyses conducted under this paradigm we term neoFisherian significance assessments (NFSA). Their role is assessment of the existence, sign and magnitude of statistical effects. The common label of null hypothesis significance tests (NHST) is retained for paleoFisherian and Neyman-Pearsonian approaches and their hybrids. The original Neyman-Pearson framework has no utility outside quality control type applications. Some advocates of Bayesian, likelihood and information-theoretic approaches to model selection have argued that P values and NFSAs are of little or no value, but those arguments do not withstand critical review. Champions of Bayesian methods in particular continue to overstate their value and relevance. 312 Hurlbert & Lombardi • ANN. ZOOL. FeNNICI Vol. 46 \" … the object of statistical methods is the reduction of data. A quantity of data … is to be replaced by relatively few quantities which shall …", "title": "" }, { "docid": "7d68eaf1d9916b0504ac13f5ff9ef980", "text": "The success of Bitcoin largely relies on the perception of a fair underlying peer-to-peer protocol: blockchain. Fairness here essentially means that the reward (in bitcoins) given to any participant that helps maintain the consistency of the protocol by mining, is proportional to the computational power devoted by that participant to the mining task. Without such perception of fairness, honest miners might be disincentivized to maintain the protocol, leaving the space for dishonest miners to reach a majority and jeopardize the consistency of the entire system. We prove, in this paper, that blockchain is actually unfair, even in a distributed system of only two honest miners. 
In a realistic setting where message delivery is not instantaneous, the ratio between the (expected) number of blocks committed by two miners is at least exponential in the product of the message delay and the difference between the two miners’ hashrates. To obtain our result, we model the growth of blockchain, which may be of independent interest. We also apply our result to explain recent empirical observations and vulnerabilities.", "title": "" }, { "docid": "01165a990d16000ac28b0796e462147a", "text": "Esthesioneuroblastoma is a rare malignant tumor of sinonasal origin. These tumors typically present with unilateral nasal obstruction and epistaxis, and diagnosis is confirmed on biopsy. Over the past 15 years, significant advances have been made in endoscopic technology and techniques that have made this tumor amenable to expanded endonasal resection. There is growing evidence supporting the feasibility of safe and effective resection of esthesioneuroblastoma via an expanded endonasal approach. This article outlines a technique for endoscopic resection of esthesioneuroblastoma and reviews the current literature on esthesioneuroblastoma with emphasis on outcomes after endoscopic resection of these malignant tumors.", "title": "" }, { "docid": "71bafd4946377eaabff813bffd5617d7", "text": "Autumn-seeded winter cereals acquire tolerance to freezing temperatures and become vernalized by exposure to low temperature (LT). The level of accumulated LT tolerance depends on the cold acclimation rate and factors controlling timing of floral transition at the shoot apical meristem. In this study, genomic loci controlling the floral transition time were mapped in a winter wheat (T. aestivum L.) doubled haploid (DH) mapping population segregating for LT tolerance and rate of phenological development. The final leaf number (FLN), days to FLN, and days to anthesis were determined for 142 DH lines grown with and without vernalization in controlled environments. Analysis of trait data by composite interval mapping (CIM) identified 11 genomic regions that carried quantitative trait loci (QTLs) for the developmental traits studied. CIM analysis showed that the time for floral transition in both vernalized and non-vernalized plants was controlled by common QTL regions on chromosomes 1B, 2A, 2B, 6A and 7A. A QTL identified on chromosome 4A influenced floral transition time only in vernalized plants. Alleles of the LT-tolerant parent, Norstar, delayed floral transition at all QTLs except at the 2A locus. Some of the QTL alleles delaying floral transition also increased the length of vegetative growth and delayed flowering time. The genes underlying the QTLs identified in this study encode factors involved in regional adaptation of cold hardy winter wheat.", "title": "" }, { "docid": "1865a404c970d191ed55e7509b21fb9e", "text": "Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person’s appearance or the image context. 
We introduce a new Equalizer model that encourages equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. The resulting model is forced to look at a person rather than use contextual cues to make a gender-specific prediction. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. Finally, we show that our model more often looks at people when predicting their gender. 1", "title": "" }, { "docid": "7ad00ade30fad561b4caca2fb1326ed8", "text": "Today, digital games are available on a variety of mobile devices, such as tablet devices, portable game consoles and smart phones. Not only that, the latest mixed reality technology on mobile devices allows mobile games to integrate the real world environment into gameplay. However, little has been done to test whether the surroundings of play influence gaming experience. In this paper, we describe two studies done to test the effect of surroundings on immersion. Study One uses mixed reality games to investigate whether the integration of the real world environment reduces engagement. Whereas Study Two explored the effects of manipulating the lighting level, and therefore reducing visibility, of the surroundings. We found that immersion is reduced in the conditions where visibility of the surroundings is high. We argue that higher awareness of the surroundings has a strong impact on gaming experience.", "title": "" }, { "docid": "afe1be9e13ca6e2af2c5177809e7c893", "text": "Scar evaluation and revision techniques are chief among the most important skills in the facial plastic and reconstructive surgeon’s armamentarium. Often minimized in importance, these techniques depend as much on a thorough understanding of facial anatomy and aesthetics, advanced principles of wound healing, and an appreciation of the overshadowing psychological trauma as they do on thorough technical analysis and execution [1,2]. Scar revision is unique in the spectrum of facial plastic and reconstructive surgery because the initial traumatic event and its immediate treatment usually cannot be controlled. Patients who are candidates for scar revision procedures often present after significant loss of regional tissue, injury that crosses anatomically distinct facial aesthetic units, wound closure by personnel less experienced in plastic surgical technique, and poor post injury wound management [3,4]. While no scar can be removed completely, plastic surgeons can often improve the appearance of a scar, making it less obvious through the injection or application of certain steroid medications or through surgical procedures known as scar revisions.There are many variables affect the severity of scarring, including the size and depth of the wound, blood supply to the area, the thickness and color of your skin, and the direction of the scar [5,6].", "title": "" }, { "docid": "f284c6e32679d8413e366d2daf1d4613", "text": "Summary form only given. Existing studies on ensemble classifiers typically take a static approach in assembling individual classifiers, in which all the important features are specified in advance. 
In this paper, we propose a new concept, dynamic ensemble, as an advanced classifier that could have dynamic component classifiers and have dynamic configurations. Toward this goal, we have substantially expanded the existing \"overproduce and choose\" paradigm for ensemble construction. A new algorithm called BAGA is proposed to explore this approach. Taking a set of decision tree component classifiers as input, BAGA generates a set of candidate ensembles using combined bagging and genetic algorithm techniques so that component classifiers are determined at execution time. Empirical studies have been carried out on variations of the BAGA algorithm, where the sizes of chosen classifiers, effects of bag size, voting function and evaluation functions on the dynamic ensemble construction, are investigated.", "title": "" }, { "docid": "8e74a27a3edea7cf0e88317851bc15eb", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
scidocsrr
37a76d3b6c71ef173133d68ba0809244
Printflatables: Printing Human-Scale, Functional and Dynamic Inflatable Objects
[ { "docid": "bf83b9fef9b4558538b2207ba57b4779", "text": "This paper presents preliminary results for the design, development and evaluation of a hand rehabilitation glove fabricated using soft robotic technology. Soft actuators comprised of elastomeric materials with integrated channels that function as pneumatic networks (PneuNets), are designed and geometrically analyzed to produce bending motions that can safely conform with the human finger motion. Bending curvature and force response of these actuators are investigated using geometrical analysis and a finite element model (FEM) prior to fabrication. The fabrication procedure of the chosen actuator is described followed by a series of experiments that mechanically characterize the actuators. The experimental data is compared to results obtained from FEM simulations showing good agreement. Finally, an open-palm glove design and the integration of the actuators to it are described, followed by a qualitative evaluation study.", "title": "" } ]
[ { "docid": "f136e875f021ea3ea67a87c6d0b1e869", "text": "Platelet-rich plasma (PRP) has been utilized for many years as a regenerative agent capable of inducing vascularization of various tissues using blood-derived growth factors. Despite this, drawbacks mostly related to the additional use of anti-coagulants found in PRP have been shown to inhibit the wound healing process. For these reasons, a novel platelet concentrate has recently been developed with no additives by utilizing lower centrifugation speeds. The purpose of this study was therefore to investigate osteoblast behavior of this novel therapy (injectable-platelet-rich fibrin; i-PRF, 100% natural with no additives) when compared to traditional PRP. Human primary osteoblasts were cultured with either i-PRF or PRP and compared to control tissue culture plastic. A live/dead assay, migration assay as well as a cell adhesion/proliferation assay were investigated. Furthermore, osteoblast differentiation was assessed by alkaline phosphatase (ALP), alizarin red and osteocalcin staining, as well as real-time PCR for genes encoding Runx2, ALP, collagen1 and osteocalcin. The results showed that all cells had high survival rates throughout the entire study period irrespective of culture-conditions. While PRP induced a significant 2-fold increase in osteoblast migration, i-PRF demonstrated a 3-fold increase in migration when compared to control tissue-culture plastic and PRP. While no differences were observed for cell attachment, i-PRF induced a significantly higher proliferation rate at three and five days when compared to PRP. Furthermore, i-PRF induced significantly greater ALP staining at 7 days and alizarin red staining at 14 days. A significant increase in mRNA levels of ALP, Runx2 and osteocalcin, as well as immunofluorescent staining of osteocalcin was also observed in the i-PRF group when compared to PRP. In conclusion, the results from the present study favored the use of the naturally-formulated i-PRF when compared to traditional PRP with anti-coagulants. Further investigation into the direct role of fibrin and leukocytes contained within i-PRF are therefore warranted to better elucidate their positive role in i-PRF on tissue wound healing.", "title": "" }, { "docid": "2ce4d585edd54cede6172f74cf9ab8bb", "text": "Enterprise resource planning (ERP) systems have been widely implemented by numerous firms throughout the industrial world. While success stories of ERP implementation abound due to its potential in resolving the problem of fragmented information, a substantial number of these implementations fail to meet the goals of the organization. Some are abandoned altogether and others contribute to the failure of an organization. This article seeks to identify the critical factors of ERP implementation and uses statistical analysis to further delineate the patterns of adoption of the various concepts. A cross-sectional mail survey was mailed to business executives who have experience in the implementation of ERP systems. The results of this study provide empirical evidence that the theoretical constructs of ERP implementation are followed at varying levels. It offers some fresh insights into the current practice of ERP implementation. 
In addition, this study fills the need for ERP implementation constructs that can be utilized for further study of this important topic.", "title": "" }, { "docid": "64c1c37422037fc9156db42cdcdbe7fe", "text": "[Context] It is an enigma that agile projects can succeed ‘without requirements’ when weak requirements engineering is a known cause for project failures. While agile development projects often manage well without extensive requirements test cases are commonly viewed as requirements and detailed requirements are documented as test cases. [Objective] We have investigated this agile practice of using test cases as requirements to understand how test cases can support the main requirements activities, and how this practice varies. [Method] We performed an iterative case study at three companies and collected data through 14 interviews and 2 focus groups. [Results] The use of test cases as requirements poses both benefits and challenges when eliciting, validating, verifying, and managing requirements, and when used as a documented agreement. We have identified five variants of the test-cases-as-requirements practice, namely de facto, behaviour-driven, story-test driven, stand-alone strict and stand-alone manual for which the application of the practice varies concerning the time frame of requirements documentation, the requirements format, the extent to which the test cases are a machine executable specification and the use of tools which provide specific support for the practice of using test cases as requirements. [Conclusions] The findings provide empirical insight into how agile development projects manage and communicate requirements. The identified variants of the practice of using test cases as requirements can be used to perform in-depth investigations into agile requirements engineering. Practitioners can use the provided recommendations as a guide in designing and improving their agile requirements practices based on project characteristics such as number of stakeholders and rate of change.", "title": "" }, { "docid": "b169e0e76f26db1f08cd84524aa10a53", "text": "A very lightweight, broad-band, dual polarized antenna array with 128 elements for the frequency range from 7 GHz to 18 GHz has been designed, manufactured and measured. The total gain at the center frequency was measured to be 20 dBi excluding feeding network losses.", "title": "" }, { "docid": "9520b99708d905d3713867fac14c3814", "text": "When people work together to analyze a data set, they need to organize their findings, hypotheses, and evidence, share that information with their collaborators, and coordinate activities amongst team members. Sharing externalizations (recorded information such as notes) could increase awareness and assist with team communication and coordination. However, we currently know little about how to provide tool support for this sort of sharing. We explore how linked common work (LCW) can be employed within a `collaborative thinking space', to facilitate synchronous collaborative sensemaking activities in Visual Analytics (VA). Collaborative thinking spaces provide an environment for analysts to record, organize, share and connect externalizations. Our tool, CLIP, extends earlier thinking spaces by integrating LCW features that reveal relationships between collaborators' findings. We conducted a user study comparing CLIP to a baseline version without LCW. Results demonstrated that LCW significantly improved analytic outcomes at a collaborative intelligence task. 
Groups using CLIP were also able to more effectively coordinate their work, and held more discussion of their findings and hypotheses. LCW enabled them to maintain awareness of each other's activities and findings and link those findings to their own work, preventing disruptive oral awareness notifications.", "title": "" }, { "docid": "910a416dc736ec3566583c57123ac87c", "text": "Internet of Things (IoT) is one of the greatest technology revolutions in the history. Due to IoT potential, daily objects will be consciously worked in harmony with optimized performances. However, today, technology is not ready to fully bring its power to our daily life because of huge data analysis requirements in instant time. On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make the revolution in our life. However, the traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far away from standardization. Therefore, to meet the expectations of users, the traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because of considering neither heterogeneous servers nor dynamic scheduling approach for different priority requests. Our objective is to propose Husnu S. Narman [email protected] 1 Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, 29634, USA 2 Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Zahir Raihan Rd, Dhaka, 1000, Bangladesh 3 School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering priorities of requests. Results show that the proposed scheduling algorithm improves throughput up to 40 % in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments byconsidering systemperformancemetrics, such as drop rate, throughput, and utilization in IoT.", "title": "" }, { "docid": "dac5cebcbc14b82f7b8df977bed0c9d8", "text": "While blockchain services hold great promise to improve many different industries, there are significant cybersecurity concerns which must be addressed. In this paper, we investigate security considerations for an Ethereum blockchain hosting a distributed energy management application. We have simulated a microgrid with ten buildings in the northeast U.S., and results of the transaction distribution and electricity utilization are presented. We also present the effects on energy distribution when one or two smart meters have their identities corrupted. We then propose a new approach to digital identity management that would require smart meters to authenticate with the blockchain ledger and mitigate identity-spoofing attacks. 
Applications of this approach to defense against port scans and DDoS, attacks are also discussed.", "title": "" }, { "docid": "e5bf05ae6700078dda83eca8d2f65cd4", "text": "We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT systems using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-theart results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hints at a universal interlingua representation in our models and also show some interesting examples when mixing languages.", "title": "" }, { "docid": "c1fecb605dcabbd411e3782c15fd6546", "text": "Neuropathic pain is a debilitating form of chronic pain that affects 6.9-10% of the population. Health-related quality-of-life is impeded by neuropathic pain, which not only includes physical impairment, but the mental wellbeing of the patient is also hindered. A reduction in both physical and mental wellbeing bares economic costs that need to be accounted for. A variety of medications are in use for the treatment of neuropathic pain, such as calcium channel α2δ agonists, serotonin/noradrenaline reuptake inhibitors and tricyclic antidepressants. However, recent studies have indicated a lack of efficacy regarding the aforementioned medication. There is increasing clinical and pre-clinical evidence that can point to the use of ketamine, an “old” anaesthetic, in the management of neuropathic pain. Conversely, to see ketamine being used in neuropathic pain, there needs to be more conclusive evidence exploring the long-term effects of sub-anesthetic ketamine.", "title": "" }, { "docid": "5b463701f83f7e6651260c8f55738146", "text": "Heart disease diagnosis is a complex task which requires much experience and knowledge. Traditional way of predicting Heart disease is doctor’s examination or number of medical tests such as ECG, Stress Test, and Heart MRI etc. Nowadays, Health care industry contains huge amount of heath care data, which contains hidden information. This hidden information is useful for making effective decisions. Computer based information along with advanced Data mining techniques are used for appropriate results. Neural network is widely used tool for predicting Heart disease diagnosis. In this research paper, a Heart Disease Prediction system (HDPS) is developed using Neural network. The HDPS system predicts the likelihood of patient getting a Heart disease. For prediction, the system uses sex, blood pressure, cholesterol like 13 medical parameters. Here two more parameters are added i.e. obesity and smoking for better accuracy. 
From the results, it has been seen that neural network predict heart disease with nearly 100% accuracy.", "title": "" }, { "docid": "a2f1a10c0e89f6d63f493c267759fb8f", "text": "BACKGROUND\nPatient portals tied to provider electronic health record (EHR) systems are increasingly popular.\n\n\nPURPOSE\nTo systematically review the literature reporting the effect of patient portals on clinical care.\n\n\nDATA SOURCES\nPubMed and Web of Science searches from 1 January 1990 to 24 January 2013.\n\n\nSTUDY SELECTION\nHypothesis-testing or quantitative studies of patient portals tethered to a provider EHR that addressed patient outcomes, satisfaction, adherence, efficiency, utilization, attitudes, and patient characteristics, as well as qualitative studies of barriers or facilitators, were included.\n\n\nDATA EXTRACTION\nTwo reviewers independently extracted data and addressed discrepancies through consensus discussion.\n\n\nDATA SYNTHESIS\nFrom 6508 titles, 14 randomized, controlled trials; 21 observational, hypothesis-testing studies; 5 quantitative, descriptive studies; and 6 qualitative studies were included. Evidence is mixed about the effect of portals on patient outcomes and satisfaction, although they may be more effective when used with case management. The effect of portals on utilization and efficiency is unclear, although patient race and ethnicity, education level or literacy, and degree of comorbid conditions may influence use.\n\n\nLIMITATION\nLimited data for most outcomes and an absence of reporting on organizational and provider context and implementation processes.\n\n\nCONCLUSION\nEvidence that patient portals improve health outcomes, cost, or utilization is insufficient. Patient attitudes are generally positive, but more widespread use may require efforts to overcome racial, ethnic, and literacy barriers. Portals represent a new technology with benefits that are still unclear. Better understanding requires studies that include details about context, implementation factors, and cost.", "title": "" }, { "docid": "1eef21abdf14dc430b333cac71d4fe07", "text": "The authors have developed an adaptive matched filtering algorithm based upon an artificial neural network (ANN) for QRS detection. They use an ANN adaptive whitening filter to model the lower frequencies of the electrocardiogram (ECG) which are inherently nonlinear and nonstationary. The residual signal which contains mostly higher frequency QRS complex energy is then passed through a linear matched filter to detect the location of the QRS complex. The authors developed an algorithm to adaptively update the matched filter template from the detected QRS complex in the ECG signal itself so that the template can be customized to an individual subject. This ANN whitening filter is very effective at removing the time-varying, nonlinear noise characteristic of ECG signals. The detection rate for a very noisy patient record in the MIT/BIH arrhythmia database is 99.5% with this approach, which compares favorably to the 97.5% obtained using a linear adaptive whitening filter and the 96.5% achieved with a bandpass filtering method.<<ETX>>", "title": "" }, { "docid": "a0d4089e55a0a392a2784ae50b6fa779", "text": "Organizations place a great deal of emphasis on hiring individuals who are a good fit for the organization and the job. Among the many ways that individuals are screened for a job, the employment interview is particularly prevalent and nearly universally used (Macan, 2009; Huffcutt and Culbertson, 2011). 
This Research Topic is devoted to a construct that plays a critical role in our understanding of job interviews: impression management (IM). In the interview context, IM describes behaviors an individual uses to influence the impression that others have of them (Bozeman and Kacmar, 1997). For instance, a job applicant can flatter an interviewer to be seen as likable (i.e., ingratiation), play up their qualifications and abilities to be seen as competent (i.e., self-promotion), or utilize excuses or justifications to make up for a negative event or error (i.e., defensive IM; Ellis et al., 2002). IM has emerged as a central theme in the interview literature over the last several decades (for reviews, see Posthuma et al., 2002; Levashina et al., 2014). Despite some pioneering early work (e.g., Schlenker, 1980; Leary and Kowalski, 1990; Stevens and Kristof, 1995), there has been a resurgence of interest in the area over the last decade. While the literature to date has set up a solid foundational knowledge about interview IM, there are a number of emerging trends and directions. In the following, we lay out some critical areas of inquiry in interview IM, and highlight how the innovative set of papers in this Research Topic is illustrative of these new directions.", "title": "" }, { "docid": "5fbb54e63158066198cdf59e1a8e9194", "text": "In this paper, we present results of a study of the data rate fairness among nodes within a LoRaWAN cell. Since LoRa/LoRaWAN supports various data rates, we firstly derive the fairest ratios of deploying each data rate within a cell for a fair collision probability. LoRa/LoRaWan, like other frequency modulation based radio interfaces, exhibits the capture effect in which only the stronger signal of colliding signals will be extracted. This leads to unfairness, where far nodes or nodes experiencing higher attenuation are less likely to see their packets received correctly. Therefore, we secondly develop a transmission power control algorithm to balance the received signal powers from all nodes regardless of their distances from the gateway for a fair data extraction. Simulations show that our approach achieves higher fairness in data rate than the state-of-art in almost all network configurations.", "title": "" }, { "docid": "0a16eb6bfb41a708e7a660cbf4c445af", "text": "Data from 1,010 lactating lactating, predominately component-fed Holstein cattle from 25 predominately tie-stall dairy farms in southwest Ontario were used to identify objective thresholds for defining hyperketonemia in lactating dairy cattle based on negative impacts on cow health, milk production, or both. Serum samples obtained during wk 1 and 2 postpartum and analyzed for beta-hydroxybutyrate (BHBA) concentrations that were used in analysis. Data were time-ordered so that the serum samples were obtained at least 1 d before the disease or milk recording events. Serum BHBA cutpoints were constructed at 200 micromol/L intervals between 600 and 2,000 micromol/L. Critical cutpoints for the health analysis were determined based on the threshold having the greatest sum of sensitivity and specificity for predicting the disease occurrence. For the production outcomes, models for first test day milk yield, milk fat, and milk protein percentage were constructed including covariates of parity, precalving body condition score, season of calving, test day linear score, and the random effect of herd. 
Each cutpoint was tested in these models to determine the threshold with the greatest impact and least risk of a type 1 error. Serum BHBA concentrations at or above 1,200 micromol/L in the first week following calving were associated with increased risks of subsequent displaced abomasum [odds ratio (OR) = 2.60] and metritis (OR = 3.35), whereas the critical threshold of BHBA in wk 2 postpartum on the risk of abomasal displacement was >or=1,800 micromol/L (OR = 6.22). The best threshold for predicting subsequent risk of clinical ketosis from serum obtained during wk 1 and wk 2 postpartum was 1,400 micromol/L of BHBA (OR = 4.25 and 5.98, respectively). There was no association between clinical mastitis and elevated serum BHBA in wk 1 or 2 postpartum, and there was no association between wk 2 BHBA and risk of metritis. Greater serum BHBA measured during the first and second week postcalving were associated with less milk yield, greater milk fat percentage, and less milk protein percentage on the first Dairy Herd Improvement test day of lactation. Impacts on first Dairy Herd Improvement test milk yield began at BHBA >or=1,200 micromol/L for wk 1 samples and >or=1,400 micromol/L for wk 2 samples. The greatest impact on yield occurred at 1,400 micromol/L (-1.88 kg/d) and 2,000 micromol/L (-3.3 kg/d) for sera from the first and second week postcalving, respectively. Hyperketonemia can be defined at 1,400 micromol/L of BHBA and in the first 2 wk postpartum increases disease risk and results in substantial loss of milk yield in early lactation.", "title": "" }, { "docid": "4c563b09a10ce0b444edb645ce411d42", "text": "Privacy and security are two important but seemingly contradictory objectives in a pervasive computing environment (PCE). On one hand, service providers want to authenticate legitimate users and make sure they are accessing their authorized services in a legal way. On the other hand, users want to maintain the necessary privacy without being tracked down for wherever they are and whatever they are doing. In this paper, a novel privacy preserving authentication and access control scheme to secure the interactions between mobile users and services in PCEs is proposed. The proposed scheme seamlessly integrates two underlying cryptographic primitives, namely blind signature and hash chain, into a highly flexible and lightweight authentication and key establishment protocol. The scheme provides explicit mutual authentication between a user and a service while allowing the user to anonymously interact with the service. Differentiated service access control is also enabled in the proposed scheme by classifying mobile users into different service groups. The correctness of the proposed authentication and key establishment protocol is formally verified based on Burrows-Abadi-Needham logic", "title": "" }, { "docid": "9a30008cc270ac7a0bb1a0f12dca6187", "text": "Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains an unsolved challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. 
We develop a data-efficient Graph Convolutional Network (GCN) algorithm, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. Overall, we can train on and embed graphs that are four orders of magnitude larger than typical GCN implementations. We show how GCN embeddings can be used to make high-quality recommendations in various settings at Pinterest, which has a massive underlying graph with 3 billion nodes representing pins and boards, and 17 billion edges. According to offline metrics, user studies, as well as A/B tests, our approach generates higher-quality recommendations than comparable deep learning based systems. To our knowledge, this is by far the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.", "title": "" }, { "docid": "4b8f59d1b416d4869ae38dbca0eaca41", "text": "This study investigates high frequency currency trading with neural networks trained via Recurrent Reinforcement Learning (RRL). We compare the performance of single layer networks with networks having a hidden layer, and examine the impact of the fixed system parameters on performance. In general, we conclude that the trading systems may be effective, but the performance varies widely for different currency markets and this variability cannot be explained by simple statistics of the markets. Also we find that the single layer network outperforms the two layer network in this application.", "title": "" }, { "docid": "ec7b348a0fe38afa02989a22aa9dcac2", "text": "We propose a general framework for learning from labeled and unlabeled data on a directed graph in which the structure of the graph including the directionality of the edges is considered. The time complexity of the algorithm derived from this framework is nearly linear due to recently developed numerical techniques. In the absence of labeled instances, this framework can be utilized as a spectral clustering method for directed graphs, which generalizes the spectral clustering approach for undirected graphs. We have applied our framework to real-world web classification problems and obtained encouraging results.", "title": "" } ]
scidocsrr
0bd9c78ab4332552b8a0deee10c732db
Programming models for sensor networks: A survey
[ { "docid": "f3574f1e3f0ef3a5e1d20cb15b040105", "text": "Composed of tens of thousands of tiny devices with very limited resources (\"motes\"), sensor networks are subject to novel systems problems and constraints. The large number of motes in a sensor network means that there will often be some failing nodes; networks must be easy to repopulate. Often there is no feasible method to recharge motes, so energy is a precious resource. Once deployed, a network must be reprogrammable although physically unreachable, and this reprogramming can be a significant energy cost.We present Maté, a tiny communication-centric virtual machine designed for sensor networks. Maté's high-level interface allows complex programs to be very short (under 100 bytes), reducing the energy cost of transmitting new programs. Code is broken up into small capsules of 24 instructions, which can self-replicate through the network. Packet sending and reception capsules enable the deployment of ad-hoc routing and data aggregation algorithms. Maté's concise, high-level program representation simplifies programming and allows large networks to be frequently reprogrammed in an energy-efficient manner; in addition, its safe execution environment suggests a use of virtual machines to provide the user/kernel boundary on motes that have no hardware protection mechanisms.", "title": "" } ]
[ { "docid": "0f3cad05c9c267f11c4cebd634a12c59", "text": "The recent, exponential rise in adoption of the most disparate Internet of Things (IoT) devices and technologies has reached also Agriculture and Food (Agri-Food) supply chains, drumming up substantial research and innovation interest towards developing reliable, auditable and transparent traceability systems. Current IoT-based traceability and provenance systems for Agri-Food supply chains are built on top of centralized infrastructures and this leaves room for unsolved issues and major concerns, including data integrity, tampering and single points of failure. Blockchains, the distributed ledger technology underpinning cryptocurrencies such as Bitcoin, represent a new and innovative technological approach to realizing decentralized trustless systems. Indeed, the inherent properties of this digital technology provide fault-tolerance, immutability, transparency and full traceability of the stored transaction records, as well as coherent digital representations of physical assets and autonomous transaction executions. This paper presents AgriBlockIoT, a fully decentralized, blockchain-based traceability solution for Agri-Food supply chain management, able to seamless integrate IoT devices producing and consuming digital data along the chain. To effectively assess AgriBlockIoT, first, we defined a classical use-case within the given vertical domain, namely from-farm-to-fork. Then, we developed and deployed such use-case, achieving traceability using two different blockchain implementations, namely Ethereum and Hyperledger Sawtooth. Finally, we evaluated and compared the performance of both the deployments, in terms of latency, CPU, and network usage, also highlighting their main pros and cons.", "title": "" }, { "docid": "49fa638e44d13695217c7f1bbb3f6ebd", "text": "Kernel methods enable the direct usage of structured representations of textual data during language learning and inference tasks. Expressive kernels, such as Tree Kernels, achieve excellent performance in NLP. On the other side, deep neural networks have been demonstrated effective in automatically learning feature representations during training. However, their input is tensor data, i.e., they cannot manage rich structured information. In this paper, we show that expressive kernels and deep neural networks can be combined in a common framework in order to (i) explicitly model structured information and (ii) learn non-linear decision functions. We show that the input layer of a deep architecture can be pre-trained through the application of the Nyström low-rank approximation of kernel spaces. The resulting “kernelized” neural network achieves state-of-the-art accuracy in three different tasks.", "title": "" }, { "docid": "4b68d3c94ef785f80eac9c4c6ca28cfe", "text": "We address the problem of recovering a common set of covariates that are relevant simultaneously to several classification problems. By penalizing the sum of l2-norms of the blocks of coefficients associated with each covariate across different classification problems, similar sparsity patterns in all models are encouraged. To take computational advantage of the sparsity of solutions at high regularization levels, we propose a blockwise path-following scheme that approximately traces the regularization path. As the regularization coefficient decreases, the algorithm maintains and updates concurrently a growing set of covariates that are simultaneously active for all problems. 
We also show how to use random projections to extend this approach to the problem of joint subspace selection, where multiple predictors are found in a common low-dimensional subspace. We present theoretical results showing that this random projection approach converges to the solution yielded by trace-norm regularization. Finally, we present a variety of experimental results exploring joint covariate selection and joint subspace selection, comparing the path-following approach to competing algorithms in terms of prediction accuracy and running time.", "title": "" }, { "docid": "54b43b5e3545710dfe37f55b93084e34", "text": "Cloud computing is a model for delivering information technology services, wherein resources are retrieved from the Internet through web-based tools and applications instead of a direct connection to a server. The capability to provision and release cloud computing resources with minimal management effort or service provider interaction led to the rapid increase of the use of cloud computing. Therefore, balancing cloud computing resources to provide better performance and services to end users is important. Load balancing in cloud computing means balancing three important stages through which a request is processed. The three stages are data center selection, virtual machine scheduling, and task scheduling at a selected data center. User task scheduling plays a significant role in improving the performance of cloud services. This paper presents a review of various energy-efficient task scheduling methods in a cloud environment. A brief analysis of various scheduling parameters considered in these methods is also presented. The results show that the best power-saving percentage level can be achieved by using both DVFS and DNS.", "title": "" }, { "docid": "ca8bb290339946e2d3d3e14c01023aa5", "text": "OBJECTIVE\nTo establish a centile chart of cervical length between 18 and 32 weeks of gestation in a low-risk population of women.\n\n\nMETHODS\nA prospective longitudinal cohort study of women with a low risk, singleton pregnancy using public healthcare facilities in Cape Town, South Africa. Transvaginal measurement of cervical length was performed between 16 and 32 weeks of gestation and used to construct centile charts. The distribution of cervical length was determined for gestational ages and was used to establish estimates of longitudinal percentiles. Centile charts were constructed for nulliparous and multiparous women together and separately.\n\n\nRESULTS\nCentile estimation was based on data from 344 women. Percentiles showed progressive cervical shortening with increasing gestational age. Averaged over the entire follow-up period, mean cervical length was 1.5 mm shorter in nulliparous women compared with multiparous women (95% CI, 0.4-2.6).\n\n\nCONCLUSIONS\nEstablishment of longitudinal reference values of cervical length in a low-risk population will contribute toward a better understanding of cervical length in women at risk for preterm labor.", "title": "" }, { "docid": "2d0cc17115692f1e72114c636ba74811", "text": "A new inline coupling topology for narrowband helical resonator filters is proposed that allows to introduce selectively located transmission zeros (TZs) in the stopband. We show that a pair of helical resonators arranged in an interdigital configuration can realize a large range of in-band coupling coefficient values and also selectively position a TZ in the stopband. 
The proposed technique dispenses the need for auxiliary elements, so that the size, complexity, power handling and insertion loss of the filter are not compromised. A second order prototype filter with dimensions of the order of 0.05λ, power handling capability up to 90 W, measured insertion loss of 0.18 dB and improved selectivity is presented.", "title": "" }, { "docid": "b5d3c7822f2ba9ca89d474dda5f180b6", "text": "We consider a class of a nested optimization problems involving inner and outer objectives. We observe that by taking into explicit account the optimization dynamics for the inner objective it is possible to derive a general framework that unifies gradient-based hyperparameter optimization and meta-learning (or learning-to-learn). Depending on the specific setting, the variables of the outer objective take either the meaning of hyperparameters in a supervised learning problem or parameters of a meta-learner. We show that some recently proposed methods in the latter setting can be instantiated in our framework and tackled with the same gradient-based algorithms. Finally, we discuss possible design patterns for learning-to-learn and present encouraging preliminary experiments for few-shot learning.", "title": "" }, { "docid": "d8752c40782d8189d454682d1d30738e", "text": "This article reviews the empirical literature on personality, leadership, and organizational effectiveness to make 3 major points. First, leadership is a real and vastly consequential phenomenon, perhaps the single most important issue in the human sciences. Second, leadership is about the performance of teams, groups, and organizations. Good leadership promotes effective team and group performance, which in turn enhances the well-being of the incumbents; bad leadership degrades the quality of life for everyone associated with it. Third, personality predicts leadership—who we are is how we lead—and this information can be used to select future leaders or improve the performance of current incumbents.", "title": "" }, { "docid": "1461157186183f11d7270d89eecd926a", "text": "This review analyzes trends and commonalities among prominent theories of media effects. On the basis of exemplary meta-analyses of media effects and bibliometric studies of well-cited theories, we identify and discuss five features of media effects theories as well as their empirical support. Each of these features specifies the conditions under which media may produce effects on certain types of individuals. Our review ends with a discussion of media effects in newer media environments. This includes theories of computer-mediated communication, the development of which appears to share a similar pattern of reformulation from unidirectional, receiver-oriented views, to theories that recognize the transactional nature of communication. We conclude by outlining challenges and promising avenues for future research.", "title": "" }, { "docid": "1a69b777e03d2d2589dd9efb9cda2a10", "text": "Three-dimensional measurement of joint motion is a promising tool for clinical evaluation and therapeutic treatment comparisons. Although many devices exist for joints kinematics assessment, there is a need for a system that could be used in routine practice. Such a system should be accurate, ambulatory, and easy to use. The combination of gyroscopes and accelerometers (i.e., inertial measurement unit) has proven to be suitable for unrestrained measurement of orientation during a short period of time (i.e., few minutes). 
However, due to their inability to detect horizontal reference, inertial-based systems generally fail to measure differential orientation, a prerequisite for computing the three-dimentional knee joint angle recommended by the Internal Society of Biomechanics (ISB). A simple method based on a leg movement is proposed here to align two inertial measurement units fixed on the thigh and shank segments. Based on the combination of the former alignment and a fusion algorithm, the three-dimensional knee joint angle is measured and compared with a magnetic motion capture system during walking. The proposed system is suitable to measure the absolute knee flexion/extension and abduction/adduction angles with mean (SD) offset errors of -1 degree (1 degree ) and 0 degrees (0.6 degrees ) and mean (SD) root mean square (RMS) errors of 1.5 degrees (0.4 degrees ) and 1.7 degrees (0.5 degrees ). The system is also suitable for the relative measurement of knee internal/external rotation (mean (SD) offset error of 3.4 degrees (2.7 degrees )) with a mean (SD) RMS error of 1.6 degrees (0.5 degrees ). The method described in this paper can be easily adapted in order to measure other joint angular displacements such as elbow or ankle.", "title": "" }, { "docid": "88def96b7287ce217f1abf8fb1b413a5", "text": "Designing a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In a such situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed the framework for unsupervised sequence generation, in which a metric is learned from data, and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn better metric than SeqGAN’s: partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. SeqGAN employs a reward function for completed sequence only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate a partial correctness as well as a coherence of a sequence, as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN frameworks. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and the model trained with MLE. Specifically, whereas SeqGAN gains 0.42 improvement of NLL over MLE on synthetic data, our best model gains 3.02 improvement, and whereas SeqGAN gains 0.029 improvement of BLEU over MLE, our best model gains 0.250 improvement.", "title": "" }, { "docid": "2de3078c249eb87b041a2a74b6efcfdf", "text": "To lay the groundwork for devising, improving and implementing strategies to prevent or delay the onset of disability in the elderly, we conducted a systematic literature review of longitudinal studies published between 1985 and 1997 that reported statistical associations between individual base-line risk factors and subsequent functional status in community-living older persons. 
Functional status decline was defined as disability or physical function limitation. We used MEDLINE, PSYCINFO, SOCA, EMBASE, bibliographies and expert consultation to select the articles, 78 of which met the selection criteria. Risk factors were categorized into 14 domains and coded by two independent abstractors. Based on the methodological quality of the statistical analyses between risk factors and functional outcomes (e.g. control for base-line functional status, control for confounding, attrition rate), the strength of evidence was derived for each risk factor. The association of functional decline with medical findings was also analyzed. The highest strength of evidence for an increased risk in functional status decline was found for (alphabetical order) cognitive impairment, depression, disease burden (comorbidity), increased and decreased body mass index, lower extremity functional limitation, low frequency of social contacts, low level of physical activity, no alcohol use compared to moderate use, poor self-perceived health, smoking and vision impairment. The review revealed that some risk factors (e.g. nutrition, physical environment) have been neglected in past research. This review will help investigators set priorities for future research of the Disablement Process, plan health and social services for elderly persons and develop more cost-effective programs for preventing disability among them.", "title": "" }, { "docid": "96af2e34acf9f1e9c0c57cc24795d0f9", "text": "Poker games provide a useful testbed for modern Artificial Intelligence techniques. Unlike many classical game domains such as chess and checkers, poker includes elements of imperfect information, stochastic events, and one or more adversarial agents to interact with. Furthermore, in poker it is possible to win or lose by varying degrees. Therefore, it can be advantageous to adapt ones’ strategy to exploit a weak opponent. A poker agent must address these challenges, acting in uncertain environments and exploiting other agents, in order to be highly successful. Arguably, poker games more closely resemble many real world problems than games with perfect information. In this brief paper, we outline Polaris, a Texas Hold’em poker program. Polaris recently defeated top human professionals at the Man vs. Machine Poker Championship and it is currently the reigning AAAI Computer Poker Competition winner in the limit equilibrium and no-limit events.", "title": "" }, { "docid": "80c9f1d983bc3ddfd73cdf2abc936600", "text": "Jazz guitar solos are improvised melody lines played on one instrument on top of a chordal accompaniment (comping). As the improvisation happens spontaneously, a reference score is non-existent, only a lead sheet. There are situations, however, when one would like to have the original melody lines in the form of notated music, see the Real Book. The motivation is either for the purpose of practice and imitation or for musical analysis. In this work, an automatic transcriber for jazz guitar solos is developed. It resorts to a very intuitive representation of tonal music signals: the pitchgram. No instrument-specific modeling is involved, so the transcriber should be applicable to other pitched instruments as well. Neither is there the need to learn any note profiles prior to or during the transcription. Essentially, the proposed transcriber is a decision tree, thus a classifier, with a depth of 3. It has a (very) low computational complexity and can be run on-line. 
The decision rules can be refined or extended with no or little musical education. The transcriber’s performance is evaluated on a set of ten jazz solo excerpts and compared with a state-of-the-art transcription system for the guitar plus PYIN. We achieve an improvement of 34 % w.r.t. the reference system and 19 % w.r.t. PYIN in terms of the F-measure. Another measure of accuracy, the error score, attests that the number of erroneous pitch detections is reduced by more than 50 % w.r.t. the reference system and by 45 % w.r.t. PYIN.", "title": "" }, { "docid": "c0cbea5f38a04e0d123fc51af30d08c0", "text": "This brief presents a high-efficiency current-regulated charge pump for a white light-emitting diode driver. The charge pump incorporates no series current regulator, unlike conventional voltage charge pump circuits. Output current regulation is accomplished by the proposed pumping current control. The experimental system, with two 1-muF flying and load capacitors, delivers a regulated 20-mA current from an input supply voltage of 2.8-4.2 V. The measured variation is less than 0.6% at a pumping frequency of 200 kHz. The active area of the designed chip is 0.43 mm2 in a 0.5-mum CMOS process.", "title": "" }, { "docid": "334e97a1f50b5081ac08651c1d7ed943", "text": "Veterans of all war eras have a high rate of chronic disease, mental health disorders, and chronic multi-symptom illnesses (CMI).(1-3) Many veterans report symptoms that affect multiple biological systems as opposed to isolated disease states. Standard medical treatments often target isolated disease states such as headaches, insomnia, or back pain and at times may miss the more complex, multisystem dysfunction that has been documented in the veteran population. Research has shown that veterans have complex symptomatology involving physical, cognitive, psychological, and behavioral disturbances, such as difficult to diagnose pain patterns, irritable bowel syndrome, chronic fatigue, anxiety, depression, sleep disturbance, or neurocognitive dysfunction.(2-4) Meditation and acupuncture are each broad-spectrum treatments designed to target multiple biological systems simultaneously, and thus, may be well suited for these complex chronic illnesses. The emerging literature indicates that complementary and integrative medicine (CIM) approaches augment standard medical treatments to enhance positive outcomes for those with chronic disease, mental health disorders, and CMI.(5-12.)", "title": "" }, { "docid": "a6a98d0599c1339c1f2c6a6c7525b843", "text": "We consider a generalized version of the Steiner problem in graphs, motivated by the wire routing phase in physical VLSI design: given a connected, undirected distance graph with required classes of vertices and Steiner vertices, find a shortest connected subgraph containing at least one vertex of each required class. We show that this problem is NP-hard, even if there are no Steiner vertices and the graph is a tree. Moreover, the same complexity result holds if the input class Steiner graph additionally is embedded in a unit grid, if each vertex has degree at most three, and each class consists of no more than three vertices. For similar restricted versions, we prove MAX SNP-hardness and we show that there exists no polynomial-time approximation algorithm with a constant bound on the relative error, unless P = NP. 
We propose two efficient heuristics computing different approximate solutions in time O(|E| + |V| log |V|) and in time O(c(|E| + |V| log |V|)), respectively, where E is the set of edges in the given graph, V is the set of vertices, and c is the number of classes. We present some promising implementation results.", "title": "" }, { "docid": "c9f2fd6bdcca5e55c5c895f65768e533", "text": "We implemented live-textured geometry model creation with immediate coverage feedback visualizations in AR on the Microsoft HoloLens. A user walking and looking around a physical space can create a textured model of the space, ready for remote exploration and AR collaboration. Out of the box, a HoloLens builds a triangle mesh of the environment while scanning and being tracked in a new environment. The mesh contains vertices, triangles, and normals, but not color. We take the video stream from the color camera and use it to color a UV texture to be mapped to the mesh. Due to the limited graphics memory of the HoloLens, we use a fixed-size texture. Since the mesh generation dynamically changes in real time, we use an adaptive mapping scheme that evenly distributes every triangle of the dynamic mesh onto the fixed-size texture and adapts to new geometry without compromising existing color data. Occlusion is also considered. The user can walk around their environment and continuously fill in the texture while growing the mesh in real-time. We describe our texture generation algorithm and illustrate benefits and limitations of our system with example modeling sessions. Having first-person immediate AR feedback on the quality of modeled physical infrastructure, both in terms of mesh resolution and texture quality, helps the creation of high-quality colored meshes with this standalone wireless device and a fixed memory footprint in real-time.", "title": "" }, { "docid": "160726aa34ba677292a2ae14666727e8", "text": "Child sex tourism is an obscure industry where the tourist's primary purpose is to engage in a sexual experience with a child. Under international legislation, tourism with the intent of having sexual relations with a minor is in violation of the UN Convention of the Rights of a Child. The intent and act is a crime and in violation of human rights. This paper examines child sex tourism in the Philippines, a major destination country for the purposes of child prostitution. The purpose is to bring attention to the atrocities that occur under the guise of tourism. It offers a definition of the crisis, a description of the victims and perpetrators, and a discussion of the social and cultural factors that perpetuate the problem. Research articles and reports from non-government organizations, advocacy groups, governments and educators were examined. Although definitional challenges did emerge, it was found that several of the articles and reports varied little in their definitions of child sex tourism and in the descriptions of the victims and perpetrators. A number of differences emerged that identified the social and cultural factors responsible for the creation and perpetuation of the problem.", "title": "" } ]
scidocsrr
dbd0d01702a50dcaab924ba4033ab378
An information theoretical approach to prefrontal executive function
[ { "docid": "5dde27787ee92c2e56729b25b9ca4311", "text": "The prefrontal cortex (PFC) subserves cognitive control: the ability to coordinate thoughts or actions in relation with internal goals. Its functional architecture, however, remains poorly understood. Using brain imaging in humans, we showed that the lateral PFC is organized as a cascade of executive processes from premotor to anterior PFC regions that control behavior according to stimuli, the present perceptual context, and the temporal episode in which stimuli occur, respectively. The results support an unified modular model of cognitive control that describes the overall functional organization of the human lateral PFC and has basic methodological and theoretical implications.", "title": "" } ]
[ { "docid": "594bbdf08b7c3d0a31b2b0f60e50bae3", "text": "This paper concerns the behavior of spatially extended dynamical systems —that is, systems with both temporal and spatial degrees of freedom. Such systems are common in physics, biology, and even social sciences such as economics. Despite their abundance, there is little understanding of the spatiotemporal evolution of these complex systems. ' Seemingly disconnected from this problem are two widely occurring phenomena whose very generality require some unifying underlying explanation. The first is a temporal effect known as 1/f noise or flicker noise; the second concerns the evolution of a spatial structure with scale-invariant, self-similar (fractal) properties. Here we report the discovery of a general organizing principle governing a class of dissipative coupled systems. Remarkably, the systems evolve naturally toward a critical state, with no intrinsic time or length scale. The emergence of the self-organized critical state provides a connection between nonlinear dynamics, the appearance of spatial self-similarity, and 1/f noise in a natural and robust way. A short account of some of these results has been published previously. The usual strategy in physics is to reduce a given problem to one or a few important degrees of freedom. The effect of coupling between the individual degrees of freedom is usually dealt with in a perturbative manner —or in a \"mean-field manner\" where the surroundings act on a given degree of freedom as an external field —thus again reducing the problem to a one-body one. In dynamics theory one sometimes finds that complicated systems reduce to a few collective degrees of freedom. This \"dimensional reduction'* has been termed \"selforganization, \" or the so-called \"slaving principle, \" and much insight into the behavior of dynamical systems has been achieved by studying the behavior of lowdimensional at tractors. On the other hand, it is well known that some dynamical systems act in a more concerted way, where the individual degrees of freedom keep each other in a more or less stab1e balance, which cannot be described as a \"perturbation\" of some decoupled state, nor in terms of a few collective degrees of freedom. For instance, ecological systems are organized such that the different species \"support\" each other in a way which cannot be understood by studying the individual constituents in isolation. The same interdependence of species also makes the ecosystem very susceptible to small changes or \"noise.\" However, the system cannot be too sensitive since then it could not have evolved into its present state in the first place. Owing to this balance we may say that such a system is \"critical. \" We shall see that this qualitative concept of criticality can be put on a firm quantitative basis. Such critical systems are abundant in nature. We shaB see that the dynamics of a critical state has a specific ternporal fingerprint, namely \"flicker noise, \" in which the power spectrum S(f) scales as 1/f at low frequencies. Flicker noise is characterized by correlations extended over a wide range of time scales, a clear indication of some sort of cooperative effect. Flicker noise has been observed, for example, in the light from quasars, the intensity of sunspots, the current through resistors, the sand flow in an hour glass, the flow of rivers such as the Nile, and even stock exchange price indices. ' All of these may be considered to be extended dynamical systems. 
Despite the ubiquity of flicker noise, its origin is not well understood. Indeed, one may say that because of its ubiquity, no proposed mechanism to data can lay claim as the single general underlying root of 1/f noise. We shall argue that flicker noise is in fact not noise but reflects the intrinsic dynamics of self-organized critical systems. Another signature of criticality is spatial selfsimilarity. It has been pointed out that nature is full of self-similar \"fractal\" structures, though the physical reason for this is not understood. \" Most notably, the whole universe is an extended dynamical system where a self-similar cosmic string structure has been claimed. Turbulence is a phenomenon where self-similarity is believed to occur in both space and time. Cooperative critical phenomena are well known in the context of phase transitions in equilibrium statistical mechanics. ' At the transition point, spatial selfsirnilarity occurs, and the dynamical response function has a characteristic power-law \"1/f\" behavior. (We use quotes because often flicker noise involves frequency spectra with dependence f ~ with P only roughly equal to 1.0.) Low-dimensional nonequilibrium dynamical systems also undergo phase transitions (bifurcations, mode locking, intermittency, etc.) where the properties of the attractors change. However, the critical point can be reached only by fine tuning a parameter (e.g. , temperature), and so may occur only accidentally in nature: It", "title": "" }, { "docid": "3fcce3664db5812689c121138e2af280", "text": "We examine and compare simulation-based algorithms for solving the agent scheduling problem in a multiskill call center. This problem consists in minimizing the total costs of agents under constraints on the expected service level per call type, per period, and aggregated. We propose a solution approach that combines simulation with integer or linear programming, with cut generation. In our numerical experiments with realistic problem instances, this approach performs better than all other methods proposed previously for this problem. We also show that the two-step approach, which is the standard method for solving this problem, sometimes yield solutions that are highly suboptimal and inferior to those obtained by our proposed method. 2009 Published by Elsevier B.V.", "title": "" }, { "docid": "63c2662fdac3258587c5b1baa2133df9", "text": "Automatic design via Bayesian optimization holds great promise given the constant increase of available data across domains. However, it faces difficulties from high-dimensional, potentially discrete, search spaces. We propose to probabilistically embed inputs into a lower dimensional, continuous latent space, where we perform gradient-based optimization guided by a Gaussian process. Building on variational autoncoders, we use both labeled and unlabeled data to guide the encoding and increase its accuracy. In addition, we propose an adversarial extension to render the latent representation invariant with respect to specific design attributes, which allows us to transfer these attributes across structures. We apply the framework both to a functional-protein dataset and to perform optimization of drag coefficients directly over high-dimensional shapes without incorporating domain knowledge or handcrafted features.", "title": "" }, { "docid": "072b17732d8b628d3536e7045cd0047d", "text": "In this paper, we propose a high-speed parallel 128 bit multiplier for Ghash Function in conjunction with its FPGA implementation. 
Through the use of Verilog the designs are evaluated by using Xilinx Vertax5 with 65nm technic and 30,000 logic cells. The highest throughput of 30.764Gpbs can be achieved on virtex5 with the consumption of 8864 slices LUT. The proposed design of the multiplier can be utilized as a design IP core for the implementation of the Ghash Function. The architecture of the multiplier can also apply in more general polynomial basis. Moreover it can be used as arithmetic module in other encryption field.", "title": "" }, { "docid": "561b37c506657693d27fa65341faf51e", "text": "Currently, much of machine learning is opaque, just like a “black box”. However, in order for humans to understand, trust and effectively manage the emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to explainable AI, which has the potential to generate more comprehensive explanations than existing approaches.", "title": "" }, { "docid": "f8e3b21fd5481137a80063e04e9b5488", "text": "On the basis of the notion that the ability to exert self-control is critical to the regulation of aggressive behaviors, we suggest that mindfulness, an aspect of the self-control process, plays a key role in curbing workplace aggression. In particular, we note the conceptual and empirical distinctions between dimensions of mindfulness (i.e., mindful awareness and mindful acceptance) and investigate their respective abilities to regulate workplace aggression. In an experimental study (Study 1), a multiwave field study (Study 2a), and a daily diary study (Study 2b), we established that the awareness dimension, rather than the acceptance dimension, of mindfulness plays a more critical role in attenuating the association between hostility and aggression. In a second multiwave field study (Study 3), we found that mindful awareness moderates the association between hostility and aggression by reducing the extent to which individuals use dysfunctional emotion regulation strategies (i.e., surface acting), rather than by reducing the extent to which individuals engage in dysfunctional thought processes (i.e., rumination). The findings are discussed in terms of the implications of differentiating the dimensions and mechanisms of mindfulness for regulating workplace aggression. (PsycINFO Database Record", "title": "" }, { "docid": "4502ba935124c2daa9a49fc24ec5865b", "text": "Medical image processing is the most challenging and emerging field now a day’s. In this field, detection of brain tumor from MRI brain scan has become one of the most challenging problems, due to complex structure of brain. The quantitative analysis of MRI brain tumor allows obtaining useful key indicators of disease progression. A computer aided diagnostic system has been proposed here for detecting the tumor texture in biological study. This is an attempt made which describes the proposed strategy for detection of tumor with the help of segmentation techniques in MATLAB; which incorporates preprocessing stages of noise removal, image enhancement and edge detection. Processing stages includes segmentation like intensity and watershed based segmentation, thresholding to extract the area of unwanted cells from the whole image. Here algorithms are proposed to calculate area and percentage of the tumor. 
Keywords— MRI, FCM, MKFCM, SVM, Otsu, threshold, fudge factor", "title": "" }, { "docid": "11c245ca7bc133155ff761374dfdea6e", "text": "In this paper, a modification of the PVD (Pixel Value Differencing) algorithm is used for image steganography in the spatial domain. It normalizes the secret data value with an encoding method to make the new pixel edge difference smaller among three neighbors (horizontal, vertical and diagonal), and embeds data only in regions with small intensity differences between pixels. The proposed algorithm shows a good improvement for both color and gray-scale images compared to other algorithms. Color image performance is better than that of gray images; however, in this work the focus is mainly on gray images. The strength of this scheme is that random hidden/secret data do not make any subtle differences to the stego-image compared to the original image. Bit plane slicing is used to analyze the maximum payload that has been embedded into the cover image securely. The simulation results show that the proposed algorithm performs better and shows consistent results for the PSNR and MSE values of any images, and also against steganalysis attacks.", "title": "" }, { "docid": "05b1be7a90432eff4b62675826b77e09", "text": "People invest time, attention, and emotion while engaging in various activities in the real-world, for either purposes of awareness or participation. Social media platforms such as Twitter offer tremendous opportunities for people to become engaged in such real-world events through information sharing and communicating about these events. However, little is understood about the factors that affect people’s Twitter engagement in such real-world events. In this paper, we address this question by first operationalizing a person’s Twitter engagement in real-world events such as posting, retweeting, or replying to tweets about such events. Next, we construct statistical models that examine multiple predictive factors associated with four different perspectives of users’ Twitter engagement, and quantify their potential influence on predicting the (i) presence; and (ii) degree – of the user’s engagement with 643 real-world events. We also consider the effect of these factors with respect to a finer granularization of the different categories of events. We find that the measures of people’s prior Twitter activities, topical interests, geolocation, and social network structures are all variously correlated to their engagement with real-world events.", "title": "" }, { "docid": "d6f322f4dd7daa9525f778ead18c8b5e", "text": "Face perception, perhaps the most highly developed visual skill in humans, is mediated by a distributed neural system in humans that is comprised of multiple, bilateral regions. We propose a model for the organization of this system that emphasizes a distinction between the representation of invariant and changeable aspects of faces. The representation of invariant aspects of faces underlies the recognition of individuals, whereas the representation of changeable aspects of faces, such as eye gaze, expression, and lip movement, underlies the perception of information that facilitates social communication. The model is also hierarchical insofar as it is divided into a core system and an extended system. The core system is comprised of occipitotemporal regions in extrastriate visual cortex that mediate the visual analysis of faces. 
In the core system, the representation of invariant aspects is mediated more by the face-responsive region in the fusiform gyrus, whereas the representation of changeable aspects is mediated more by the face-responsive region in the superior temporal sulcus. The extended system is comprised of regions from neural systems for other cognitive functions that can be recruited to act in concert with the regions in the core system to extract meaning from faces.", "title": "" }, { "docid": "8a1e94245d8fbdaf97402923d4dbc213", "text": "This is the first study to measure the 'sense of community' reportedly offered by the CrossFit gym model. A cross-sectional study adapted Social Capital and General Belongingness scales to compare perceptions of a CrossFit gym and a traditional gym. CrossFit gym members reported significantly higher levels of social capital (both bridging and bonding) and community belongingness compared with traditional gym members. However, regression analysis showed neither social capital, community belongingness, nor gym type was an independent predictor of gym attendance. Exercise and health professionals may benefit from evaluating further the 'sense of community' offered by gym-based exercise programmes.", "title": "" }, { "docid": "840d4b26eec402038b9b3462fc0a98ac", "text": "A bench model of the new generation intelligent universal transformer (IUT) has been recently developed for distribution applications. The distribution IUT employs high-voltage semiconductor device technologies along with multilevel converter circuits for medium-voltage grid connection. This paper briefly describes the basic operation of the IUT and its experimental setup. Performances under source and load disturbances are characterized with extensive tests using a voltage sag generator and various linear and nonlinear loads. Experimental results demonstrate that IUT input and output can avoid direct impact from its opposite side disturbances. The output voltage is well regulated when the voltage sag is applied to the input. The input voltage and current maintains clean sinusoidal and unity power factor when output is nonlinear load. Under load transients, the input and output voltages remain well regulated. These key features prove that the power quality performance of IUT is far superior to that of conventional copper-and-iron based transformers", "title": "" }, { "docid": "e6dba9e9ad2db632caed6b19b9f5a010", "text": "Efficient and accurate similarity searching on a large time series data set is an important but non- trivial problem. In this work, we propose a new approach to improve the quality of similarity search on time series data by combining symbolic aggregate approximation (SAX) and piecewise linear approximation. The approach consists of three steps: transforming real valued time series sequences to symbolic strings via SAX, pattern matching on the symbolic strings and a post-processing via Piecewise Linear Approximation.", "title": "" }, { "docid": "d6cf367f29ed1c58fb8fd0b7edf69458", "text": "Diabetes mellitus is a chronic disease that leads to complications including heart disease, stroke, kidney failure, blindness and nerve damage. Type 2 diabetes, characterized by target-tissue resistance to insulin, is epidemic in industrialized societies and is strongly associated with obesity; however, the mechanism by which increased adiposity causes insulin resistance is unclear. Here we show that adipocytes secrete a unique signalling molecule, which we have named resistin (for resistance to insulin). 
Circulating resistin levels are decreased by the anti-diabetic drug rosiglitazone, and increased in diet-induced and genetic forms of obesity. Administration of anti-resistin antibody improves blood sugar and insulin action in mice with diet-induced obesity. Moreover, treatment of normal mice with recombinant resistin impairs glucose tolerance and insulin action. Insulin-stimulated glucose uptake by adipocytes is enhanced by neutralization of resistin and is reduced by resistin treatment. Resistin is thus a hormone that potentially links obesity to diabetes.", "title": "" }, { "docid": "641d09ff15b731b679dbe3e9004c1578", "text": "In recent years, geological disposal of radioactive waste has focused on placement of highand intermediate-level wastes in mined underground caverns at depths of 500–800 m. Notwithstanding the billions of dollars spent to date on this approach, the difficulty of finding suitable sites and demonstrating to the public and regulators that a robust safety case can be developed has frustrated attempts to implement disposal programmes in several countries, and no disposal facility for spent nuclear fuel exists anywhere. The concept of deep borehole disposal was first considered in the 1950s, but was rejected as it was believed to be beyond existing drilling capabilities. Improvements in drilling and associated technologies and advances in sealing methods have prompted a re-examination of this option for the disposal of high-level radioactive wastes, including spent fuel and plutonium. Since the 1950s, studies of deep boreholes have involved minimal investment. However, deep borehole disposal offers a potentially safer, more secure, cost-effective and environmentally sound solution for the long-term management of high-level radioactive waste than mined repositories. Potentially it could accommodate most of the world’s spent fuel inventory. This paper discusses the concept, the status of existing supporting equipment and technologies and the challenges that remain.", "title": "" }, { "docid": "ab677299ffa1e6ae0f65daf5de75d66c", "text": "This paper proposes a new theory of the relationship between the sentence processing mechanism and the available computational resources. This theory--the Syntactic Prediction Locality Theory (SPLT)--has two components: an integration cost component and a component for the memory cost associated with keeping track of obligatory syntactic requirements. Memory cost is hypothesized to be quantified in terms of the number of syntactic categories that are necessary to complete the current input string as a grammatical sentence. Furthermore, in accordance with results from the working memory literature both memory cost and integration cost are hypothesized to be heavily influenced by locality (1) the longer a predicted category must be kept in memory before the prediction is satisfied, the greater is the cost for maintaining that prediction; and (2) the greater the distance between an incoming word and the most local head or dependent to which it attaches, the greater the integration cost. 
The SPLT is shown to explain a wide range of processing complexity phenomena not previously accounted for under a single theory, including (1) the lower complexity of subject-extracted relative clauses compared to object-extracted relative clauses, (2) numerous processing overload effects across languages, including the unacceptability of multiply center-embedded structures, (3) the lower complexity of cross-serial dependencies relative to center-embedded dependencies, (4) heaviness effects, such that sentences are easier to understand when larger phrases are placed later and (5) numerous ambiguity effects, such as those which have been argued to be evidence for the Active Filler Hypothesis.", "title": "" }, { "docid": "e7f91b90eab54dfd7f115a3a0225b673", "text": "The recent trend of outsourcing network functions, aka. middleboxes, raises confidentiality and integrity concern on redirected packet, runtime state, and processing result. The outsourced middleboxes must be protected against cyber attacks and malicious service provider. It is challenging to simultaneously achieve strong security, practical performance, complete functionality and compatibility. Prior software-centric approaches relying on customized cryptographic primitives fall short of fulfilling one or more desired requirements. In this paper, after systematically addressing key challenges brought to the fore, we design and build a secure SGX-assisted system, LightBox, which supports secure and generic middlebox functions, efficient networking, and most notably, lowoverhead stateful processing. LightBox protects middlebox from powerful adversary, and it allows stateful network function to run at nearly native speed: it adds only 3μs packet processing delay even when tracking 1.5M concurrent flows.", "title": "" }, { "docid": "684b9d64f4476a6b9dd3df1bd18bcb1d", "text": "We present the cases of three children with patent ductus arteriosus (PDA), pulmonary arterial hypertension (PAH), and desaturation. One of them had desaturation associated with atrial septal defect (ASD). His ASD, PAH, and desaturation improved after successful device closure of the PDA. The other two had desaturation associated with Down syndrome. One had desaturation only at room air oxygen (21% oxygen) but well saturated with 100% oxygen, subsequently underwent successful device closure of the PDA. The other had experienced desaturation at a younger age but spontaneously recovered when he was older, following attempted device closure of the PDA, with late embolization of the device.", "title": "" }, { "docid": "527e750a6047100cba1f78a3036acb9b", "text": "This paper presents a Generative Adversarial Network (GAN) to model multi-turn dialogue generation, which trains a latent hierarchical recurrent encoder-decoder simultaneously with a discriminative classifier that make the prior approximate to the posterior. Experiments show that our model achieves better results.", "title": "" }, { "docid": "27ddea786e06ffe20b4f526875cdd76b", "text": "It , is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. 
Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. Freud radically shifted the context. He recognized, as myth, folklore, and common sense had long understood, that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming. Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see ACTIVATION-SYNTHESIS HYPOTHESIS.) Contemporary Dream Research During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection. Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles, dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic (waking up) states, when REMs are not present. The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and although of great interest to the study of the mind-body problem, these findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment.", "title": "" } ]
scidocsrr
39ea6aeca6f9ce1124ba9e0bfd384686
Causal video object segmentation from persistence of occlusions
[ { "docid": "231554e78d509e7bca2dfd4280b411bb", "text": "Layered models provide a compelling approach for estimating image motion and segmenting moving scenes. Previous methods, however, have failed to capture the structure of complex scenes, provide precise object boundaries, effectively estimate the number of layers in a scene, or robustly determine the depth order of the layers. Furthermore, previous methods have focused on optical flow between pairs of frames rather than longer sequences. We show that image sequences with more frames are needed to resolve ambiguities in depth ordering at occlusion boundaries; temporal layer constancy makes this feasible. Our generative model of image sequences is rich but difficult to optimize with traditional gradient descent methods. We propose a novel discrete approximation of the continuous objective in terms of a sequence of depth-ordered MRFs and extend graph-cut optimization methods with new “moves” that make joint layer segmentation and motion estimation feasible. Our optimizer, which mixes discrete and continuous optimization, automatically determines the number of layers and reasons about their depth ordering. We demonstrate the value of layered models, our optimization strategy, and the use of more than two frames on both the Middlebury optical flow benchmark and the MIT layer segmentation benchmark.", "title": "" } ]
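The layered-model passage above frames motion analysis as assigning every pixel to one of several depth-ordered layers, each with its own motion, with occlusion reasoning resolving the ambiguous pixels. As a rough, self-contained illustration of the layer-assignment step only (integer translations stand in for the paper's affine or flow fields, and there is no MRF smoothing, depth ordering, or graph-cut optimization, so this is a simplified sketch rather than the authors' method), the following Python/NumPy fragment labels each pixel with whichever candidate motion best predicts the second frame; all function and variable names here are illustrative.

```python
import numpy as np

def warp_translate(img, dx, dy):
    """Shift an image by an integer offset (dx, dy); exposed borders become NaN."""
    out = np.full(img.shape, np.nan)
    h, w = img.shape
    ys_dst = slice(max(dy, 0), h + min(dy, 0))
    xs_dst = slice(max(dx, 0), w + min(dx, 0))
    ys_src = slice(max(-dy, 0), h + min(-dy, 0))
    xs_src = slice(max(-dx, 0), w + min(-dx, 0))
    out[ys_dst, xs_dst] = img[ys_src, xs_src]
    return out

def assign_layers(frame0, frame1, motions):
    """Label each pixel with the index of the candidate motion that best explains frame1."""
    residuals = []
    for dx, dy in motions:
        pred = warp_translate(frame0, dx, dy)
        r = np.abs(pred - frame1)
        r[np.isnan(r)] = np.inf  # pixels the warp cannot explain get infinite cost
        residuals.append(r)
    return np.argmin(np.stack(residuals), axis=0)

# Toy scene: static background (motion 0) and a bright square moving 2 px to the right (motion 1).
f0 = np.zeros((32, 32)); f0[10:16, 5:11] = 1.0
f1 = np.zeros((32, 32)); f1[10:16, 7:13] = 1.0
labels = assign_layers(f0, f1, motions=[(0, 0), (2, 0)])
print(np.unique(labels, return_counts=True))
```

Pixels that both motions explain equally well, typically the regions uncovered or occluded by the moving square, are exactly the ambiguities that the paper's depth ordering and temporal layer constancy are meant to resolve.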
[ { "docid": "7b6cf139cae3e9dae8a2886ddabcfef0", "text": "An enhanced automated material handling system (AMHS) that uses a local FOUP buffer at each tool is presented as a method of enabling lot size reduction and parallel metrology sampling in the photolithography (litho) bay. The local FOUP buffers can be integrated with current OHT AMHS systems in existing fabs with little or no change to the AMHS or process equipment. The local buffers enhance the effectiveness of the OHT by eliminating intermediate moves to stockers, increasing the move rate capacity by 15-20%, and decreasing the loadport exchange time to 30 seconds. These enhancements can enable the AMHS to achieve the high move rates compatible with lot size reduction down to 12-15 wafers per FOUP. The implementation of such a system in a photolithography bay could result in a 60-74% reduction in metrology delay time, which is the time between wafer exposure at a litho tool and collection of metrology and inspection data.", "title": "" }, { "docid": "07172e8a37f21b8c6629c0a30da63bd3", "text": "As one of the most influential social media platforms, microblogging is becoming increasingly popular in the last decades. Each day a large amount of events appear and spread in microblogging. The spreading of events and corresponding comments on them can greatly influence the public opinion. It is practical important to discover new emerging events in microblogging and predict their future popularity. Traditional event detection and information diffusion models cannot effectively handle our studied problem, because most existing methods focus only on event detection but ignore to predict their future trend. In this paper, we propose a new approach to detect burst novel events and predict their future popularity simultaneously. Specifically, we first detect events from online microblogging stream by utilizing multiple types of information, i.e., term frequency, and user's social relation. Meanwhile, the popularity of detected event is predicted through a proposed diffusion model which takes both the content and user information of the event into account. Extensive evaluations on two real-world datasets demonstrate the effectiveness of our approach on both event detection and their popularity", "title": "" }, { "docid": "a21513f9cf4d5a0e6445772941e9fba2", "text": "Superficial dorsal penile vein thrombosis was diagnosed 8 times in 7 patients between 19 and 40 years old (mean age 27 years). All patients related the onset of the thrombosis to vigorous sexual intercourse. No other etiological medications, drugs or constricting devices were implicated. Three patients were treated acutely with anti-inflammatory medications, while 4 were managed expectantly. The mean interval to resolution of symptoms was 7 weeks. Followup ranged from 3 to 30 months (mean 11) at which time all patients noticed normal erectile function. Only 1 patient had recurrent thrombosis 3 months after the initial episode, again related to intercourse. We conclude that this is a benign self-limited condition. Anti-inflammatory agents are useful for acute discomfort but they do not affect the rate of resolution.", "title": "" }, { "docid": "e913d5a0d898df3db28b97b27757b889", "text": "Speech-language pathologists tend to rely on the noninstrumental swallowing evaluation in making recommendations about a patient’s diet and management plan. 
The present study was designed to examine the sensitivity and specificity of the accuracy of using the chin-down posture during the clinical/bedside swallowing assessment. In 15 patients with acute stroke and clinically suspected oropharyngeal dysphagia, the correlation between clinical and videofluoroscopic findings was examined. Results identified that there is a difference in outcome prediction using the chin-down posture during the clinical/bedside assessment of swallowing compared to assessment by videofluoroscopy. Results are discussed relative to statistical and clinical perspectives, including site of lesion and factors to be considered in the design of an overall treatment plan for a patient with disordered swallowing.", "title": "" }, { "docid": "ac740402c3e733af4d690e34e567fabe", "text": "We address the problem of semantic segmentation: classifying each pixel in an image according to the semantic class it belongs to (e.g. dog, road, car). Most existing methods train from fully supervised images, where each pixel is annotated by a class label. To reduce the annotation effort, recently a few weakly supervised approaches emerged. These require only image labels indicating which classes are present. Although their performance reaches a satisfactory level, there is still a substantial gap between the accuracy of fully and weakly supervised methods. We address this gap with a novel active learning method specifically suited for this setting. We model the problem as a pairwise CRF and cast active learning as finding its most informative nodes. These nodes induce the largest expected change in the overall CRF state, after revealing their true label. Our criterion is equivalent to maximizing an upper-bound on accuracy gain. Experiments on two data-sets show that our method achieves 97% percent of the accuracy of the corresponding fully supervised model, while querying less than 17% of the (super-)pixel labels.", "title": "" }, { "docid": "c56c71775a0c87f7bb6c59d6607e5280", "text": "A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.", "title": "" }, { "docid": "00f31f21742a843ce6c4a00f3f6e6259", "text": "Recent developments in digital technologies bring about considerable business opportunities but also impose significant challenges on firms in all industries. 
While some industries, e.g., newspapers, have already profoundly reorganized the mechanisms of value creation, delivery, and capture during the course of digitalization (Karimi & Walter, 2015, 2016), many process-oriented and asset intensive industries have not yet fully evaluated and exploited the potential applications (Rigby, 2014). Although the process industries have successfully used advancements in technologies to optimize processes in the past (Kim et al., 2011), digitalization poses an unprecedented shift in technology that exceeds conventional technological evolution (Svahn et al., 2017). Driven by augmented processing power, connectivity of devices (IoT), advanced data analytics, and sensor technology, innovation activities in the process industries now break away from established innovation paths (Svahn et al., 2017; Tripsas, 2009). In contrast to prior innovations that were primarily bound to physical devices, new products are increasingly embedded into systems of value creation that span the physical and digital world (Parmar et al., 2014; Rigby, 2014; Yoo et al., 2010a). On this new playing field, firms and researchers are jointly interested in the organizational characteristics and capabilities that are required to gain a competitive advantage (e.g. Fink, 2011). Whereas prior studies cover the effect of digital transformation on innovation in various industries like newspaper (Karimi and Walter, 2015, 2016), automotive (Henfridsson and Yoo, 2014; Svahn et al., 2017), photography (Tripsas, 2009), and manufacturing (Jonsson et al., 2008), there is a relative dearth of studies that cover the impact of digital transformation in the process industries (Westergren and Holmström, 2012). The process industries are characterized by asset and research intensity, strong integration into physical locations, and often include value chains that are complex and feature aspects of rigidity (Lager Research Paper Digitalization in the process industries – Evidence from the German water industry", "title": "" }, { "docid": "c55e7c3825980d0be4546c7fadc812fe", "text": "Individual graphene oxide sheets subjected to chemical reduction were electrically characterized as a function of temperature and external electric fields. The fully reduced monolayers exhibited conductivities ranging between 0.05 and 2 S/cm and field effect mobilities of 2-200 cm2/Vs at room temperature. Temperature-dependent electrical measurements and Raman spectroscopic investigations suggest that charge transport occurs via variable range hopping between intact graphene islands with sizes on the order of several nanometers. Furthermore, the comparative study of multilayered sheets revealed that the conductivity of the undermost layer is reduced by a factor of more than 2 as a consequence of the interaction with the Si/SiO2 substrate.", "title": "" }, { "docid": "06f9780257311891f54c5d0c03e73c1a", "text": "This essay extends Simon's arguments in the Sciences of the Artificial to a critical examination of how theorizing in Information Technology disciplines should occur. The essay is framed around a number of fundamental questions that relate theorizing in the artificial sciences to the traditions of the philosophy of science. Theorizing in the artificial sciences is contrasted with theorizing in other branches of science and the applicability of the scientific method is questioned. 
The paper argues that theorizing should be considered in a holistic manner that links two modes of theorizing: an interior mode with the how of artifact construction studied and an exterior mode with the what of existing artifacts studied. Unlike some representations in the design science movement the paper argues that the study of artifacts once constructed can not be passed back uncritically to the methods of traditional science. Seven principles for creating knowledge in IT disciplines are derived: (i) artifact system centrality; (ii) artifact purposefulness; (iii) need for design theory; (iv) induction and abduction in theory building; (v) artifact construction as theory building; (vi) interior and exterior modes for theorizing; and (viii) issues with generality. The implicit claim is that consideration of these principles will improve knowledge creation and theorizing in design disciplines, for both design science researchers and also for researchers using more traditional methods. Further, attention to these principles should lead to the creation of more useful and relevant knowledge.", "title": "" }, { "docid": "298cee7d5283cae1debcaf40ce18eb42", "text": "Fluidic circuits made up of tiny chambers, conduits, and membranes can be fabricated in soft substrates to realize pressure-based sequential logic functions. Additional chambers in the same substrate covered with thin membranes can function as bubble-like tactile features. Sequential addressing of bubbles with fluidic logic enables just two external electronic valves to control of any number of tactile features by \"clocking in\" pressure states one at a time. But every additional actuator added to a shift register requires an additional clock pulse to address, so that the display refresh rate scales inversely with the number of actuators in an array. In this paper, we build a model of a fluidic logic circuit that can be used for sequential addressing of bubble actuators. The model takes the form of a hybrid automaton combining the discrete dynamics of valve switching and the continuous dynamics of compressible fluid flow through fluidic resistors (conduits) and capacitors (chambers). When parameters are set according to the results of system identification experiments on a physical prototype, pressure trajectories and propagation delays predicted by simulation of the hybrid automaton compare favorably to experiment. The propagation delay in turn determines the maximum clock rate and associated refresh rate for a refreshable braille display intended for rendering a full page of braille text or tactile graphics.", "title": "" }, { "docid": "34c3ba06f9bffddec7a08c8109c7f4b9", "text": "The role of e-learning technologies entirely depends on the acceptance and execution of required-change in the thinking and behaviour of the users of institutions. The research are constantly reporting that many e-learning projects are falling short of their objectives due to many reasons but on the top is the user resistance to change according to the digital requirements of new era. It is argued that the suitable way for change management in e-learning environment is the training and persuading of users with a view to enhance their digital literacy and thus gradually changing the users’ attitude in positive direction. This paper discusses change management in transition to e-learning system considering pedagogical, cost and technical implications. 
It also discusses challenges and opportunities for integrating these technologies in higher learning institutions with examples from Turkey GATA (Gülhane Askeri Tıp Akademisi-Gülhane Military Medical Academy).", "title": "" }, { "docid": "9a79a9b2c351873143a8209d37b46f64", "text": "The authors review research on police effectiveness in reducing crime, disorder, and fear in the context of a typology of innovation in police practices. That typology emphasizes two dimensions: one concerning the diversity of approaches, and the other, the level of focus. The authors find that little evidence supports the standard model of policing—low on both of these dimensions. In contrast, research evidence does support continued investment in police innovations that call for greater focus and tailoring of police efforts, combined with an expansion of the tool box of policing beyond simple law enforcement. The strongest evidence of police effectiveness in reducing crime and disorder is found in the case of geographically focused police practices, such as hot-spots policing. Community policing practices are found to reduce fear of crime, but the authors do not find consistent evidence that community policing (when it is implemented without models of problem-oriented policing) affects either crime or disorder. A developing body of evidence points to the effectiveness of problemoriented policing in reducing crime, disorder, and fear. More generally, the authors find that many policing practices applied broadly throughout the United States either have not been the subject of systematic research or have been examined in the context of research designs that do not allow practitioners or policy makers to draw very strong conclusions.", "title": "" }, { "docid": "89552cbc1d432bdbf26b4213b6fc80cc", "text": "Tuberculosis, also called TB, is currently a major health hazard due to multidrug-resistant forms of bacilli. Global efforts are underway to eradicate TB using new drugs with new modes of action, higher activity, and fewer side effects in combination with vaccines. For this reason, unexplored new sources and previously explored sources were examined and around 353 antimycobacterial compounds (Nat Prod Rep 2007; 24: 278-297) 7 have been previously reported. To develop drugs from these new sources, additional work is required for preclinical and clinical results. Since ancient times, different plant part extracts have been used as traditional medicines against diseases including tuberculosis. This knowledge may be useful in developing future powerful drugs. Plant natural products are again becoming important in this regard. In this review, we report 127 antimycobacterial compounds and their antimycobacterial activities. Of these, 27 compounds had a minimum inhibitory concentration of < 10 µg/mL. In some cases, the mechanism of activity has been determined. We hope that some of these compounds may eventually develop into effective new drugs against tuberculosis.", "title": "" }, { "docid": "049674034f41b359a7db7b3c5ba7c541", "text": "This paper extends and contributes to emerging debates on the validation of interpretive research (IR) in management accounting. We argue that IR has the potential to produce not only subjectivist, emic understandings of actors’ meanings, but also explanations, characterised by a certain degree of ‘‘thickness”. 
Mobilising the key tenets of the modern philosophical theory of explanation and the notion of abduction, grounded in pragmatist epistemology, we explicate how explanations may be developed and validated, yet remaining true to the core premises of IR. We focus on the intricate relationship between two arguably central aspects of validation in IR, namely authenticity and plausibility. Working on the assumption that validation is an important, but potentially problematic concern in all serious scholarly research, we explore whether and how validation efforts are manifest in IR using two case studies as illustrative examples. Validation is seen as an issue of convincing readers of the authenticity of research findings whilst simultaneously ensuring that explanations are deemed plausible. Whilst the former is largely a matter of preserving the emic qualities of research accounts, the latter is intimately linked to the process of abductive reasoning, whereby different theories are applied to advance thick explanations. This underscores the view of validation as a process, not easily separated from the ongoing efforts of researchers to develop explanations as research projects unfold and far from reducible to mere technicalities of following pre-specified criteria presumably minimising various biases. These properties detract from a view of validation as conforming to prespecified, stable, and uniform criteria and allow IR to move beyond the ‘‘crisis of validity” arguably prevailing in the social sciences. 2009 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "47d6d85b9b902d7078c6daf9402f4b4c", "text": "Doxorubicin (DOX) is a very effective anticancer agent. However, in its pure form, its application is limited by significant cardiotoxic side effects. The purpose of this study was to develop a controllably activatable chemotherapy prodrug of DOX created by blocking its free amine group with a biotinylated photocleavable blocking group (PCB). An n-hydroxy succunamide protecting group on the PCB allowed selective binding at the DOX active amine group. The PCB included an ortho-nitrophenyl group for photo cleavability and a water-soluble glycol spacer arm ending in a biotin group for enhanced membrane interaction. This novel DOX-PCB prodrug had a 200-fold decrease in cytotoxicity compared to free DOX and could release active DOX upon exposure to UV light at 350 nm. Unlike DOX, DOX-PCB stayed in the cell cytoplasm, did not enter the nucleus, and did not stain the exposed DNA during mitosis. Human liver microsome incubation with DOX-PCB indicated stability against liver metabolic breakdown. The development of the DOX-PCB prodrug demonstrates the possibility of using light as a method of prodrug activation in deep internal tissues without relying on inherent physical or biochemical differences between the tumor and healthy tissue for use as the trigger.", "title": "" }, { "docid": "562cf2d0bc59f0fde4d7377f1d5058a2", "text": "The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. 
We conclude by highlighting shared themes that may be key for advancing future research in both fields.", "title": "" }, { "docid": "0c8b192807a6728be21e6a19902393c0", "text": "The balance between facilitation and competition is likely to change with age due to the dynamic nature of nutrient, water and carbon cycles, and light availability during stand development. These processes have received attention in harsh, arid, semiarid and alpine ecosystems but are rarely examined in more productive communities, in mixed-species forest ecosystems or in long-term experiments spanning more than a decade. The aim of this study was to examine how inter- and intraspecific interactions between Eucalyptus globulus Labill. mixed with Acacia mearnsii de Wildeman trees changed with age and productivity in a field experiment in temperate south-eastern Australia. Spatially explicit neighbourhood indices were calculated to quantify tree interactions and used to develop growth models to examine how the tree interactions changed with time and stand productivity. Interspecific influences were usually less negative than intraspecific influences, and their difference increased with time for E. globulus and decreased with time for A. mearnsii. As a result, the growth advantages of being in a mixture increased with time for E. globulus and decreased with time for A. mearnsii. The growth advantage of being in a mixture also decreased for E. globulus with increasing stand productivity, showing that spatial as well as temporal dynamics in resource availability influenced the magnitude and direction of plant interactions.", "title": "" }, { "docid": "a8b7d6b3a43d39c8200e7787c3d58a0e", "text": "Being Scrum the agile software development framework most commonly used in the software industry, its applicability is attracting great attention to the academia. That is why this topic is quite often included in computer science and related university programs. In this article, we present a course design of a Software Engineering course where an educational framework and an open-source agile project management tool were used to develop real-life projects by undergraduate students. During the course, continuous guidance was given by the teaching staff to facilitate the students' learning of Scrum. Results indicate that students find it easy to use the open-source tool and helpful to apply Scrum to a real-life project. However, the unavailability of the client and conflicts among the team members have negative impact on the realization of projects. The guidance given to students along the course helped identify five common issues faced by students through the learning process.", "title": "" }, { "docid": "f02bd91e8374506aa4f8a2107f9545e6", "text": "In an online survey with two cohorts (2009 and 2011) of undergraduates in dating relationshi ps, we examined how attachment was related to communication technology use within romantic relation ships. Participants reported on their attachment style and frequency of in-person communication as well as phone, text messaging, social network site (SNS), and electronic mail usage with partners. Texting and SNS communication were more frequent in 2011 than 2009. Attachment avoidance was related to less frequent phone use and texting, and greater email usage. Electronic communication channels (phone and texting) were related to positive relationship qualities, however, once accounting for attachment, only moderated effects were found. 
Interactions indicated texting was linked to more positive relationships for highly avoidant (but not less avoidant) participants. Additionally, email use was linked to more conflict for highly avoidant (but not less avoidant) participants. Finally, greater use of a SNS was positively associated with intimacy/support for those higher (but not lower) on attachment anxiety. This study illustrates how attachment can help to explain why the use of specific technology-based communication channels within romantic relationships may mean different things to different people, and that certain channels may be especially relevant in meeting insecurely attached individuals’ needs. 2013 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "41ceb618f20b82eaa65588045b609dcb", "text": "In decision making under uncertainty there are two main questions that need to be evaluated: i) What are the future consequences and associated uncertainties of an action, and ii) what is a good (or right) decision or action. Philosophically these issues are categorised as epistemic questions (i.e. questions of knowledge) and ethical questions (i.e. questions of moral and norms). This paper discusses the second issue, and evaluates different bases for a good decision, using different ethical theories as a starting point. This includes the utilitarian ethics of Bentley and Mills, and deontological ethics of Kant, Rawls and Habermas. The paper addresses various principles in risk management and risk related decision making, including cost benefit analysis, minimum safety criterion, the ALARP principle and the precautionary principle.", "title": "" } ]
scidocsrr
b0bf55e123a1d0efe1fd44d5b3c1e4e9
Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud
[ { "docid": "70cc8c058105b905eebdf941ca2d3f2e", "text": "Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings forth many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as data owners. To keep sensitive user data confidential against untrusted servers, existing solutions usually apply cryptographic methods by disclosing data decryption keys only to authorized users. However, in doing so, these solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control actually still remains unresolved. This paper addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and uniquely combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy re-encryption. Our proposed scheme also has salient properties of user access privilege confidentiality and user secret key accountability. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under existing security models.", "title": "" } ]
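The passage above enforces fine-grained access to outsourced data by combining attribute-based encryption with proxy re-encryption, so that only users whose attributes satisfy an owner-chosen access structure can decrypt. The sketch below is a plain Python illustration of the access-structure logic only (a small AND/OR policy tree evaluated against a user's attribute set before a file key is released); it deliberately omits the actual cryptography (ABE key generation, proxy re-encryption, lazy re-encryption), and every name and policy shown is hypothetical rather than taken from the cited scheme.

```python
from dataclasses import dataclass

# A policy is either an attribute name (str) or a tuple ("AND"|"OR", left_policy, right_policy).
def satisfies(attrs, policy):
    """Return True if the attribute set satisfies the access structure."""
    if isinstance(policy, str):
        return policy in attrs
    gate, left, right = policy
    if gate == "AND":
        return satisfies(attrs, left) and satisfies(attrs, right)
    if gate == "OR":
        return satisfies(attrs, left) or satisfies(attrs, right)
    raise ValueError(f"unknown gate: {gate}")

@dataclass
class OutsourcedFile:
    ciphertext: bytes  # produced by any symmetric cipher; the encryption itself is omitted here
    policy: object     # access structure chosen by the data owner

def request_file_key(user_attrs, outsourced, file_key):
    """Release the file's decryption key only when the requester's attributes satisfy its policy."""
    if not satisfies(user_attrs, outsourced.policy):
        raise PermissionError("attributes do not satisfy the file's access policy")
    return file_key

# Hypothetical example: readable by (cardiology AND doctor) OR auditor.
policy = ("OR", ("AND", "cardiology", "doctor"), "auditor")
record = OutsourcedFile(ciphertext=b"...", policy=policy)
print(satisfies({"doctor", "cardiology"}, policy))  # True
print(satisfies({"nurse", "cardiology"}, policy))   # False
```

In the scheme described above this check is not performed by a trusted party at all; it is enforced by the ciphertext and key structure themselves, which is what lets the data owner delegate re-encryption work to the untrusted cloud servers.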
[ { "docid": "8f78f2efdd2fecaf32fbc7f5ffa79218", "text": "Evolutionary population dynamics (EPD) deal with the removal of poor individuals in nature. It has been proven that this operator is able to improve the median fitness of the whole population, a very effective and cheap method for improving the performance of meta-heuristics. This paper proposes the use of EPD in the grey wolf optimizer (GWO). In fact, EPD removes the poor search agents of GWO and repositions them around alpha, beta, or delta wolves to enhance exploitation. The GWO is also required to randomly reinitialize its worst search agents around the search space by EPD to promote exploration. The proposed GWO–EPD algorithm is benchmarked on six unimodal and seven multi-modal test functions. The results are compared to the original GWO algorithm for verification. It is demonstrated that the proposed operator is able to significantly improve the performance of the GWO algorithm in terms of exploration, local optima avoidance, exploitation, local search, and convergence rate.", "title": "" }, { "docid": "8905bd760b0c72fbfe4fbabd778ff408", "text": "Boredom and low levels of task engagement while driving can pose road safety risks, e.g., inattention during low traffic, routine trips, or semi-automated driving. Digital technology interventions that increase task engagement, e.g., through performance feedback, increased challenge, and incentives (often referred to as ‘gamification’), could therefore offer safety benefits. To explore the impact of such interventions, we conducted experiments in a highfidelity driving simulator with thirty-two participants. In two counterbalanced conditions (control and intervention), we compared driving behaviour, physiological arousal, and subjective experience. Results indicate that the gamified boredom intervention reduced unsafe coping mechanisms such as speeding while promoting anticipatory driving. We can further infer that the intervention not only increased one’s attention and arousal during the intermittent gamification challenges, but that these intermittent stimuli may also help sustain one’s attention and arousal in between challenges and throughout a drive. At the same time, the gamified condition led to slower hazard reactions and short off-road glances. Our contributions deepen our understanding of driver boredom and pave the way for engaging interventions for safety critical tasks.", "title": "" }, { "docid": "d5d96493b34cfbdf135776e930ec5979", "text": "We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations. 
Each path yields interval bounds that can be summed up with a \"coverage\" bound to yield an interval that encloses the probability of assertion for the program as a whole. We demonstrate promising results on a suite of benchmarks from many different sources including robotic manipulators and medical decision making programs.", "title": "" }, { "docid": "90c3543eca7a689188725e610e106ce9", "text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. 
In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.", "title": "" }, { "docid": "e49f9ad79d3d4d31003c0cda7d7d49c5", "text": "Greater trochanter pain syndrome due to tendinopathy or bursitis is a common cause of hip pain. The previously reported magnetic resonance (MR) findings of trochanteric tendinopathy and bursitis are peritrochanteric fluid and abductor tendon abnormality. We have often noted peritrochanteric high T2 signal in patients without trochanteric symptoms. The purpose of this study was to determine whether the MR findings of peritrochanteric fluid or hip abductor tendon pathology correlate with trochanteric pain. We retrospectively reviewed 131 consecutive MR examinations of the pelvis (256 hips) for T2 peritrochanteric signal and abductor tendon abnormalities without knowledge of the clinical symptoms. Any T2 peritrochanteric abnormality was characterized by size as tiny, small, medium, or large; by morphology as feathery, crescentic, or round; and by location as bursal or intratendinous. The clinical symptoms of hip pain and trochanteric pain were compared to the MR findings on coronal, sagittal, and axial T2 sequences using chi-square or Fisher’s exact test with significance assigned as p < 0.05. 
Clinical symptoms of trochanteric pain syndrome were present in only 16 of the 256 hips. All 16 hips with trochanteric pain and 212 (88%) of 240 without trochanteric pain had peritrochanteric abnormalities (p = 0.15). Eighty-eight percent of hips with trochanteric symptoms had gluteus tendinopathy while 50% of those without symptoms had such findings (p = 0.004). Other than tendinopathy, there was no statistically significant difference between hips with or without trochanteric symptoms and the presence of peritrochanteric T2 abnormality, its size or shape, and the presence of gluteus medius or minimus partial thickness tears. Patients with trochanteric pain syndrome always have peritrochanteric T2 abnormalities and are significantly more likely to have abductor tendinopathy on magnetic resonance imaging (MRI). However, although the absence of peritrochanteric T2 MR abnormalities makes trochanteric pain syndrome unlikely, detection of these abnormalities on MRI is a poor predictor of trochanteric pain syndrome as these findings are present in a high percentage of patients without trochanteric pain.", "title": "" }, { "docid": "8aa305f217314d60ed6c9f66d20a7abf", "text": "The circadian timing system drives daily rhythmic changes in drug metabolism and controls rhythmic events in cell cycle, DNA repair, apoptosis, and angiogenesis in both normal tissue and cancer. Rodent and human studies have shown that the toxicity and anticancer activity of common cancer drugs can be significantly modified by the time of administration. Altered sleep/activity rhythms are common in cancer patients and can be disrupted even more when anticancer drugs are administered at their most toxic time. Disruption of the sleep/activity rhythm accelerates cancer growth. The complex circadian time-dependent connection between host, cancer and therapy is further impacted by other factors including gender, inter-individual differences and clock gene polymorphism and/or down regulation. It is important to take circadian timing into account at all stages of new drug development in an effort to optimize the therapeutic index for new cancer drugs. Better measures of the individual differences in circadian biology of host and cancer are required to further optimize the potential benefit of chronotherapy for each individual patient.", "title": "" }, { "docid": "9164dab8c4c55882f8caecc587c32eb1", "text": "We suggest an approach to exploratory analysis of diverse types of spatiotemporal data with the use of clustering and interactive visual displays. We can apply the same generic clustering algorithm to different types of data owing to the separation of the process of grouping objects from the process of computing distances between the objects. In particular, we apply the densitybased clustering algorithm OPTICS to events (i.e. objects having spatial and temporal positions), trajectories of moving entities, and spatial distributions of events or moving entities in different time intervals. Distances are computed in a specific way for each type of objects; moreover, it may be useful to have several different distance functions for the same type of objects. Thus, multiple distance functions available for trajectories support different analysis tasks. 
We demonstrate the use of our approach by example of two datasets from the VAST Challenge 2008: evacuation traces (trajectories of moving entities) and landings and interdictions of migrant boats (events).", "title": "" }, { "docid": "052a83669b39822eda51f2e7222074b4", "text": "A class-E synchronous rectifier has been designed and implemented using 0.13-μm CMOS technology. A design methodology based on the theory of time-reversal duality has been used where a class-E amplifier circuit is transformed into a class-E rectifier circuit. The methodology is distinctly different from other CMOS RF rectifier designs which use voltage multiplier techniques. Power losses in the rectifier are analyzed including saturation resistance in the switch, inductor losses, and current/voltage overlap losses. The rectifier circuit includes a 50-Ω single-ended RF input port with on-chip matching. The circuit is self-biased and completely powered from the RF input signal. Experimental results for the rectifier show a peak RF-to-dc conversion efficiency of 30% measured at a frequency of 2.4 GHz.", "title": "" }, { "docid": "0bcff493580d763dbc1dd85421546201", "text": "With the development of powerful imaging tools, editing images to change their data content has become an easy task to undertake. Tampering with image contents by adding, removing, or copying/moving elements without leaving a trace, or in a way that cannot be discovered by investigation, is an issue in the computer forensic world. The protection of information shared on the Internet, such as images and any other confidential information, is very significant. Nowadays, the objective of forensic image investigation tools and techniques is to reveal such tampering strategies and restore confidence in the reliability of digital media. This paper investigates the challenges of detecting steganography in computer forensics. Open source tools were used to analyze these challenges. The experimental investigation focuses on steganography applications that use the same algorithms to hide information exclusively within an image. The research findings indicate that if a certain steganography tool A is used to hide some information within a picture, then tool B, which uses the same procedure, would not be able to recover the embedded image.", "title": "" }, { "docid": "a0d34b1c003b7e88c2871deaaba761ed", "text": "Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (as shorthand for Deep REinforcement Sentence Simplification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.", "title": "" }, { "docid": "7e78dd27dd2d4da997ceef7e867b7cd2", "text": "Extracting facial features is a key step in facial expression recognition (FER). Inaccurate feature extraction very often results in erroneous categorization of facial expressions. Especially in robotic applications, environmental factors such as illumination variation may cause an FER system to extract features inaccurately.
In this paper, we propose a robust facial feature point extraction method to recognize facial expression in various lighting conditions. Before extracting facial features, a face is localized and segmented from a digitized image frame. Face preprocessing stage consists of face normalization and feature region localization steps to extract facial features efficiently. As regions of interest corresponding to relevant features are determined, Gabor jets are applied based on Gabor wavelet transformation to extract the facial points. Gabor jets are more invariable and reliable than gray-level values, which suffer from ambiguity as well as illumination variation while representing local features. Each feature point can be matched by a phase-sensitivity similarity function in the relevant regions of interest. Finally, the feature values are evaluated from the geometric displacement of facial points. After tested using the AR face database and the database built in our lab, average facial expression recognition rates of 84.1% and 81.3% are obtained respectively.", "title": "" }, { "docid": "be29160b73b9ab727eb760a108a7254a", "text": "Two-dimensional (2-D) analytical permanent-magnet (PM) eddy-current loss calculations are presented for slotless PM synchronous machines (PMSMs) with surface-inset PMs considering the current penetration effect. In this paper, the term slotless implies that either the stator is originally slotted but the slotting effects are neglected or the stator is originally slotless. The analytical magnetic field distribution is computed in polar coordinates from the 2-D subdomain method (i.e., based on formal resolution of Maxwell's equation applied in subdomain). Based on the predicted magnetic field distribution, the eddy-currents induced in the PMs are analytically obtained and the PM eddy-current losses considering eddy-current reaction field are calculated. The analytical expressions can be used for slotless PMSMs with any number of phases and any form of current and overlapping winding distribution. The effects of stator slotting are neglected and the current density distribution is modeled by equivalent current sheets located on the slot opening. To evaluate the efficacy of the proposed technique, the 2-D PM eddy-current losses for two slotless PMSMs are analytically calculated and compared with those obtained by 2-D finite-element analysis (FEA). The effects of the rotor rotational speed and the initial rotor mechanical angular position are investigated. The analytical results are in good agreement with those obtained by the 2-D FEA.", "title": "" }, { "docid": "136ed8dc00926ceec6d67b9ab35e8444", "text": "This paper addresses the property requirements of repair materials for high durability performance for concrete structure repair. It is proposed that the high tensile strain capacity of High Performance Fiber Reinforced Cementitious Composites (HPFRCC) makes such materials particularly suitable for repair applications, provided that the fresh properties are also adaptable to those required in placement techniques in typical repair applications. A specific version of HPFRCC, known as Engineered Cementitious Composites (ECC), is described. It is demonstrated that the fresh and hardened properties of ECC meet many of the requirements for durable repair performance. Recent experience in the use of this material in a bridge deck patch repair is highlighted. 
The origin of this article is a summary of a keynote lecture with the same title given at the Conference on Fiber Composites, High-Performance Concretes and Smart Materials, Chennai, India, Jan., 2004. It is only slightly updated here.", "title": "" }, { "docid": "d7eb92756c8c3fb0ab49d7b101d96343", "text": "Pretraining with language modeling and related unsupervised tasks has recently been shown to be a very effective enabling technology for the development of neural network models for language understanding tasks. In this work, we show that although language model-style pretraining is extremely effective at teaching models about language, it does not yield an ideal starting point for efficient transfer learning. By supplementing language model-style pretraining with further training on data-rich supervised tasks, we are able to achieve substantial additional performance improvements across the nine target tasks in the GLUE benchmark. We obtain an overall score of 76.9 on GLUE—a 2.3 point improvement over our baseline system adapted from Radford et al. (2018) and a 4.1 point improvement over Radford et al.’s reported score. We further use training data downsampling to show that the benefits of this supplementary training are even more pronounced in data-constrained regimes.", "title": "" }, { "docid": "0bf150f6cd566c31ec840a57d8d2fa55", "text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of querylike programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.", "title": "" }, { "docid": "ef4272cd4b0d4df9aa968cc9ff528c1e", "text": "Estimating action quality, the process of assigning a \"score\" to the execution of an action, is crucial in areas such as sports and health care. Unlike action recognition, which has millions of examples to learn from, the action quality datasets that are currently available are small-typically comprised of only a few hundred samples. This work presents three frameworks for evaluating Olympic sports which utilize spatiotemporal features learned using 3D convolutional neural networks (C3D) and perform score regression with i) SVR ii) LSTM and iii) LSTM followed by SVR. An efficient training mechanism for the limited data scenarios is presented for clip-based training with LSTM. 
The proposed systems show significant improvement over existing quality assessment approaches on the task of predicting scores of diving, vault, figure skating. SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.", "title": "" }, { "docid": "d8befc5eb47ac995e245cf9177c16d3d", "text": "Our hypothesis is that the video game industry, in the attempt to simulate a realistic experience, has inadvertently collected very accurate data which can be used to solve problems in the real world. In this paper we describe a novel approach to soccer match prediction that makes use of only virtual data collected from a video game(FIFA 2015). Our results were comparable and in some places better than results achieved by predictors that used real data. We also use the data provided for each player and the players present in the squad, to analyze the team strategy. Based on our analysis, we were able to suggest better strategies for weak teams", "title": "" }, { "docid": "eba545eb04c950ecd9462558c9d3da85", "text": "The ability to recognize facial expressions automatically enables novel applications in human-computer interaction and other areas. Consequently, there has been active research in this field, with several recent works utilizing Convolutional Neural Networks (CNNs) for feature extraction and inference. These works differ significantly in terms of CNN architectures and other factors. Based on the reported results alone, the performance impact of these factors is unclear. In this paper, we review the state of the art in image-based facial expression recognition using CNNs and highlight algorithmic differences and their performance impact. On this basis, we identify existing bottlenecks and consequently directions for advancing this research field. Furthermore, we demonstrate that overcoming one of these bottlenecks – the comparatively basic architectures of the CNNs utilized in this field – leads to a substantial performance increase. By forming an ensemble of modern deep CNNs, we obtain a FER2013 test accuracy of 75.2%, outperforming previous works without requiring auxiliary training data or face registration.", "title": "" }, { "docid": "a31692667282fe92f2eefc63cd562c9e", "text": "Multivariate data sets including hundreds of variables are increasingly common in many application areas. Most multivariate visualization techniques are unable to display such data effectively, and a common approach is to employ dimensionality reduction prior to visualization. Most existing dimensionality reduction systems focus on preserving one or a few significant structures in data. For many analysis tasks, however, several types of structures can be of high significance and the importance of a certain structure compared to the importance of another is often task-dependent. This paper introduces a system for dimensionality reduction by combining user-defined quality metrics using weight functions to preserve as many important structures as possible. The system aims at effective visualization and exploration of structures within large multivariate data sets and provides enhancement of diverse structures by supplying a range of automatic variable orderings. Furthermore it enables a quality-guided reduction of variables through an interactive display facilitating investigation of trade-offs between loss of structure and the number of variables to keep. 
The generality and interactivity of the system are demonstrated through a case scenario.", "title": "" } ]
scidocsrr
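Each record in this listing groups a query string and its hash identifier with two lists of passages, where every passage is a docid/text/title triple, and closes with a subset label like the one above. Below is a minimal sketch of how such records could be iterated if the dump were stored one JSON object per line; the JSONL assumption, the placeholder file name, and the field names used for the lists are illustrative assumptions, not something stated in the listing itself.

```python
import json

def iter_records(path):
    """Yield one retrieval record per line of a JSONL dump.

    Assumes each record carries fields named query_id, query,
    positive_passages, negative_passages, and subset.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

def passage_texts(record, key):
    """Pull the raw passage texts out of a positive/negative passage list."""
    return [p.get("text", "") for p in record.get(key, [])]

if __name__ == "__main__":
    # "scidocsrr.jsonl" is a placeholder path, not a file named in this listing.
    for rec in iter_records("scidocsrr.jsonl"):
        pos = passage_texts(rec, "positive_passages")
        neg = passage_texts(rec, "negative_passages")
        print(rec.get("query_id"), rec.get("query"), len(pos), len(neg))
```

Keeping the reader a generator avoids loading the whole dump at once, which matters here because individual passage texts are full paper abstracts.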
7b2ed986ed98f67cdc3456f543a73f54
In-DBMS Sampling-based Sub-trajectory Clustering
[ { "docid": "03aba9a44f1ee13cc7f16aadbebb7165", "text": "The increasing pervasiveness of location-acquisition technologies has enabled collection of huge amount of trajectories for almost any kind of moving objects. Discovering useful patterns from their movement behaviors can convey valuable knowledge to a variety of critical applications. In this light, we propose a novel concept, called gathering, which is a trajectory pattern modeling various group incidents such as celebrations, parades, protests, traffic jams and so on. A key observation is that these incidents typically involve large congregations of individuals, which form durable and stable areas with high density. In this work, we first develop a set of novel techniques to tackle the challenge of efficient discovery of gathering patterns on archived trajectory dataset. Afterwards, since trajectory databases are inherently dynamic in many real-world scenarios such as traffic monitoring, fleet management and battlefield surveillance, we further propose an online discovery solution by applying a series of optimization schemes, which can keep track of gathering patterns while new trajectory data arrive. Finally, the effectiveness of the proposed concepts and the efficiency of the approaches are validated by extensive experiments based on a real taxicab trajectory dataset.", "title": "" } ]
[ { "docid": "2089f931cf6fca595898959cbfbca28a", "text": "Continuum robotic manipulators articulate due to their inherent compliance. Tendon actuation leads to compression of the manipulator, extension of the actuators, and is limited by the practical constraint that tendons cannot support compression. In light of these observations, we present a new linear model for transforming desired beam configuration to tendon displacements and vice versa. We begin from first principles in solid mechanics by analyzing the effects of geometrically nonlinear tendon loads. These loads act both distally at the termination point and proximally along the conduit contact interface. The resulting model simplifies to a linear system including only the bending and axial modes of the manipulator as well as the actuator compliance. The model is then manipulated to form a concise mapping from beam configuration-space parameters to n redundant tendon displacements via the internal loads and strains experienced by the system. We demonstrate the utility of this model by implementing an optimal feasible controller. The controller regulates axial strain to a constant value while guaranteeing positive tendon forces and minimizing their magnitudes over a range of articulations. The mechanics-based model from this study provides insight as well as performance gains for this increasingly ubiquitous class of manipulators.", "title": "" }, { "docid": "c551e19208e367cc5546a3d46f7534c8", "text": "We propose a novel approach for solving the approximate nearest neighbor search problem in arbitrary metric spaces. The distinctive feature of our approach is that we can incrementally build a non-hierarchical distributed structure for given metric space data with a logarithmic complexity scaling on the size of the structure and adjustable accuracy probabilistic nearest neighbor queries. The structure is based on a small world graph with vertices corresponding to the stored elements, edges for links between them and the greedy algorithm as base algorithm for searching. Both search and addition algorithms require only local information from the structure. The performed simulation for data in the Euclidian space shows that the structure built using the proposed algorithm has navigable small world properties with logarithmic search complexity at fixed accuracy and has weak (power law) scalability with the dimensionality of the stored data.", "title": "" }, { "docid": "880aa3de3b839739927cbd82b7abcf8a", "text": "Can parents burn out? The aim of this research was to examine the construct validity of the concept of parental burnout and to provide researchers which an instrument to measure it. We conducted two successive questionnaire-based online studies, the first with a community-sample of 379 parents using principal component analyses and the second with a community- sample of 1,723 parents using both principal component analyses and confirmatory factor analyses. We investigated whether the tridimensional structure of the burnout syndrome (i.e., exhaustion, inefficacy, and depersonalization) held in the parental context. We then examined the specificity of parental burnout vis-à-vis professional burnout assessed with the Maslach Burnout Inventory, parental stress assessed with the Parental Stress Questionnaire and depression assessed with the Beck Depression Inventory. 
The results support the validity of a tri-dimensional burnout syndrome including exhaustion, inefficacy and emotional distancing with, respectively, 53.96 and 55.76% variance explained in study 1 and study 2, and reliability ranging from 0.89 to 0.94. The final version of the Parental Burnout Inventory (PBI) consists of 22 items and displays strong psychometric properties (CFI = 0.95, RMSEA = 0.06). Low to moderate correlations between parental burnout and professional burnout, parental stress and depression suggests that parental burnout is not just burnout, stress or depression. The prevalence of parental burnout confirms that some parents are so exhausted that the term \"burnout\" is appropriate. The proportion of burnout parents lies somewhere between 2 and 12%. The results are discussed in light of their implications at the micro-, meso- and macro-levels.", "title": "" }, { "docid": "9441113599194d172b6f618058b2ba88", "text": "Vegetable quality is frequently referred to size, shape, mass, firmness, color and bruises from which fruits can be classified and sorted. However, technological by small and middle producers implementation to assess this quality is unfeasible, due to high costs of software, equipment as well as operational costs. Based on these considerations, the proposal of this research is to evaluate a new open software that enables the classification system by recognizing fruit shape, volume, color and possibly bruises at a unique glance. The software named ImageJ, compatible with Windows, Linux and MAC/OS, is quite popular in medical research and practices, and offers algorithms to obtain the above mentioned parameters. The software allows calculation of volume, area, averages, border detection, image improvement and morphological operations in a variety of image archive formats as well as extensions by means of “plugins” written in Java.", "title": "" }, { "docid": "997a1ec16394a20b3a7f2889a583b09d", "text": "This second article of our series looks at the process of designing a survey. The design process begins with reviewing the objectives, examining the target population identified by the objectives, and deciding how best to obtain the information needed to address those objectives. However, we also need to consider factors such as determining the appropriate sample size and ensuring the largest possible response rate.To illustrate our ideas, we use the three surveys described in Part 1 of this series to suggest good and bad practice in software engineering survey research.", "title": "" }, { "docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21", "text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. 
Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.", "title": "" }, { "docid": "1583d8c41b15fb77787deef955ace886", "text": "The primary focus of autonomous driving research is to improve driving accuracy. While great progress has been made, state-of-the-art algorithms still fail at times. Such failures may have catastrophic consequences. It therefore is im- portant that automated cars foresee problems ahead as early as possible. This is also of paramount importance if the driver will be asked to take over. We conjecture that failures do not occur randomly. For instance, driving models may fail more likely at places with heavy traffic, at complex intersections, and/or under adverse weather/illumination conditions. This work presents a method to learn to predict the occurrence of these failures, i.e., to assess how difficult a scene is to a given driving model and to possibly give the human driver an early headsup. A camera- based driving model is developed and trained over real driving datasets. The discrepancies between the model's predictions and the human 'ground-truth' maneuvers were then recorded, to yield the 'failure' scores. Experimental results show that the failure score can indeed be learned and predicted. Thus, our prediction method is able to improve the overall safety of an automated driving model by alerting the human driver timely, leading to better human-vehicle collaborative driving.", "title": "" }, { "docid": "f81059b5ff3d621dfa9babc8e68bc0ab", "text": "A zero voltage switching (ZVS) isolated Sepic converter with active clamp topology is presented. The buck-boost type of active clamp is connected in parallel with the primary side of the transformer to absorb all the energy stored in the transformer leakage inductance and to limit the peak voltage on the switching device. During the transition interval between the main and auxiliary switches, the resonance based on the output capacitor of switch and the transformer leakage inductor can achieve ZVS for both switches. The operational principle, steady state analysis and design consideration of the proposed converter are presented. Finally, the proposed converter is verified by the experimental results based on an 180 W prototype circuit.", "title": "" }, { "docid": "c57c69fd1858b50998ec9706e34f6c46", "text": "Hashing has recently attracted considerable attention for large scale similarity search. However, learning compact codes with good performance is still a challenge. In many cases, the real-world data lies on a low-dimensional manifold embedded in high-dimensional ambient space. To capture meaningful neighbors, a compact hashing representation should be able to uncover the intrinsic geometric structure of the manifold, e.g., the neighborhood relationships between subregions. Most existing hashing methods only consider this issue during mapping data points into certain projected dimensions. When getting the binary codes, they either directly quantize the projected values with a threshold, or use an orthogonal matrix to refine the initial projection matrix, which both consider projection and quantization separately, and will not well preserve the locality structure in the whole learning process. In this paper, we propose a novel hashing algorithm called Locality Preserving Hashing to effectively solve the above problems. 
Specifically, we learn a set of locality preserving projections with a joint optimization framework, which minimizes the average projection distance and quantization loss simultaneously. Experimental comparisons with other state-of-the-art methods on two large scale datasets demonstrate the effectiveness and efficiency of our method.", "title": "" }, { "docid": "fd32f2117ae01049314a0c1cfb565724", "text": "Smart phones, tablets, and the rise of the Internet of Things are driving an insatiable demand for wireless capacity. This demand requires networking and Internet infrastructures to evolve to meet the needs of current and future multimedia applications. Wireless HetNets will play an important role toward the goal of using a diverse spectrum to provide high quality-of-service, especially in indoor environments where most data are consumed. An additional tier in the wireless HetNets concept is envisioned using indoor gigabit small-cells to offer additional wireless capacity where it is needed the most. The use of light as a new mobile access medium is considered promising. In this article, we describe the general characteristics of WiFi and VLC (or LiFi) and demonstrate a practical framework for both technologies to coexist. We explore the existing research activity in this area and articulate current and future research challenges based on our experience in building a proof-of-concept prototype VLC HetNet.", "title": "" }, { "docid": "638c9e4ba1c3d35fdb766c17b188529d", "text": "Association football is a popular sport, but it is also a big business. From a managerial perspective, the most important decisions that team managers make concern player transfers, so issues related to player valuation, especially the determination of transfer fees and market values, are of major concern. Market values can be understood as estimates of transfer fees—that is, prices that could be paid for a player on the football market—so they play an important role in transfer negotiations. These values have traditionally been estimated by football experts, but crowdsourcing has emerged as an increasingly popular approach to estimating market value. While researchers have found high correlations between crowdsourced market values and actual transfer fees, the process behind crowd judgments is not transparent, crowd estimates are not replicable, and they are updated infrequently because they require the participation of many users. Data analytics may thus provide a sound alternative or a complementary approach to crowd-based estimations of market value. Based on a unique data set that is comprised of 4217 players from the top five European leagues and a period of six playing seasons, we estimate players’ market values using multilevel regression analysis. The regression results suggest that data-driven estimates of market value can overcome several of the crowd’s practical limitations while producing comparably accurate numbers. Our results have important implications for football managers and scouts, as data analytics facilitates precise, objective, and reliable estimates of market value that can be updated at any time. © 2017 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license. 
( http://creativecommons.org/licenses/by-nc-nd/4.0/ )", "title": "" }, { "docid": "5dda89fbe7f5757588b5dff0e6c2565d", "text": "Introductory psychology students (120 females and 120 males) rated attractiveness and fecundity of one of six computer-altered female figures representing three body-weight categories (underweight, normal weight and overweight) and two levels of waist-to-hip ratio (WHR), one in the ideal range (0.72) and one in the non-ideal range (0.86). Both females and males judged underweight figures to be more attractive than normal or overweight figures, regardless of WHR. The female figure with the high WHR (0.86) was judged to be more attractive than the figure with the low WHR (0.72) across all body-weight conditions. Analyses of fecundity ratings revealed an interaction between weight and WHR such that the models did not differ in the normal weight category, but did differ in the underweight (model with WHR of 0.72 was less fecund) and overweight (model with WHR of 0.86 was more fecund) categories. These findings lend stronger support to sociocultural rather than evolutionary hypotheses.", "title": "" }, { "docid": "a492dcdbb9ec095cdfdab797c4b4e659", "text": "We present a new class of methods for high-dimensional nonparametric regression and classification called sparse additive models (SpAM). Our methods combine ideas from sparse linear modeling and additive nonparametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. SpAM is essentially a functional version of the grouped lasso of Yuan and Lin (2006). SpAM is also closely related to the COSSO model of Lin and Zhang (2006), but decouples smoothing and sparsity, enabling the use of arbitrary nonparametric smoothers. We give an analysis of the theoretical properties of sparse additive models, and present empirical results on synthetic and real data, showing that SpAM can be effective in fitting sparse nonparametric models in high dimensional data.", "title": "" }, { "docid": "813b4607e9675ad4811ba181a912bbe9", "text": "The end-Permian mass extinction was the most severe biodiversity crisis in Earth history. To better constrain the timing, and ultimately the causes of this event, we collected a suite of geochronologic, isotopic, and biostratigraphic data on several well-preserved sedimentary sections in South China. High-precision U-Pb dating reveals that the extinction peak occurred just before 252.28 ± 0.08 million years ago, after a decline of 2 per mil (‰) in δ(13)C over 90,000 years, and coincided with a δ(13)C excursion of -5‰ that is estimated to have lasted ≤20,000 years. The extinction interval was less than 200,000 years and synchronous in marine and terrestrial realms; associated charcoal-rich and soot-bearing layers indicate widespread wildfires on land. A massive release of thermogenic carbon dioxide and/or methane may have caused the catastrophic extinction.", "title": "" }, { "docid": "fe94febc520eab11318b49391d46476b", "text": "BACKGROUND\nDiabetes is a chronic disease, with high prevalence across many nations, which is characterized by elevated levels of blood glucose and a risk of acute and chronic complications. The Kingdom of Saudi Arabia (KSA) has one of the highest levels of diabetes prevalence globally. It is well-known that the treatment of diabetes is a complex process and requires both lifestyle change and a clear pharmacologic treatment plan.
To avoid complications from diabetes, effective behavioural change, together with extensive education and self-management, is one of the key approaches to alleviate such complications. However, this process is lengthy and expensive. Recent studies on the use of smart phone technologies for diabetes self-management have shown them to be an effective tool in controlling hemoglobin (HbA1c) levels, especially in type-2 diabetic (T2D) patients. However, to date no reported study has addressed the effectiveness of this approach in Saudi patients. This study investigates the impact of using mobile health technologies for the self-management of diabetes in Saudi Arabia.\n\n\nMETHODS\nIn this study, an intelligent mobile diabetes management system (SAED), tailored for T2D patients in KSA, was developed. A pilot study of the SAED system was conducted in Saudi Arabia with 20 diabetic patients for a duration of 6 months. The patients were randomly categorized into a control group who did not use the SAED system and an intervention group who used the SAED system for their diabetes management during this period. At the end of the follow-up period, the HbA1c levels of the patients in both groups were measured, and a diabetes knowledge test was also conducted to assess the diabetes awareness of the patients.\n\n\nRESULTS\nThe results of the SAED pilot study showed that the patients in the intervention group were able to significantly decrease their HbA1c levels compared to the control group. The SAED system also enhanced the diabetes awareness amongst the patients in the intervention group during the trial period. These outcomes confirm the global studies on the effectiveness of smart phone technologies in diabetes management. The significance of the study is that this was one of the first such studies conducted on Saudi patients and of their acceptance of such technology in their diabetes self-management treatment plans.\n\n\nCONCLUSIONS\nThe pilot study of the SAED system showed that a mobile health technology can significantly improve the HbA1c levels among Saudi diabetics and improve their disease management plans. The SAED system can also be an effective and low-cost solution in improving the quality of life of diabetic patients in the Kingdom, considering the high level of prevalence and the increasing economic burden of this disease.", "title": "" }, { "docid": "98d40e5a6df5b6a3ab39a04bf04c6a65", "text": "The Internet has increased the flexibility of retailers, allowing them to operate an online arm in addition to their physical stores. The online channel offers potential benefits in selling to customer segments that value the convenience of online shopping, but it also raises new challenges. These include the higher likelihood of costly product returns when customers' ability to "touch and feel" products is important in determining fit. We study competing retailers that can operate dual channels ("bricks and clicks") and examine how pricing strategies and physical store assistance levels change as a result of the additional Internet outlet. A central result we obtain is that when differentiation among competing retailers is not too high, having an online channel can actually increase investment in store assistance levels (e.g., greater shelf display, more-qualified sales staff, floor samples) and decrease profits.
Consequently, when the decision to open an Internet channel is endogenized, there can exist an asymmetric equilibrium where only one retailer elects to operate an online arm but earns lower profits than its bricks-only rival. We also characterize equilibria where firms open an online channel, even though consumers only use it for research and learning purposes but buy in stores. A number of extensions are discussed, including retail settings where firms carry multiple product categories, shipping and handling costs, and the role of store assistance in impacting consumer perceived benefits.", "title": "" }, { "docid": "ecd7fca4f2ea0207582755a2b9733419", "text": "This work introduces a novel framework for quantifying the presence and strength of recurrent dynamics in video data. Specifically, we provide continuous measures of periodicity (perfect repetition) and quasiperiodicity (superposition of periodic modes with non-commensurate periods), in a way which does not require segmentation, training, object tracking or 1-dimensional surrogate signals. Our methodology operates directly on video data. The approach combines ideas from nonlinear time series analysis (delay embeddings) and computational topology (persistent homology), by translating the problem of finding recurrent dynamics in video data, into the problem of determining the circularity or toroidality of an associated geometric space. Through extensive testing, we show the robustness of our scores with respect to several noise models/levels; we show that our periodicity score is superior to other methods when compared to human-generated periodicity rankings; and furthermore, we show that our quasiperiodicity score clearly indicates the presence of biphonation in videos of vibrating vocal folds, which has never before been accomplished end to end quantitatively.", "title": "" }, { "docid": "2a89fb135d7c53bda9b1e3b8598663a5", "text": "We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.", "title": "" }, { "docid": "850a7daa56011e6c53b5f2f3e33d4c49", "text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. 
In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.", "title": "" }, { "docid": "dc54b73eb740bc1bbdf1b834a7c40127", "text": "This paper discusses the design and evaluation of an online social network used within twenty-two established after school programs across three major urban areas in the Northeastern United States. The overall goal of this initiative is to empower students in grades K-8 to prevent obesity through healthy eating and exercise. The online social network was designed to support communication between program participants. Results from the related evaluation indicate that the online social network has potential for advancing awareness and community action around health related issues; however, greater attention is needed to professional development programs for program facilitators, and design features could better support critical thinking, social presence, and social activity.", "title": "" } ]
scidocsrr
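Because every record pairs a query with passages already separated into relevant and non-relevant lists, it has exactly the shape needed to score a candidate ranker. The sketch below does this with a deliberately naive bag-of-words overlap; the record layout mirrors the listing above, but the scoring function, the list key names, and the stub texts are assumptions made for illustration, not the dataset's own retrieval model.

```python
from collections import Counter

def tokenize(text):
    # Crude whitespace tokenizer; good enough for a toy illustration.
    return text.lower().split()

def overlap_score(query, passage_text):
    """Count query tokens (with multiplicity) that also occur in the passage."""
    query_counts = Counter(tokenize(query))
    passage_tokens = set(tokenize(passage_text))
    return sum(n for tok, n in query_counts.items() if tok in passage_tokens)

def rank_record(record):
    """Score every labeled passage of one record and sort best-first."""
    scored = []
    for label, key in (("pos", "positive_passages"), ("neg", "negative_passages")):
        for passage in record.get(key, []):
            score = overlap_score(record["query"], passage["text"])
            scored.append((score, label, passage["docid"]))
    return sorted(scored, key=lambda item: item[0], reverse=True)

# Stub record shaped like the ones in this listing; real text fields are full abstracts.
demo = {
    "query": "In-DBMS Sampling-based Sub-trajectory Clustering",
    "positive_passages": [{"docid": "p1", "text": "clustering sub-trajectory data inside a DBMS"}],
    "negative_passages": [{"docid": "n1", "text": "a class-E synchronous rectifier in CMOS"}],
}
print(rank_record(demo))  # the labeled-positive passage should score highest
```

Swapping overlap_score for a real model's relevance score is all that is needed to turn this loop into a reranking evaluation over records like these.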
45152911817d270e1896874a457c297a
Type-Aware Distantly Supervised Relation Extraction with Linked Arguments
[ { "docid": "afd00b4795637599f357a7018732922c", "text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.", "title": "" }, { "docid": "79ad9125b851b6d2c3ed6fb1c5cf48e1", "text": "In this paper, we extend distant supervision (DS) based on Wikipedia for Relation Extraction (RE) by considering (i) relations defined in external repositories, e.g. YAGO, and (ii) any subset of Wikipedia documents. We show that training data constituted by sentences containing pairs of named entities in target relations is enough to produce reliable supervision. Our experiments with state-of-the-art relation extraction models, trained on the above data, show a meaningful F1 of 74.29% on a manually annotated test set: this highly improves the state-of-art in RE using DS. Additionally, our end-to-end experiments demonstrated that our extractors can be applied to any general text document.", "title": "" }, { "docid": "c4a925ced6eb9bea9db96136905c3e19", "text": "Knowledge of objects and their parts, meronym relations, are at the heart of many question-answering systems, but manually encoding these facts is impractical. Past researchers have tried hand-written patterns, supervised learning, and bootstrapped methods, but achieving both high precision and recall has proven elusive. This paper reports on a thorough exploration of distant supervision to learn a meronym extractor for the domain of college biology. We introduce a novel algorithm, generalizing the ``at least one'' assumption of multi-instance learning to handle the case where a fixed (but unknown) percentage of bag members are positive examples. Detailed experiments compare strategies for mention detection, negative example generation, leveraging out-of-domain meronyms, and evaluate the benefit of our multi-instance percentage model.", "title": "" }, { "docid": "44582f087f9bb39d6e542ff7b600d1c7", "text": "We propose a new deterministic approach to coreference resolution that combines the global information and precise features of modern machine-learning models with the transparency and modularity of deterministic, rule-based systems. Our sieve architecture applies a battery of deterministic coreference models one at a time from highest to lowest precision, where each model builds on the previous model's cluster output. The two stages of our sieve-based architecture, a mention detection stage that heavily favors recall, followed by coreference sieves that are precision-oriented, offer a powerful way to achieve both high precision and high recall. Further, our approach makes use of global information through an entity-centric model that encourages the sharing of features across all mentions that point to the same real-world entity. Despite its simplicity, our approach gives state-of-the-art performance on several corpora and genres, and has also been incorporated into hybrid state-of-the-art coreference systems for Chinese and Arabic. 
Our system thus offers a new paradigm for combining knowledge in rule-based systems that has implications throughout computational linguistics.", "title": "" }, { "docid": "9c44aba7a9802f1fe95fbeb712c23759", "text": "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.", "title": "" }, { "docid": "904db9e8b0deb5027d67bffbd345b05f", "text": "Entity Recognition (ER) is a key component of relation extraction systems and many other natural-language processing applications. Unfortunately, most ER systems are restricted to produce labels from to a small set of entity classes, e.g., person, organization, location or miscellaneous. In order to intelligently understand text and extract a wide range of information, it is useful to more precisely determine the semantic classes of entities mentioned in unstructured text. This paper defines a fine-grained set of 112 tags, formulates the tagging problem as multi-class, multi-label classification, describes an unsupervised method for collecting training data, and presents the FIGER implementation. Experiments show that the system accurately predicts the tags for entities. Moreover, it provides useful information for a relation extraction system, increasing the F1 score by 93%. We make FIGER and its data available as a resource for future work.", "title": "" } ]
[ { "docid": "5e14acfc68e8cb1ae7ea9b34eba420e0", "text": "Education University of California, Berkeley (2008-2013) Ph.D. in Computer Science Thesis: Surface Web Semantics for Structured Natural Language Processing Advisor: Dan Klein. Committee members: Dan Klein, Marti Hearst, Line Mikkelsen, Nelson Morgan University of California, Berkeley (2012) Master of Science (M.S.) in Computer Science Thesis: An All-Fragments Grammar for Simple and Accurate Parsing Advisor: Dan Klein Indian Institute of Technology, Kanpur (2004-2008) Bachelor of Technology (B.Tech.) in Computer Science and Engineering GPA: 3.96/4.00 (Institute and Department Rank 2) Cornell University (Summer 2007) CS490 (Independent Research and Reading) GPA: 4.00/4.00 Advisors: Lillian Lee, Claire Cardie", "title": "" }, { "docid": "5f94ad6047ec9cf565b9960e89bbc913", "text": "In this paper, we compare the geometrical performance between the rigorous sensor model (RSM) and rational function model (RFM) in the sensor modeling of FORMOSAT-2 satellite images. For the RSM, we provide a least squares collocation procedure to determine the precise orbits. As for the RFM, we analyze the model errors when a large amount of quasi-control points, which are derived from the satellite ephemeris and attitude data, are employed. The model errors with respect to the length of the image strip are also demonstrated. Experimental results show that the RFM is well behaved, indicating that its positioning errors is similar to that of the RSM. Introduction Sensor orientation modeling is a prerequisite for the georeferencing of satellite images or 3D object reconstruction from satellite stereopairs. Nowadays, most of the high-resolution satellites use linear array pushbroom scanners. Based on the pushbroom scanning geometry, a number of investigations have been reported regarding the geometric accuracy of linear array images (Westin, 1990; Chen and Lee, 1993; Li, 1998; Tao et al., 2000; Toutin, 2003; Grodecki and Dial, 2003). The geometric modeling of the sensor orientation may be divided into two categories, namely, the rigorous sensor model (RSM) and the rational function model (RFM) (Toutin, 2004). Capable of fully delineating the imaging geometry between the image space and object space, the RSM has been recognized in providing the most precise geometrical processing of satellite images. Based on the collinearity condition, an image point corresponds to a ground point using the employment of the orientation parameters, which are expressed as a function of the sampling time. Due to the dynamic sampling, the RSM contains many mathematical calculations, which can cause problems for researchers who are not familiar with the data preprocessing. Moreover, with the increasing number of Earth resource satellites, researchers need to familiarize themselves with the uniqueness and complexity of each sensor model. Therefore, a generic sensor model of the geometrical processing is needed for simplification. (Dowman and Michalis, 2003). The RFM is a generalized sensor model that is used as an alternative for the RSM. The model uses a pair of ratios of two polynomials to approximate the collinearity condition equations. The RFM has been successfully applied to several high-resolution satellite images such as Ikonos (Di et al., 2003; Grodecki and Dial, 2003; Fraser and Hanley, 2003) and QuickBird (Robertson, 2003). 
Due to its simple implementation and standardization (NIMA, 2000), the approach has been widely used in the remote sensing community. Launched on 20 May 2004, FORMOSAT-2 is operated by the National Space Organization of Taiwan. The satellite operates in a sun-synchronous orbit at an altitude of 891 km and with an inclination of 99.1 degrees. It has a swath width of 24 km and orbits the Earth exactly 14 times per day, which makes daily revisits possible (NSPO, 2005). Its panchromatic images have a resolution of 2 meters, while the multispectral sensor produces 8 meter resolution images covering the blue, green, red, and NIR bands. Its high performance provides an excellent data resource for remote sensing researchers. The major objective of this investigation is to compare the geometrical performances between the RSM and RFM when FORMOSAT-2 images are employed. A least squares collocation-based RSM will also be proposed in the paper. In the reconstruction of the RFM, rational polynomial coefficients are generated by using the on-board ephemeris and attitude data. In addition to the comparison of the two models, the modeling error of the RFM is analyzed when long image strips are used. Rigorous Sensor Models The proposed method essentially comprises two parts. The first involves the development of the mathematical model for time-dependent orientations. The second performs the least squares collocation to compensate for the local systematic errors. Orbit Fitting There are two types of sensor models for pushbroom satellite images, i.e., orbital elements (Westin, 1990) and state vectors (Chen and Chang, 1998). The orbital elements use the Kepler elements as the orbital parameters, while the state vectors calculate the orbital parameters directly by using the position vector. Although both sensor models are robust, the state vector model provides simpler mathematical calculations. For this reason, we select the state vector approach in this investigation. Three steps are included in the orbit modeling: (a) Initialization of the orientation parameters using on-board ephemeris data; (b) Compensation of the systematic errors of the orbital parameters and attitude data via ground control points (GCPs); and (c) Modification of the orbital parameters by using the Least Squares Collocation (Mikhail and Ackermann, 1982) technique.", "title": "" }, { "docid": "945b2067076bd47485b39c33fb062ec1", "text": "Computation of floating-point transcendental functions is of considerable importance in a wide variety of scientific applications, where area cost, error and latency are important requirements to be met. This paper describes a flexible FPGA implementation of a parameterizable floating-point library for computing sine, cosine, arctangent and exponential functions using the CORDIC algorithm. The novelty of the proposed architecture is that by sharing the same resources the CORDIC algorithm can be used in two operation modes, allowing it to compute the sine, cosine or arctangent functions.
Additionally, in case of the exponential function, the architectures change automatically between the CORDIC or a Taylor approach, which helps to improve the precision characteristics of the circuit, specifically for small input values after the argument reduction. Synthesis of the circuits and an experimental analysis of the errors have demonstrated the correctness and effectiveness of the implemented cores and allow the designer to choose, for general-purpose applications, a suitable bit-width representation and number of iterations of the CORDIC algorithm.", "title": "" }, { "docid": "e3e4d19aa9a5db85f30698b7800d2502", "text": "In this paper we examine the use of a mathematical procedure, called Principal Component Analysis, in Recommender Systems. The resulting filtering algorithm applies PCA on user ratings and demographic data, aiming to improve various aspects of the recommendation process. After a brief introduction to PCA, we provide a discussion of the proposed PCADemog algorithm, along with possible ways of combining it with different sources of filtering data. The experimental part of this work tests distinct parameterizations for PCA-Demog, identifying those with the best performance. Finally, the paper compares their results with those achieved by other filtering approaches, and draws interesting conclusions.", "title": "" }, { "docid": "b4e3d2f5e4bb1238cb6f4dad5c952c4c", "text": "Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles---under repeated exposure of one style, scenario effects obscure any explanation effects. Our results suggests there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.", "title": "" }, { "docid": "a39364020ec95a3d35dfe929d4a000c0", "text": "The Internet of Things (IoTs) refers to the inter-connection of billions of smart devices. The steadily increasing number of IoT devices with heterogeneous characteristics requires that future networks evolve to provide a new architecture to cope with the expected increase in data generation. Network function virtualization (NFV) provides the scale and flexibility necessary for IoT services by enabling the automated control, management and orchestration of network resources. In this paper, we present a novel NFV enabled IoT architecture targeted for a state-of-the art operating room environment. 
We use web services based on the representational state transfer (REST) web architecture as the IoT application's southbound interface and illustrate its applicability via two different scenarios.", "title": "" }, { "docid": "6c5cabfa5ee5b9d67ef25658a4b737af", "text": "Sentence compression is the task of producing a summary of a single sentence. The compressed sentence should be shorter, contain the important content from the original, and itself be grammatical. The three papers discussed here take different approaches to identifying important content, determining which sentences are grammatical, and jointly optimizing these objectives. One family of approaches we will discuss is those that are tree-based, which create a compressed sentence by making edits to the syntactic tree of the original sentence. A second type of approach is sentence-based, which generates strings directly. Orthogonal to either of these two approaches is whether sentences are treated in isolation or if the surrounding discourse affects compressions. We compare a tree-based, a sentence-based, and a discourse-based approach and conclude with ideas for future work in this area. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-10-20. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/929 Methods for Sentence Compression", "title": "" }, { "docid": "684555a1b5eb0370eebee8cbe73a82ff", "text": "This paper identifies and examines the key principles underlying building a state-of-the-art grammatical error correction system. We do this by analyzing the Illinois system that placed first among seventeen teams in the recent CoNLL-2013 shared task on grammatical error correction. The system focuses on five different types of errors common among non-native English writers. We describe four design principles that are relevant for correcting all of these errors, analyze the system along these dimensions, and show how each of these dimensions contributes to the performance.", "title": "" }, { "docid": "0d2b90dad65e01289008177a4ebbbade", "text": "A good test suite is one that detects real faults. Because the set of faults in a program is usually unknowable, this definition is not useful to practitioners who are creating test suites, nor to researchers who are creating and evaluating tools that generate test suites. In place of real faults, testing research often uses mutants, which are artificial faults -- each one a simple syntactic variation -- that are systematically seeded throughout the program under test. Mutation analysis is appealing because large numbers of mutants can be automatically-generated and used to compensate for low quantities or the absence of known real faults. Unfortunately, there is little experimental evidence to support the use of mutants as a replacement for real faults. This paper investigates whether mutants are indeed a valid substitute for real faults, i.e., whether a test suite’s ability to detect mutants is correlated with its ability to detect real faults that developers have fixed. Unlike prior studies, these investigations also explicitly consider the conflating effects of code coverage on the mutant detection rate. Our experiments used 357 real faults in 5 open-source applications that comprise a total of 321,000 lines of code. Furthermore, our experiments used both developer-written and automatically-generated test suites. 
The results show a statistically significant correlation between mutant detection and real fault detection, independently of code coverage. The results also give concrete suggestions on how to improve mutation analysis and reveal some inherent limitations.", "title": "" }, { "docid": "d8567a34caacdb22a0aea281a1dbbccb", "text": "Traditionally, interference protection is guaranteed through a policy of spectrum licensing, whereby wireless systems get exclusive access to spectrum. This is an effective way to prevent interference, but it leads to highly inefficient use of spectrum. Cognitive radio along with software radio, spectrum sensors, mesh networks, and other emerging technologies can facilitate new forms of spectrum sharing that greatly improve spectral efficiency and alleviate scarcity, if policies are in place that support these forms of sharing. On the other hand, new technology that is inconsistent with spectrum policy will have little impact. This paper discusses policies that can enable or facilitate use of many spectrum-sharing arrangements, where the arrangements are categorized as being based on coexistence or cooperation and as sharing among equals or primary-secondary sharing. A shared spectrum band may be managed directly by the regulator, or this responsibility may be delegated in large part to a license-holder. The type of sharing arrangement and the entity that manages it have a great impact on which technical approaches are viable and effective. The most efficient and cost-effective form of spectrum sharing will depend on the type of systems involved, where systems under current consideration are as diverse as television broadcasters, cellular carriers, public safety systems, point-to-point links, and personal and local-area networks. In addition, while cognitive radio offers policy-makers the opportunity to improve spectral efficiency, cognitive radio also provides new challenges for policy enforcement. A responsible regulator will not allow a device into the marketplace that might harm other systems. Thus, designers must seek innovative ways to assure regulators that new devices will comply with policy requirements and will not cause harmful interference.", "title": "" }, { "docid": "395dcc7c09562f358c07af9c999fbdc7", "text": "Protecting source code against reverse engineering and theft is an important problem. The goal is to carry out computations using confidential algorithms on an untrusted party while ensuring confidentiality of algorithms. This problem has been addressed for Boolean circuits known as ‘circuit privacy’. Circuits corresponding to real-world programs are impractical. Well-known obfuscation techniques are highly practicable, but provide only limited security, e.g., no piracy protection. In this work, we modify source code yielding programs with adjustable performance and security guarantees ranging from indistinguishability obfuscators to (non-secure) ordinary obfuscation. The idea is to artificially generate ‘misleading’ statements. Their results are combined with the outcome of a confidential statement using encrypted selector variables. Thus, an attacker must ‘guess’ the encrypted selector variables to disguise the confidential source code. We evaluated our method using more than ten programmers as well as pattern mining across open source code repositories to gain insights of (micro-)coding patterns that are relevant for generating misleading statements. 
The evaluation reveals that our approach is effective in that it successfully preserves source code confidentiality.", "title": "" }, { "docid": "5cdb981566dfd741c9211902c0c59d50", "text": "Since parental personality traits are assumed to play a role in parenting behaviors, the current study examined the relation between parental personality and parenting style among 688 Dutch parents of adolescents in the SMILE study. The study assessed Big Five personality traits and derived parenting styles (authoritative, authoritarian, indulgent, and uninvolved) from scores on the underlying dimensions of support and strict control. Regression analyses were used to determine which personality traits were associated with parenting dimensions and styles. As regards dimensions, the two aspects of personality reflecting interpersonal interactions (extraversion and agreeableness) were related to supportiveness. Emotional stability was associated with lower strict control. As regards parenting styles, extraverted, agreeable, and less emotionally stable individuals were most likely to be authoritative parents. Conscientiousness and openness did not relate to general parenting, but might be associated with more content-specific acts of parenting.", "title": "" }, { "docid": "2bd3f3e72d99401cdf6f574982bc65ff", "text": "In the future smart grid, both users and power companies can potentially benefit from the economical and environmental advantages of smart pricing methods to more effectively reflect the fluctuations of the wholesale price into the customer side. In addition, smart pricing can be used to seek social benefits and to implement social objectives. To achieve social objectives, the utility company may need to collect various information about users and their energy consumption behavior, which can be challenging. In this paper, we propose an efficient pricing method to tackle this problem. We assume that each user is equipped with an energy consumption controller (ECC) as part of its smart meter. All smart meters are connected to not only the power grid but also a communication infrastructure. This allows two-way communication among smart meters and the utility company. We analytically model each user's preferences and energy consumption patterns in form of a utility function. Based on this model, we propose a Vickrey-Clarke-Groves (VCG) mechanism which aims to maximize the social welfare, i.e., the aggregate utility functions of all users minus the total energy cost. Our design requires that each user provides some information about its energy demand. In return, the energy provider will determine each user's electricity bill payment. Finally, we verify some important properties of our proposed VCG mechanism for demand side management such as efficiency, user truthfulness, and nonnegative transfer. Simulation results confirm that the proposed pricing method can benefit both users and utility companies.", "title": "" }, { "docid": "4f6979ca99ec7fb0010fd102e7796248", "text": "Cryptographic systems are essential for computer and communication security, for instance, RSA is used in PGP Email clients and AES is employed in full disk encryption. In practice, the cryptographic keys are loaded and stored in RAM as plain-text, and therefore vulnerable to physical memory attacks (e.g., cold-boot attacks). To tackle this problem, we propose Copker, which implements asymmetric cryptosystems entirely within the CPU, without storing plain-text private keys in the RAM. 
In its active mode, Copker stores kilobytes of sensitive data, including the private key and the intermediate states, only in onchip CPU caches (and registers). Decryption/signing operations are performed without storing sensitive information in system memory. In the suspend mode, Copker stores symmetrically encrypted private keys in memory, while employs existing solutions to keep the key-encryption key securely in CPU registers. Hence, Copker releases the system resources in the suspend mode. In this paper, we implement Copker with the most common asymmetric cryptosystem, RSA, with the support of multiple private keys. We show that Copker provides decryption/signing services that are secure against physical memory attacks. Meanwhile, with intensive experiments, we demonstrate that our implementation of Copker is secure and requires reasonable overhead. Keywords—Cache-as-RAM; cold-boot attack; key management; asymmetric cryptography implementation.", "title": "" }, { "docid": "5565f51ad8e1aaee43f44917befad58a", "text": "We explore the application of deep residual learning and dilated convolutions to the keyword spotting task, using the recently-released Google Speech Commands Dataset as our benchmark. Our best residual network (ResNet) implementation significantly outperforms Google's previous convolutional neural networks in terms of accuracy. By varying model depth and width, we can achieve compact models that also outperform previous small-footprint variants. To our knowledge, we are the first to examine these approaches for keyword spotting, and our results establish an open-source state-of-the-art reference to support the development of future speech-based interfaces.", "title": "" }, { "docid": "4daec6170f18cc8896411e808e53355f", "text": "The goal of this note is to point out that any distributed representation can be turned into a classifier through inversion via Bayes rule. The approach is simple and modular, in that it will work with any language representation whose training can be formulated as optimizing a probability model. In our application to 2 million sentences from Yelp reviews, we also find that it performs as well as or better than complex purpose-built algorithms.", "title": "" }, { "docid": "a53f26ef068d11ea21b9ba8609db6ddf", "text": "This paper presents a novel approach based on enhanced local directional patterns (ELDP) to face recognition, which adopts local edge gradient information to represent face images. Specially, each pixel of every facial image sub-block gains eight edge response values by convolving the local 3 3 neighborhood with eight Kirsch masks, respectively. ELDP just utilizes the directions of the most encoded into a double-digit octal number to produce the ELDP codes. The ELDP dominant patterns (ELDP) are generated by statistical analysis according to the occurrence rates of the ELDP codes in a mass of facial images. Finally, the face descriptor is represented by using the global concatenated histogram based on ELDP or ELDP extracted from the face image which is divided into several sub-regions. The performances of several single face descriptors not integrated schemes are evaluated in face recognition under different challenges via several experiments. The experimental results demonstrate that the proposed method is more robust to non-monotonic illumination changes and slight noise without any filter. & 2013 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "77754266da79a87b99e51b0088888550", "text": "The paper proposed a novel automatic target recognition (ATR) system for classification of three types of ground vehicles in the moving and stationary target acquisition and recognition (MSTAR) public release database. First MSTAR image chips are represented as fine and raw feature vectors, where raw features compensate for the target pose estimation error that corrupts fine image features. Then, the chips are classified by using the adaptive boosting (AdaBoost) algorithm with the radial basis function (RBF) network as the base learner. Since the RBF network is a binary classifier, the multiclass problem was decomposed into a set of binary ones through the error-correcting output codes (ECOC) method, specifying a dictionary of code words for the set of three possible classes. AdaBoost combines the classification results of the RBF network for each binary problem into a code word, which is then \"decoded\" as one of the code words (i.e., ground-vehicle classes) in the specified dictionary. Along with classification, within the AdaBoost framework, we also conduct efficient fusion of the fine and raw image-feature vectors. The results of large-scale experiments demonstrate that our ATR scheme outperforms the state-of-the-art systems reported in the literature", "title": "" }, { "docid": "ba16a6634b415dd2c478c83e1f65cb3c", "text": "Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is notoriously challenging but is fundamental to natural language understanding and many applications. With the availability of large annotated data, neural network models have recently advanced the field significantly. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.3% on the standard benchmark, the Stanford Natural Language Inference dataset. This result is achieved first through our enhanced sequential encoding model, which outperforms the previous best model that employs more complicated network architectures, suggesting that the potential of sequential LSTM-based models have not been fully explored yet in previous work. We further show that by explicitly considering recursive architectures, we achieve additional improvement. Particularly, incorporating syntactic parse information contributes to our best result; it improves the performance even when the parse information is added to an already very strong system.", "title": "" } ]
scidocsrr
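Each record in this dump follows the same shape: a query_id hash, a query string (typically a paper title), a positive_passages list and a negative_passages list whose entries are {docid, text, title} objects, and a subset tag such as scidocsrr. The sketch below shows one way such records could be read back into (query, passage, label) pairs; it assumes the dump is serialized as JSON Lines, and the file name scidocsrr.jsonl is only a placeholder.

```python
import json

def iter_pairs(path):
    """Yield (query, passage_text, label) triples from a JSON Lines dump whose
    records carry the fields shown above: query_id, query, positive_passages,
    negative_passages, and subset."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            # Positive passages are relevant to the query (label 1), negatives are not (label 0).
            for passage in record["positive_passages"]:
                yield record["query"], passage["text"], 1
            for passage in record["negative_passages"]:
                yield record["query"], passage["text"], 0

if __name__ == "__main__":
    n_pos = n_neg = 0
    for _query, _text, label in iter_pairs("scidocsrr.jsonl"):  # placeholder path
        n_pos += label
        n_neg += 1 - label
    print(f"{n_pos} positive and {n_neg} negative query-passage pairs")
```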
097ea71b3e7607aaffe383426ecdcfc4
Two axes orthogonal drive transmission for omnidirectional crawler with surface contact
[ { "docid": "ac644a44b1e8cfe99e49461d37ff74e6", "text": "Holonomic omnidirectional mobile robots are useful because of their high level of mobility in narrow or crowded areas, and omnidirectional robots equipped with normal tires are desired for their ability to surmount difference in level as well as their vibration suppression and ride comfort. A caster-drive mechanism using normal tires has been developed to realize a holonomic omnidirectional robot, but some problems remain. Here we describe effective systems to control the caster-drive wheels of an omnidirectional mobile robot. We propose a Differential-Drive Steering System (DDSS) using differential gearing to improve the operation ratio of motors. The DDSS generates driving and steering torque effectively from two motors. Simulation and experimental results show that the proposed system is effective for holonomic omnidirectional mobile robots.", "title": "" }, { "docid": "9b646ef8c6054f9a4d85cf25e83d415c", "text": "In this paper, a mobile robot with a tetrahedral shape for its basic structure is presented as a thrown robot for search and rescue robot application. The Tetrahedral Mobile Robot has its body in the center of the whole structure. The driving parts that produce the propelling force are located at each corner. As a driving wheel mechanism, we have developed the \"Omni-Ball\" with one active and two passive rotational axes, which are explained in detail. An actual prototype model has been developed to illustrate the concept and to perform preliminary motion experiments, through which the basic performance of the Tetrahedral Mobile Robot was confirmed.", "title": "" } ]
[ { "docid": "0213b953415a2aa9bab63f9c210c3dcf", "text": "Purpose – The purpose of this paper is to distinguish and describe knowledge management (KM) technologies according to their support for strategy. Design/methodology/approach – This study employed an ontology development method to describe the relations between technology, KM and strategy, and to categorize available KM technologies according to those relations. Ontologies are formal specifications of concepts in a domain and their inter-relationships, and can be used to facilitate common understanding and knowledge sharing. The study focused particularly on two sub-domains of the KM field: KM strategies and KM technologies. Findings – ’’KM strategy’’ has three meanings in the literature: approach to KM, knowledge strategy, and KM implementation strategy. Also, KM technologies support strategy via KM initiatives based on particular knowledge strategies and approaches to KM. The study distinguishes three types of KM technologies: component technologies, KM applications, and business applications. They all can be described in terms of ’’creation’’ and ’’transfer’’ knowledge strategies, and ’’personalization’’ and ’’codification’’ approaches to KM. Research limitations/implications – The resulting framework suggests that KM technologies can be analyzed better in the context of KM initiatives, instead of the usual approach associating them with knowledge processes. KM initiatives provide the background and contextual elements necessary to explain technology adoption and use. Practical implications – The framework indicates three alternative modes for organizational adoption of KM technologies: custom development of KM systems from available component technologies; purchase of KM-specific applications; or purchase of business-driven applications that embed KM functionality. It also lists adequate technologies and provides criteria for selection in any of the cases. Originality/value – Among the many studies analyzing the role of technology in KM, an association with strategy has been missing. This paper contributes to filling this gap, integrating diverse contributions via a clearer definition of concepts and a visual representation of their relationships. This use of ontologies as a method, instead of an artifact, is also uncommon in the literature.", "title": "" }, { "docid": "7e40c98b9760e1f47a0140afae567b7f", "text": "Low-level saliency cues or priors do not produce good enough saliency detection results especially when the salient object presents in a low-contrast background with confusing visual appearance. This issue raises a serious problem for conventional approaches. In this paper, we tackle this problem by proposing a multi-context deep learning framework for salient object detection. We employ deep Convolutional Neural Networks to model saliency of objects in images. Global context and local context are both taken into account, and are jointly modeled in a unified multi-context deep learning framework. To provide a better initialization for training the deep neural networks, we investigate different pre-training strategies, and a task-specific pre-training scheme is designed to make the multi-context modeling suited for saliency detection. Furthermore, recently proposed contemporary deep models in the ImageNet Image Classification Challenge are tested, and their effectiveness in saliency detection are investigated. 
Our approach is extensively evaluated on five public datasets, and experimental results show significant and consistent improvements over the state-of-the-art methods.", "title": "" }, { "docid": "e58036f93195603cb7dc7265b9adeb25", "text": "Pseudomonas aeruginosa thrives in many aqueous environments and is an opportunistic pathogen that can cause both acute and chronic infections. Environmental conditions and host defenses cause differing stresses on the bacteria, and to survive in vastly different environments, P. aeruginosa must be able to adapt to its surroundings. One strategy for bacterial adaptation is to self-encapsulate with matrix material, primarily composed of secreted extracellular polysaccharides. P. aeruginosa has the genetic capacity to produce at least three secreted polysaccharides; alginate, Psl, and Pel. These polysaccharides differ in chemical structure and in their biosynthetic mechanisms. Since alginate is often associated with chronic pulmonary infections, its biosynthetic pathway is the best characterized. However, alginate is only produced by a subset of P. aeruginosa strains. Most environmental and other clinical isolates secrete either Pel or Psl. Little information is available on the biosynthesis of these polysaccharides. Here, we review the literature on the alginate biosynthetic pathway, with emphasis on recent findings describing the structure of alginate biosynthetic proteins. This information combined with the characterization of the domain architecture of proteins encoded on the Psl and Pel operons allowed us to make predictive models for the biosynthesis of these two polysaccharides. The results indicate that alginate and Pel share certain features, including some biosynthetic proteins with structurally or functionally similar properties. In contrast, Psl biosynthesis resembles the EPS/CPS capsular biosynthesis pathway of Escherichia coli, where the Psl pentameric subunits are assembled in association with an isoprenoid lipid carrier. These models and the environmental cues that cause the cells to produce predominantly one polysaccharide over the others are subjects of current investigation.", "title": "" }, { "docid": "518b96236ffa2ce0413a0e01d280937a", "text": "In this paper, we propose a low-rank representation with symmetric constraint (LRRSC) method for robust subspace clustering. Given a collection of data points approximately drawn from multiple subspaces, the proposed technique can simultaneously recover the dimension and members of each subspace. LRRSC extends the original low-rank representation algorithm by integrating a symmetric constraint into the low-rankness property of high-dimensional data representation. The symmetric low-rank representation, which preserves the subspace structures of high-dimensional data, guarantees weight consistency for each pair of data points so that highly correlated data points of subspaces are represented together. Moreover, it can be efficiently calculated by solving a convex optimization problem. We provide a rigorous proof for minimizing the nuclear-norm regularized least square problem with a symmetric constraint. The affinity matrix for spectral clustering can be obtained by further exploiting the angular information of the principal directions of the symmetric low-rank representation. This is a critical step towards evaluating the memberships between data points. 
Experimental results on benchmark databases demonstrate the effectiveness and robustness of LRRSC compared with several state-of-the-art subspace clustering algorithms.", "title": "" }, { "docid": "d4678cdbc3963b44a905947be836d53d", "text": "A multimodal network encodes relationships between the same set of nodes in multiple settings, and network alignment is a powerful tool for transferring information and insight between a pair of networks. We propose a method for multimodal network alignment that computes a matrix which indicates the alignment, but produces the result as a low-rank factorization directly. We then propose new methods to compute approximate maximum weight matchings of low-rank matrices to produce an alignment. We evaluate our approach by applying it on synthetic networks and use it to de-anonymize a multimodal transportation network.", "title": "" }, { "docid": "ed39d4d541eb261e41a4f000347b954b", "text": "In metazoans, gamma-tubulin acts within two main complexes, gamma-tubulin small complexes (gamma-TuSCs) and gamma-tubulin ring complexes (gamma-TuRCs). In higher eukaryotes, it is assumed that microtubule nucleation at the centrosome depends on gamma-TuRCs, but the role of gamma-TuRC components remains undefined. For the first time, we analyzed the function of all four gamma-TuRC-specific subunits in Drosophila melanogaster: Dgrip75, Dgrip128, Dgrip163, and Dgp71WD. Grip-motif proteins, but not Dgp71WD, appear to be required for gamma-TuRC assembly. Individual depletion of gamma-TuRC components, in cultured cells and in vivo, induces mitotic delay and abnormal spindles. Surprisingly, gamma-TuSCs are recruited to the centrosomes. These defects are less severe than those resulting from the inhibition of gamma-TuSC components and do not appear critical for viability. Simultaneous cosilencing of all gamma-TuRC proteins leads to stronger phenotypes and partial recruitment of gamma-TuSC. In conclusion, gamma-TuRCs are required for assembly of fully functional spindles, but we suggest that gamma-TuSC could be targeted to the centrosomes, which is where basic microtubule assembly activities are maintained.", "title": "" }, { "docid": "eb0672f019c82dfe0614b39d3e89be2e", "text": "The support of medical decisions comes from several sources. These include individual physician experience, pathophysiological constructs, pivotal clinical trials, qualitative reviews of the literature, and, increasingly, meta-analyses. Historically, the first of these four sources of knowledge largely informed medical and dental decision makers. Meta-analysis came on the scene around the 1970s and has received much attention. What is meta-analysis? It is the process of combining the quantitative results of separate (but similar) studies by means of formal statistical methods. Statistically, the purpose is to increase the precision with which the treatment effect of an intervention can be estimated. Stated in another way, one can say that meta-analysis combines the results of several studies with the purpose of addressing a set of related research hypotheses. The underlying studies can come in the form of published literature, raw data from individual clinical studies, or summary statistics in reports or abstracts. More broadly, a meta-analysis arises from a systematic review. There are three major components to a systematic review and meta-analysis. The systematic review starts with the formulation of the research question and hypotheses. 
Clinical or substantive insight about the particular domain of research often identifies not only the unmet investigative needs, but helps prepare for the systematic review by defining the necessary initial parameters. These include the hypotheses, endpoints, important covariates, and exposures or treatments of interest. Like any basic or clinical research endeavor, a prospectively defined and clear study plan enhances the expected utility and applicability of the final results for ultimately influencing practice or policy. After this foundational preparation, the second component, a systematic review, commences. The systematic review proceeds with an explicit and reproducible protocol to locate and evaluate the available data. The collection, abstraction, and compilation of the data follow a more rigorous and prospectively defined objective process. The definitions, structure, and methodologies of the underlying studies must be critically appraised. Hence, both “the content” and “the infrastructure” of the underlying data are analyzed, evaluated, and systematically recorded. Unlike an informal review of the literature, this systematic disciplined approach is intended to reduce the potential for subjectivity or bias in the subsequent findings. Typically, a literature search of an online database is the starting point for gathering the data. The most common sources are MEDLINE (United States Library of Medicine). Overview, Strengths, and Limitations of Systematic Reviews and Meta-Analyses", "title": "" }, { "docid": "4f1949af3455bd5741e731a9a60ecdf1", "text": "BACKGROUND\nGuava leaf tea (GLT), exhibiting a diversity of medicinal bioactivities, has become a popularly consumed daily beverage. To improve the product quality, a new process was recommended to the Ser-Tou Farmers' Association (SFA), who began field production in 2005. The new process comprised simplified steps: one bud-two leaves were plucked at 3:00-6:00 am, in the early dawn period, followed by withering at ambient temperature (25-28 °C), rolling at 50 °C for 50-70 min, with or without fermentation, then drying at 45-50 °C for 70-90 min, and finally sorted.\n\n\nRESULTS\nThe product manufactured by this new process (named herein GLTSF) exhibited higher contents (in mg g(-1), based on dry ethyl acetate fraction/methanolic extract) of polyphenolics (417.9 ± 12.3) and flavonoids (452.5 ± 32.3) containing a compositional profile much simpler than previously found: total quercetins (190.3 ± 9.1), total myricetin (3.3 ± 0.9), total catechins (36.4 ± 5.3), gallic acid (8.8 ± 0.6), ellagic acid (39.1 ± 6.4) and tannins (2.5 ± 9.1).\n\n\nCONCLUSION\nWe have successfully developed a new process for manufacturing GLTSF with a unique polyphenolic profile. Such characteristic compositional distribution can be ascribed to the right harvesting hour in the early dawn and appropriate treatment process at low temperature, avoiding direct sunlight.", "title": "" }, { "docid": "83b50f380f500bf6e140b3178431f0c6", "text": "Leader election protocols are a fundamental building block for replicated distributed services. They ease the design of leader-based coordination protocols that tolerate failures. In partially synchronous systems, designing a leader election algorithm, that does not permit multiple leaders while the system is unstable, is a complex task. As a result many production systems use third-party distributed coordination services, such as ZooKeeper and Chubby, to provide a reliable leader election service.
However, adding a third-party service such as ZooKeeper to a distributed system incurs additional operational costs and complexity. ZooKeeper instances must be kept running on at least three machines to ensure its high availability. In this paper, we present a novel leader election protocol using NewSQL databases for partially synchronous systems, that ensures at most one leader at any given time. The leader election protocol uses the database as distributed shared memory. Our work enables distributed systems that already use NewSQL databases to save the operational overhead of managing an additional third-party service for leader election. Our main contribution is the design, implementation and validation of a practical leader election algorithm, based on NewSQL databases, that has performance comparable to a leader election implementation using a state-of-the-art distributed coordination service, ZooKeeper.", "title": "" }, { "docid": "509fa5630ed7e3e7bd914fb474da5071", "text": "Languages with rich type systems are beginning to employ a blend of type inference and type checking, so that the type inference engine is guided by programmer-supplied type annotations. In this paper we show, for the first time, how to combine the virtues of two well-established ideas: unification-based inference, and bidi-rectional propagation of type annotations. The result is a type system that conservatively extends Hindley-Milner, and yet supports both higher-rank types and impredicativity.", "title": "" }, { "docid": "5fc6b0e151762560c8f09d0fe6983ca2", "text": "The increasing popularity of wearable devices that continuously capture video, and the prevalence of third-party applications that utilize these feeds have resulted in a new threat to privacy. In many situations, sensitive objects/regions are maliciously (or accidentally) captured in a video frame by third-party applications. However, current solutions do not allow users to specify and enforce fine grained access control over video feeds.\n In this paper, we describe MarkIt, a computer vision based privacy marker framework, that allows users to specify and enforce fine grained access control over video feeds. We present two example privacy marker systems -- PrivateEye and WaveOff. We conclude with a discussion of the computer vision, privacy and systems challenges in building a comprehensive system for fine grained access control over video feeds.", "title": "" }, { "docid": "98f814584c555baa05a1292e7e14f45a", "text": "This paper presents two types of dual band (2.4 and 5.8 GHz) wearable planar dipole antennas, one printed on a conventional substrate and the other on a two-dimensional metamaterial surface (Electromagnetic Bandgap (EBG) structure). The operation of both antennas is investigated and compared under different bending conditions (in E and H-planes) around human arm and leg of different radii. A dual band, Electromagnetic Band Gap (EBG) structure on a wearable substrate is used as a high impedance surface to control the Specific Absorption Rate (SAR) as well as to improve the antenna gain up to 4.45 dBi. The EBG inspired antenna has reduced the SAR effects on human body to a safe level (< 2W/Kg). I.e. the SAR is reduced by 83.3% for lower band and 92.8% for higher band as compared to the conventional antenna. The proposed antenna can be used for wearable applications with least health hazard to human body in Industrial, Scientific and Medical (ISM) band (2.4 GHz, 5.2 GHz) applications. 
The antennas on human body are simulated and analyzed in CST Microwave Studio (CST MWS).", "title": "" }, { "docid": "81aa60b514bb11efb9e137b8d13b92e8", "text": "Linguistic creativity is a marriage of form and content in which each works together to convey our meanings with concision, resonance and wit. Though form clearly influences and shapes our content, the most deft formal trickery cannot compensate for a lack of real insight. Before computers can be truly creative with language, we must first imbue them with the ability to formulate meanings that are worthy of creative expression. This is especially true of computer-generated poetry. If readers are to recognize a poetic turn-of-phrase as more than a superficial manipulation of words, they must perceive and connect with the meanings and the intent behind the words. So it is not enough for a computer to merely generate poem-shaped texts; poems must be driven by conceits that build an affective worldview. This paper describes a conceit-driven approach to computational poetry, in which metaphors and blends are generated for a given topic and affective slant. Subtle inferences drawn from these metaphors and blends can then drive the process of poetry generation. In the same vein, we consider the problem of generating witty insights from the banal truisms of common-sense knowledge bases. Ode to a Keatsian Turn Poetic licence is much more than a licence to frill. Indeed, it is not so much a licence as a contract, one that allows a speaker to subvert the norms of both language and nature in exchange for communicating real insights about some relevant state of affairs. Of course, poetry has norms and conventions of its own, and these lend poems a range of recognizably “poetic” formal characteristics. When used effectively, formal devices such as alliteration, rhyme and cadence can mold our meanings into resonant and incisive forms. However, even the most poetic devices are just empty frills when used only to disguise the absence of real insight. Computer models of poem generation must model more than the frills of poetry, and must instead make these formal devices serve the larger goal of meaning creation. Nonetheless, is often said that we “eat with our eyes”, so that the stylish presentation of food can subtly influence our sense of taste. So it is with poetry: a pleasing form can do more than enhance our recall and comprehension of a meaning – it can also suggest a lasting and profound truth. Experiments by McGlone & Tofighbakhsh (1999, 2000) lend empirical support to this so-called Keats heuristic, the intuitive belief – named for Keats’ memorable line “Beauty is truth, truth beauty” – that a meaning which is rendered in an aesthetically-pleasing form is much more likely to be perceived as truthful than if it is rendered in a less poetic form. McGlone & Tofighbakhsh demonstrated this effect by searching a book of proverbs for uncommon aphorisms with internal rhyme – such as “woes unite foes” – and by using synonym substitution to generate non-rhyming (and thus less poetic) variants such as “troubles unite enemies”. While no significant differences were observed in subjects’ ease of comprehension for rhyming/non-rhyming forms, subjects did show a marked tendency to view the rhyming variants as more truthful expressions of the human condition than the corresponding non-rhyming forms. So a well-polished poetic form can lend even a modestly interesting observation the lustre of a profound insight. 
An automated approach to poetry generation can exploit this symbiosis of form and content in a number of useful ways. It might harvest interesting perspectives on a given topic from a text corpus, or it might search its stores of commonsense knowledge for modest insights to render in immodest poetic forms. We describe here a system that combines both of these approaches for meaningful poetry generation. As shown in the sections to follow, this system – named Stereotrope – uses corpus analysis to generate affective metaphors for a topic on which it is asked to wax poetic. Stereotrope can be asked to view a topic from a particular affective stance (e.g., view love negatively) or to elaborate on a familiar metaphor (e.g. love is a prison). In doing so, Stereotrope takes account of the feelings that different metaphors are likely to engender in an audience. These metaphors are further integrated to yield tight conceptual blends, which may in turn highlight emergent nuances of a viewpoint that are worthy of poetic expression (see Lakoff and Turner, 1989). Stereotrope uses a knowledge-base of conceptual norms to anchor its understanding of these metaphors and blends. While these norms are the stuff of banal clichés and stereotypes, such as that dogs chase cats and cops eat donuts. we also show how Stereotrope finds and exploits corpus evidence to recast these banalities as witty, incisive and poetic insights. Mutual Knowledge: Norms and Stereotypes Samuel Johnson opined that “Knowledge is of two kinds. We know a subject ourselves, or we know where we can find information upon it.” Traditional approaches to the modelling of metaphor and other figurative devices have typically sought to imbue computers with the former (Fass, 1997). More recently, however, the latter kind has gained traction, with the use of the Web and text corpora to source large amounts of shallow knowledge as it is needed (e.g., Veale & Hao 2007a,b; Shutova 2010; Veale & Li, 2011). But the kind of knowledge demanded by knowledgehungry phenomena such as metaphor and blending is very different to the specialist “book” knowledge so beloved of Johnson. These demand knowledge of the quotidian world that we all tacitly share but rarely articulate in words, not even in the thoughtful definitions of Johnson’s dictionary. Similes open a rare window onto our shared expectations of the world. Thus, the as-as-similes “as hot as an oven”, “as dry as sand” and “as tough as leather” illuminate the expected properties of these objects, while the like-similes “crying like a baby”, “singing like an angel” and “swearing like a sailor” reflect intuitons of how these familiar entities are tacitly expected to behave. Veale & Hao (2007a,b) thus harvest large numbers of as-as-similes from the Web to build a rich stereotypical model of familiar ideas and their salient properties, while Özbal & Stock (2012) apply a similar approach on a smaller scale using Google’s query completion service. Fishelov (1992) argues convincingly that poetic and non-poetic similes are crafted from the same words and ideas. Poetic conceits use familiar ideas in non-obvious combinations, often with the aim of creating semantic tension. The simile-based model used here thus harvests almost 10,000 familiar stereotypes (drawing on a range of ~8,000 features) from both as-as and like-similes. Poems construct affective conceits, but as shown in Veale (2012b), the features of a stereotype can be affectively partitioned as needed into distinct pleasant and unpleasant perspectives. 
We are thus confident that a stereotype-based model of common-sense knowledge is equal to the task of generating and elaborating affective conceits for a poem. A stereotype-based model of common-sense knowledge requires both features and relations, with the latter showing how stereotypes relate to each other. It is not enough then to know that cops are tough and gritty, or that donuts are sweet and soft; our stereotypes of each should include the cliché that cops eat donuts, just as dogs chew bones and cats cough up furballs. Following Veale & Li (2011), we acquire inter-stereotype relationships from the Web, not by mining similes but by mining questions. As in Özbal & Stock (2012), we target query completions from a popular search service (Google), which offers a smaller, public proxy for a larger, zealously-guarded search query log. We harvest questions of the form “Why do Xs <relation> Ys”, and assume that since each relationship is presupposed by the question (so “why do bikers wear leathers” presupposes that everyone knows that bikers wear leathers), the triple of subject/relation/object captures a widely-held norm. In this way we harvest over 40,000 such norms from the Web. Generating Metaphors, N-Gram Style! The Google n-grams (Brants & Franz, 2006) is a rich source of popular metaphors of the form Target is Source, such as “politicians are crooks”, “Apple is a cult”, “racism is a disease” and “Steve Jobs is a god”. Let src(T) denote the set of stereotypes that are commonly used to describe a topic T, where commonality is defined as the presence of the corresponding metaphor in the Google n-grams. To find metaphors for proper-named entities, we also analyse n-grams of the form stereotype First [Middle] Last, such as “tyrant Adolf Hitler” and “boss Bill Gates”. Thus, e.g.: src(racism) = {problem, disease, joke, sin, poison, crime, ideology, weapon} src(Hitler) = {monster, criminal, tyrant, idiot, madman, vegetarian, racist, ...} Let typical(T) denote the set of properties and behaviors harvested for T from Web similes (see previous section), and let srcTypical(T) denote the aggregate set of properties and behaviors ascribable to T via the metaphors in src(T): (1) srcTypical (T) = M∈src(T) typical(M) We can generate conceits for a topic T by considering not just obvious metaphors for T, but metaphors of metaphors: (2) conceits(T) = src(T) ∪ M∈src(T) src(M) The features evoked by the conceit T as M are given by: (3) salient (T,M) = [srcTypical(T) ∪ typical(T)]", "title": "" }, { "docid": "7000ea96562204dfe2c0c23f7cdb6544", "text": "In this paper, the dynamic modeling of a doubly-fed induction generator-based wind turbine connected to infinite bus (SMIB) system, is carried out in detail. In most of the analysis, the DFIG stator transients and network transients are neglected. In this paper the interfacing problems while considering stator transients and network transients in the modeling of SMIB system are resolved by connecting a resistor across the DFIG terminals. The effect of simplification of shaft system on the controller gains is also discussed. 
In addition, case studies are presented to demonstrate the effect of mechanical parameters and controller gains on system stability when accounting the two-mass shaft model for the drive train system.", "title": "" }, { "docid": "97f3ac1c69b518436c908ffecfffbd18", "text": "The study presented in this paper examines the fit of total quality management (TQM) practices in mediating the relationship between organization strategy and organization performance. By examining TQM in relation to organization strategy, the study seeks to advance the understanding of TQM in a broader context. It also resolves some controversies that appear in the literature concerning the relationship between TQM and differentiation and cost leadership strategies as well as quality and innovation performance. The empirical data for this study was drawn from a survey of 194 middle/senior managers from Australian firms. The analysis was conducted using structural equation modeling (SEM) technique by examining two competing models that represent full and partial mediation. The findings indicate that TQM is positively and significantly related to differentiation strategy, and it only partially mediates the relationship between differentiation strategy and three performance measures (product quality, product innovation, and process innovation). The implication is that TQM needs to be complemented by other resources to more effectively realize the strategy in achieving a high level of performance, particularly innovation. 2004 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "8c46f24d8e710c5fb4e25be76fc5b060", "text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm3 which shows a measured impedance matching band of 840–1150MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz meeting and exceeding the target worldwide RFID UHF band.", "title": "" }, { "docid": "f1c5f6f2bdff251e91df1dbd1e2302b2", "text": "In this paper, mathematical models for permutation flow shop scheduling and job shop scheduling problems are proposed. The first problem is based on a mixed integer programming model. As the problem is NP-complete, this model can only be used for smaller instances where an optimal solution can be computed. For large instances, another model is proposed which is suitable for solving the problem by stochastic heuristic methods. For the job shop scheduling problem, a mathematical model and its main representation schemes are presented. Keywords—Flow shop, job shop, mixed integer model, representation scheme.", "title": "" }, { "docid": "8f1d7499280f94b92044822c1dd4e59d", "text": "WORK-LIFE BALANCE means bringing work, whether done on the job or at home, and leisure time into balance to live life to its fullest. 
It doesn’t mean that you spend half of your life working and half of it playing; instead, it means balancing the two to achieve harmony in physical, emotional, and spiritual health. In today’s economy, can nurses achieve work-life balance? Although doing so may be difficult, the consequences to our health can be enormous if we don’t try. This article describes some of the stresses faced by nurses and tips for attaining a healthy balance of work and leisure.", "title": "" }, { "docid": "a144b5969c30808f0314218248c48ed6", "text": "A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for marginal log-likelihood fits observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence of joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.", "title": "" }, { "docid": "de5fd8ae40a2d078101d5bb1859f689b", "text": "The number and variety of mobile multicast applications are growing at an unprecedented and unanticipated pace. Mobile network providers are in front of a dramatic increase in multicast traffic load, and this growth is forecasted to continue in fifth-generation (5G) networks. The major challenges come from the fact that multicast traffic not only targets groups of end-user devices; it also involves machine-type communications (MTC) for the Internet of Things (IoT). The increase in the MTC load, predicted for 5G, calls into question the effectiveness of the current multimedia broadcast multicast service (MBMS). The aim of this paper is to provide a survey of 5G challenges in the view of effective management of multicast applications, and to identify how to enhance the mobile network architecture to enable multicast applications in future 5G scenarios. By accounting for the presence of both human and machine-related traffic, strengths and weaknesses of the state-of-the-art achievements in multicasting are critically analyzed to provide guidelines for future research on 5G networks and more conscious design choices.", "title": "" } ]
scidocsrr
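The record that ends here pairs one query with its positive passages and a pool of negatives, which is all that is needed to score a retrieval baseline against the dump. The following sketch is illustrative only and not part of the dataset: it ranks each record's pooled passages by plain token overlap with the query and reports the mean reciprocal rank of the first positive, again assuming a JSON Lines file with the fields shown above and a placeholder file name.

```python
import json
import re

def tokens(text):
    # Lowercased alphanumeric tokens; a deliberately crude lexical representation.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def overlap_score(query, passage):
    q, p = tokens(query), tokens(passage)
    return len(q & p) / (len(q) or 1)

def mean_reciprocal_rank(path):
    """Token-overlap baseline: rank each record's pooled passages against its
    query and average the reciprocal rank of the best-ranked positive."""
    reciprocal_ranks = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            pool = [(p["text"], 1) for p in record["positive_passages"]]
            pool += [(p["text"], 0) for p in record["negative_passages"]]
            ranked = sorted(pool, key=lambda item: overlap_score(record["query"], item[0]), reverse=True)
            first_hit = next(rank for rank, (_text, label) in enumerate(ranked, start=1) if label == 1)
            reciprocal_ranks.append(1.0 / first_hit)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

if __name__ == "__main__":
    print(mean_reciprocal_rank("scidocsrr.jsonl"))  # file name is a placeholder
```

A bag-of-words overlap like this is only a floor; it is the kind of number a learned retriever trained on the (query, passage, label) pairs above would be expected to beat.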
1433b929b171815ba51b87a2f3459e9b
Automatic video description generation via LSTM with joint two-stream encoding
[ { "docid": "4f58d355a60eb61b1c2ee71a457cf5fe", "text": "Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).", "title": "" }, { "docid": "9734f4395c306763e6cc5bf13b0ca961", "text": "Generating descriptions for videos has many applications including assisting blind people and human-robot interaction. The recent advances in image captioning as well as the release of large-scale movie description datasets such as MPII-MD [28] allow to study this task in more depth. Many of the proposed methods for image captioning rely on pre-trained object classifier CNNs and Long-Short Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the challenging setting of movie description. In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these visual classifiers we learn how to generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD dataset. We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task.", "title": "" }, { "docid": "7ebff2391401cef25b27d510675e9acd", "text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. 
We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data. © 2003 Kobus Barnard, Pinar Duygulu, David Forsyth, Nando de Freitas, David Blei and Michael Jordan.", "title": "" }, { "docid": "cd45dd9d63c85bb0b23ccb4a8814a159", "text": "Parameter set learned using all WMT12 data (Callison-Burch et al., 2012): • 100,000 binary rankings covering 8 language directions. • Restrict scoring for all languages to exact and paraphrase matching. Parameters encode human preferences that generalize across languages: • Prefer recall over precision. • Prefer word choice over word order. • Prefer correct translations of content words over function words. • Prefer exact matches over paraphrase matches, while still giving significant credit to paraphrases.", "title": "" } ]
[ { "docid": "af6b26efef62f3017a0eccc5d2ae3c33", "text": "Universal, intelligent, and multifunctional devices controlling power distribution and measurement will become the enabling technology of the Smart Grid ICT. In this paper, we report on a novel automation architecture which supports distributed multiagent intelligence, interoperability, and configurability and enables efficient simulation of distributed automation systems. The solution is based on the combination of IEC 61850 object-based modeling and interoperable communication with IEC 61499 function block executable specification. Using the developed simulation environment, we demonstrate the possibility of multiagent control to achieve self-healing grid through collaborative fault location and power restoration.", "title": "" }, { "docid": "4761b8398018e4a15a1d67a127dd657d", "text": "The increasing popularity of social networks, such as Facebook and Orkut, has raised several privacy concerns. Traditional ways of safeguarding privacy of personal information by hiding sensitive attributes are no longer adequate. Research shows that probabilistic classification techniques can effectively infer such private information. The disclosed sensitive information of friends, group affiliations and even participation in activities, such as tagging and commenting, are considered background knowledge in this process. In this paper, we present a privacy protection tool, called Privometer, that measures the amount of sensitive information leakage in a user profile and suggests self-sanitization actions to regulate the amount of leakage. In contrast to previous research, where inference techniques use publicly available profile information, we consider an augmented model where a potentially malicious application installed in the user's friend profiles can access substantially more information. In our model, merely hiding the sensitive information is not sufficient to protect the user privacy. We present an implementation of Privometer in Facebook.", "title": "" }, { "docid": "f8ecc204d84c239b9f3d544fd8d74a5c", "text": "Storyline detection from news articles aims at summarizing events described under a certain news topic and revealing how those events evolve over time. It is a difficult task because it requires first the detection of events from news articles published in different time periods and then the construction of storylines by linking events into coherent news stories. Moreover, each storyline has different hierarchical structures which are dependent across epochs. Existing approaches often ignore the dependency of hierarchical structures in storyline generation. In this paper, we propose an unsupervised Bayesian model, called dynamic storyline detection model, to extract structured representations and evolution patterns of storylines. The proposed model is evaluated on a large scale news corpus. Experimental results show that our proposed model outperforms several baseline approaches.", "title": "" }, { "docid": "d8b19c953cc66b6157b87da402dea98a", "text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. 
The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds-up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.", "title": "" }, { "docid": "285da3b342a3b3bd14fb14bca73914cd", "text": "This paper presents expressions for the waveforms and design equations to satisfy the ZVS/ZDS conditions in the class-E power amplifier, taking into account the MOSFET gate-to-drain linear parasitic capacitance and the drain-to-source nonlinear parasitic capacitance. Expressions are given for power output capability and power conversion efficiency. Design examples are presented along with the PSpice-simulation and experimental waveforms at 2.3 W output power and 4 MHz operating frequency. It is shown from the expressions that the slope of the voltage across the MOSFET gate-to-drain parasitic capacitance during the switch-off state affects the switch-voltage waveform. Therefore, it is necessary to consider the MOSFET gate-to-drain capacitance for achieving the class-E ZVS/ZDS conditions. As a result, the power output capability and the power conversion efficiency are also affected by the MOSFET gate-to-drain capacitance. The waveforms obtained from PSpice simulations and circuit experiments showed the quantitative agreements with the theoretical predictions, which verify the expressions given in this paper.", "title": "" }, { "docid": "175551435f1a4c73110b79e01306412f", "text": "The development of MEMS actuators is rapidly evolving and continuously new progress in terms of efficiency, power and force output is reported. Pneumatic and hydraulic are an interesting class of microactuators that are easily overlooked. Despite the 20 years of research, and hundreds of publications on this topic, these actuators are only popular in microfluidic systems. In other MEMS applications, pneumatic and hydraulic actuators are rare in comparison with electrostatic, thermal or piezo-electric actuators. However, several studies have shown that hydraulic and pneumatic actuators deliver among the highest force and power densities at microscale. It is believed that this asset is particularly important in modern industrial and medical microsystems, and therefore, pneumatic and hydraulic actuators could start playing an increasingly important role. This paper shows an in-depth overview of the developments in this field ranging from the classic inflatable membrane actuators to more complex piston–cylinder and drag-based microdevices. (Some figures in this article are in colour only in the electronic version)", "title": "" }, { "docid": "1675d99203da64eab8f9722b77edaab5", "text": "Estimation of the semantic relatedness between biomedical concepts has utility for many informatics applications. Automated methods fall into two broad categories: methods based on distributional statistics drawn from text corpora, and methods based on the structure of existing knowledge resources. In the former case, taxonomic structure is disregarded. In the latter, semantically relevant empirical information is not considered. 
In this paper, we present a method that retrofits the context vector representation of MeSH terms by using additional linkage information from UMLS/MeSH hierarchy such that linked concepts have similar vector representations. We evaluated the method relative to previously published physician and coder’s ratings on sets of MeSH terms. Our experimental results demonstrate that the retrofitted word vector measures obtain a higher correlation with physician judgments. The results also demonstrate a clear improvement on the correlation with experts’ ratings from the retrofitted vector representation in comparison to the vector representation without retrofitting.", "title": "" }, { "docid": "47e84cacb4db05a30bedfc0731dd2717", "text": "Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). The SX127× family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies to support Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes the possibility to combine the Internet Protocol, i.e. IPv6, into LoRaWAN™, so that it can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.", "title": "" }, { "docid": "c78a4446be38b8fff2a949cba30a8b65", "text": "This paper will derive the Black-Scholes pricing model of a European option by calculating the expected value of the option. We will assume that the stock price is log-normally distributed and that the universe is riskneutral. Then, using Ito’s Lemma, we will justify the use of the risk-neutral rate in these initial calculations. Finally, we will prove put-call parity in order to price European put options, and extend the concepts of the Black-Scholes formula to value an option with pricing barriers.", "title": "" }, { "docid": "c5443c3bdfed74fd643e7b6c53a70ccc", "text": "Background\nAbsorbable suture suspension (Silhouette InstaLift, Sinclair Pharma, Irvine, CA) is a novel, minimally invasive system that utilizes a specially manufactured synthetic suture to help address the issues of facial aging, while minimizing the risks associated with historic thread lifting modalities.\n\n\nObjectives\nThe purpose of the study was to assess the safety, efficacy, and patient satisfaction of the absorbable suture suspension system in regards to facial rejuvenation and midface volume enhancement.\n\n\nMethods\nThe first 100 treated patients who underwent absorbable suture suspension, by the senior author, were critically evaluated. 
Subjects completed anonymous surveys evaluating their experience with the new modality.\n\n\nResults\nSurvey results indicate that absorbable suture suspension is a tolerable (96%) and manageable (89%) treatment that improves age-related changes (83%), which was found to be in concordance with our critical review.\n\n\nConclusions\nAbsorbable suture suspension generates high patient satisfaction by nonsurgically lifting mid and lower face and neck skin and has the potential to influence numerous facets of aesthetic medicine. The study provides a greater understanding concerning patient selection, suture trajectory, and possible adjuvant therapies.\n\n\nLevel of Evidence 4", "title": "" }, { "docid": "246866da7509b2a8a2bda734a664de9c", "text": "In this paper we present an approach to procedural game content generation that focuses on a gameplay loops formal language (GLFL). In fact, during an iterative game design process, game designers suggest modifications that often require high development costs. The proposed language and its operational semantics reduce the gap between game designers' requirements and game developers' needs, thereby enhancing video game productivity. Using the gameplay loops concept for game content generation offers a low-cost solution to adjust game challenges, objectives and rewards in video games. A pilot experiment has been conducted to study the impact of this approach on game development.", "title": "" }, { "docid": "b776b58f6f78e77c81605133c6e4edce", "text": "The phase response of noisy speech has largely been ignored, but recent research shows the importance of phase for perceptual speech quality. A few phase enhancement approaches have been developed. These systems, however, require a separate algorithm for enhancing the magnitude response. In this paper, we present a novel framework for performing monaural speech separation in the complex domain. We show that much structure is exhibited in the real and imaginary components of the short-time Fourier transform, making the complex domain appropriate for supervised estimation. Consequently, we define the complex ideal ratio mask (cIRM) that jointly enhances the magnitude and phase of noisy speech. We then employ a single deep neural network to estimate both the real and imaginary components of the cIRM. The evaluation results show that complex ratio masking yields high quality speech enhancement, and outperforms related methods that operate in the magnitude domain or separately enhance magnitude and phase.", "title": "" }, { "docid": "4783e35e54d0c7f555015427cbdc011d", "text": "The language of the deaf and dumb, which uses body parts to convey the message, is known as sign language. Here, we present a study on converting speech into sign language used for conversation. In this area there are many developed methods to recognize the alphabets and numerals of ISL (Indian Sign Language). There are various approaches for recognition of ISL, and we have done a comparative study between them [1].", "title": "" }, { "docid": "2ed36e909f52e139b5fd907436e80443", "text": "It is difficult to draw sweeping general conclusions about the blastogenesis of CT, principally because so few thoroughly studied cases are reported. It is to be hoped that methods such as painstaking gross or electronic dissection will increase the number of well-documented cases. Nevertheless, the following conclusions can be proposed: 1.
Most CT can be classified into a few main anatomic types (or paradigms), and there are also rare transitional types that show gradation between the main types. 2. Most CT have two full notochordal axes (Fig. 5); the ventral organs induced along these axes may be severely disorientated, malformed, or aplastic in the process of being arranged within one body. Reported anatomic types of CT represent those notochordal arrangements that are compatible with reasonably complete embryogenesis. New ventro-lateral axes are formed in many types of CT because of space constriction in the ventral zones. The new structures represent areas of \"mutual recognition and organization\" rather than \"fusion\" (Fig. 17). 3. Orientations of the pairs of axes in the embryonic disc can be deduced from the resulting anatomy. Except for dicephalus, the axes are not side by side. Notochords are usually \"end-on\" or ventro-ventral in orientation (Fig. 5). 4. A single gastrulation event or only partial duplicated gastrulation event seems to occur in dicephalics, despite a full double notochord. 5. The anatomy of diprosopus requires further clarification, particularly in cases with complete crania rather than anencephaly-equivalent. Diprosopus CT offer the best opportunity to study the effects of true forking of the notochord, if this actually occurs. 6. In cephalothoracopagus, thoracopagus, and ischiopagus, remarkably complete new body forms are constructed at right angles to the notochordal axes. The extent of expression of viscera in these types depends on the degree of noncongruity of their ventro-ventral axes (Figs. 4, 11, 15b). 7. Some organs and tissues fail to develop (interaction aplasia) because of conflicting migrational pathways or abnormal concentrations of morphogens in and around the neoaxes. 8. Where the cardiovascular system is discordantly expressed in dicephalus and thoracopagus twins, the right heart is more severely malformed, depending on the degree of interaction of the two embryonic septa transversa. 9. The septum transversum provides mesenchymal components to the heart and liver; the epithelial components (derived from the foregut[s]) may vary in number from the number of mesenchymal septa transversa contributing to the liver of the CT embryo. (ABSTRACT TRUNCATED AT 400 WORDS)", "title": "" }, { "docid": "33e45b66cca92f15270500c32a1c0b94", "text": "We study a dataset of billions of program binary files that appeared on 100 million computers over the course of 12 months, discovering that 94% of these files were present on a single machine. Though malware polymorphism is one cause for the large number of singleton files, additional factors also contribute to polymorphism, given that the ratio of benign to malicious singleton files is 80:1. The huge number of benign singletons makes it challenging to reliably identify the minority of malicious singletons. We present a large-scale study of the properties, characteristics, and distribution of benign and malicious singleton files. We leverage the insights from this study to build a classifier based purely on static features to identify 92% of the remaining malicious singletons at a 1.4% false positive rate, despite heavy use of obfuscation and packing techniques by most malicious singleton files that we make no attempt to de-obfuscate.
Finally, we demonstrate robustness of our classifier to important classes of automated evasion attacks.", "title": "" }, { "docid": "9b17dd1fc2c7082fa8daecd850fab91c", "text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was built around a microcontroller that was designed as an embedded controller. It has a database of the orientation angles of the horizontal axle; therefore it has no sensor input signal and functions as an open-loop control system. Combining the above-mentioned characteristics in one system makes the tracker a new technique of the active type. It is also a rotational robot with 1 degree of freedom.", "title": "" }, { "docid": "a02fb872137fe7bc125af746ba814849", "text": "23% of the total global burden of disease is attributable to disorders in people aged 60 years and older. Although the proportion of the burden arising from older people (≥60 years) is highest in high-income regions, disability-adjusted life years (DALYs) per head are 40% higher in low-income and middle-income regions, accounted for by the increased burden per head of population arising from cardiovascular diseases, and sensory, respiratory, and infectious disorders. The leading contributors to disease burden in older people are cardiovascular diseases (30·3% of the total burden in people aged 60 years and older), malignant neoplasms (15·1%), chronic respiratory diseases (9·5%), musculoskeletal diseases (7·5%), and neurological and mental disorders (6·6%). A substantial and increased proportion of morbidity and mortality due to chronic disease occurs in older people. Primary prevention in adults aged younger than 60 years will improve health in successive cohorts of older people, but much of the potential to reduce disease burden will come from more effective primary, secondary, and tertiary prevention targeting older people. Obstacles include misplaced global health priorities, ageism, the poor preparedness of health systems to deliver age-appropriate care for chronic diseases, and the complexity of integrating care for complex multimorbidities. Although population ageing is driving the worldwide epidemic of chronic diseases, substantial untapped potential exists to modify the relation between chronological age and health. This objective is especially important for the most age-dependent disorders (ie, dementia, stroke, chronic obstructive pulmonary disease, and vision impairment), for which the burden of disease arises more from disability than from mortality, and for which long-term care costs outweigh health expenditure. The societal cost of these disorders is enormous.", "title": "" }, { "docid": "afae66e9ff49274bbb546cd68490e5e4", "text": "Question-Answering Bulletin Boards (QABB), such as Yahoo! Answers and Windows Live QnA, have recently been gaining popularity. Communications on QABB connect users, and the overall connections can be regarded as a social network. If the evolution of social networks can be predicted, it is quite useful for encouraging communications among users. This paper describes an improved method for predicting links based on weighted proximity measures of social networks. The method is based on an assumption that proximities between nodes can be estimated better by using both graph proximity measures and the weights of existing links in a social network. In order to show the effectiveness of our method, the data of Yahoo! Chiebukuro (Japanese Yahoo! Answers) are used for our experiments.
The results show that our method outperforms previous approaches, especially when target social networks are sufficiently dense.", "title": "" }, { "docid": "6d13952afa196a6a77f227e1cc9f43bd", "text": "Spreadsheets contain valuable data on many topics, but they are difficult to integrate with other sources. Converting spreadsheet data to the relational model would allow relational integration tools to be used, but using manual methods to do this requires large amounts of work for each integration candidate. Automatic data extraction would be useful but it is very challenging: spreadsheet designs generally requires human knowledge to understand the metadata being described. Even if it is possible to obtain this metadata information automatically, a single mistake can yield an output relation with a huge number of incorrect tuples. We propose a two-phase semiautomatic system that extracts accurate relational metadata while minimizing user effort. Based on conditional random fields (CRFs), our system enables downstream spreadsheet integration applications. First, the automatic extractor uses hints from spreadsheets’ graphical style and recovered metadata to extract the spreadsheet data as accurately as possible. Second, the interactive repair component identifies similar regions in distinct spreadsheets scattered across large spreadsheet corpora, allowing a user’s single manual repair to be amortized over many possible extraction errors. Through our method of integrating the repair workflow into the extraction system, a human can obtain the accurate extraction with just 31% of the manual operations required by a standard classification based technique. We demonstrate and evaluate our system using two corpora: more than 1,000 spreadsheets published by the US government and more than 400,000 spreadsheets downloaded from the Web.", "title": "" }, { "docid": "1d3b2a5906d7db650db042db9ececed1", "text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.", "title": "" } ]
scidocsrr
f0026a7bfaadac338395d72b2bb48017
Design of an arm exoskeleton with scapula motion for shoulder rehabilitation
[ { "docid": "8eca353064d3b510b32c486e5f26c264", "text": "Theoretical control algorithms are developed and an experimental system is described for 6-dof kinesthetic force/moment feedback to a human operator from a remote system. The remote system is a common six-axis slave manipulator with a force/torque sensor, while the haptic interface is a unique, cable-driven, seven-axis, force/moment-reflecting exoskeleton. The exoskeleton is used for input when motion commands are sent to the robot and for output when force/moment wrenches of contact are reflected to the human operator. This system exists at Wright-Patterson AFB. The same techniques are applicable to a virtual environment with physics models and general haptic interfaces.", "title": "" } ]
[ { "docid": "305cfc6824ec7ac30a08ade2fff66c13", "text": "Psychological research has shown that 'peak-end' effects influence people's retrospective evaluation of hedonic and affective experience. Rather than objectively reviewing the total amount of pleasure or pain during an experience, people's evaluation is shaped by the most intense moment (the peak) and the final moment (end). We describe an experiment demonstrating that peak-end effects can influence a user's preference for interaction sequences that are objectively identical in their overall requirements. Participants were asked to choose which of two interactive sequences of five pages they preferred. Both sequences required setting a total of 25 sliders to target values, and differed only in the distribution of the sliders across the five pages -- with one sequence intended to induce positive peak-end effects, the other negative. The study found that manipulating only the peak or the end of the series did not significantly change preference, but that a combined manipulation of both peak and end did lead to significant differences in preference, even though all series had the same overall effort.", "title": "" }, { "docid": "1fe8f55e2d402c5fe03176cbf83a16c3", "text": "This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying sequences of binary logic operations, adding sequences of integers, and sorting sequences of real numbers. Overall performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. When applied to character-level language modelling on the Hutter prize Wikipedia dataset, ACT yields intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could be used to infer segment boundaries in sequence data.", "title": "" }, { "docid": "bb0ef8084d0693d7ea453cd321b13e0b", "text": "Distributed computation is increasingly important for deep learning, and many deep learning frameworks provide built-in support for distributed training. This results in a tight coupling between the neural network computation and the underlying distributed execution, which poses a challenge for the implementation of new communication and aggregation strategies. We argue that decoupling the deep learning framework from the distributed execution framework enables the flexible development of new communication and aggregation strategies. Furthermore, we argue that Ray [12] provides a flexible set of distributed computing primitives that, when used in conjunction with modern deep learning libraries, enable the implementation of a wide range of gradient aggregation strategies appropriate for different computing environments. We show how these primitives can be used to address common problems, and demonstrate the performance benefits empirically.", "title": "" }, { "docid": "e73de1e6f191fef625f75808d7fbfbb1", "text": "Colon cancer is one of the most prevalent diseases across the world. 
Numerous epidemiological studies indicate that diets rich in fruit, such as berries, provide significant health benefits against several types of cancer, including colon cancer. The anticancer activities of berries are attributed to their high content of phytochemicals and to their relevant antioxidant properties. In vitro and in vivo studies have demonstrated that berries and their bioactive components exert therapeutic and preventive effects against colon cancer by the suppression of inflammation, oxidative stress, proliferation and angiogenesis, through the modulation of multiple signaling pathways such as NF-κB, Wnt/β-catenin, PI3K/AKT/PKB/mTOR, and ERK/MAPK. Based on the exciting outcomes of preclinical studies, a few berries have advanced to the clinical phase. A limited number of human studies have shown that consumption of berries can prevent colorectal cancer, especially in patients at high risk (familial adenopolyposis or aberrant crypt foci, and inflammatory bowel diseases). In this review, we aim to highlight the findings of berries and their bioactive compounds in colon cancer from in vitro and in vivo studies, both on animals and humans. Thus, this review could be a useful step towards the next phase of berry research in colon cancer.", "title": "" }, { "docid": "d4345ee2baaa016fc38ba160e741b8ee", "text": "Unstructured data, such as news and blogs, can provide valuable insights into the financial world. We present the NewsStream portal, an intuitive and easy-to-use tool for news analytics, which supports interactive querying and visualizations of the documents at different levels of detail. It relies on a scalable architecture for real-time processing of a continuous stream of textual data, which incorporates data acquisition, cleaning, natural-language preprocessing and semantic annotation components. It has been running for over two years and collected over 18 million news articles and blog posts. The NewsStream portal can be used to answer the questions when, how often, in what context, and with what sentiment was a financial entity or term mentioned in a continuous stream of news and blogs, and therefore providing a complement to news aggregators. We illustrate some features of our system in four use cases: relations between the rating agencies and the PIIGS countries, reflection of financial news on credit default swap (CDS) prices, the emergence of the Bitcoin digital currency, and visualizing how the world is connected through news.", "title": "" }, { "docid": "63f20dd528d54066ed0f189e4c435fe7", "text": "In many specific laboratories the students use only a PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts, in the laboratory works. The hardware part of solution consists in an old plotter, an adapter board, a PLC and a HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be made very easy and can be used in university labs by students, so that they design and test their applications, from low to high complexity [1], [2].", "title": "" }, { "docid": "9423718cce01b45c688066f322b2c2aa", "text": "Currently there are many techniques based on information technology and communication aimed at assessing the performance of students. 
Data mining applied in the educational field (educational data mining) is one of the most popular techniques that are used to provide feedback with regard to the teaching-learning process. In recent years there have been a large number of open source applications in the area of educational data mining. These tools have facilitated the implementation of complex algorithms for identifying hidden patterns of information in academic databases. The main objective of this paper is to compare the technical features of three open source tools (RapidMiner, Knime and Weka) as used in educational data mining. These features have been compared in a practical case study on the academic records of three engineering programs in an Ecuadorian university. This comparison has allowed us to determine which tool is most effective in terms of predicting student performance.", "title": "" }, { "docid": "11ce5bca8989b3829683430abe2aee47", "text": "Android is the most popular smartphone operating system with a market share of 80%, but as a consequence, also the platform most targeted by malware. To deal with the increasing number of malicious Android apps in the wild, malware analysts typically rely on analysis tools to extract characteristic information about an app in an automated fashion. While the importance of such tools has been addressed by the research community, the resulting prototypes remain limited in terms of analysis capabilities and availability. In this paper we present ANDRUBIS, a fully automated, publicly available and comprehensive analysis system for Android apps. ANDRUBIS combines static analysis with dynamic analysis on both Dalvik VM and system level, as well as several stimulation techniques to increase code coverage. With ANDRUBIS, we collected a dataset of over 1,000,000 Android apps, including 40% malicious apps. This dataset allows us to discuss trends in malware behavior observed from apps dating back as far as 2010, as well as to present insights gained from operating ANDRUBIS as a publicly available service for the past two years.", "title": "" }, { "docid": "23384db962a1eb524f40ca52f4852b14", "text": "Recent developments in Artificial Intelligence (AI) have generated a steep interest from media and general public. As AI systems (e.g. robots, chatbots, avatars and other intelligent agents) are moving from being perceived as a tool to being perceived as autonomous agents and team-mates, an important focus of research and development is understanding the ethical impact of these systems. What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of their actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed, setup? Should such autonomous innovation in commercial systems even be allowed, and how should use and development be regulated? These and many other related questions are currently the focus of much attention. The way society and our systems will be able to deal with these questions will for a large part determine our level of trust, and ultimately, the impact of AI in society, and the existence of AI. 
Contrary to the frightening images of a dystopic future in media and popular fiction, where AI systems dominate the world and is mostly concerned with warfare, AI is already changing our daily lives mostly in ways that improve human health, safety, and productivity (Stone et al. 2016). This is the case in domain such as transportation; service robots; health-care; education; public safety and security; and entertainment. Nevertheless, and in order to ensure that those dystopic futures do not become reality, these systems must be introduced in ways that build trust and understanding, and respect human and civil rights. The need for ethical considerations in the development of intelligent interactive systems is becoming one of the main influential areas of research in the last few years, and has led to several initiatives both from researchers as from practitioners, including the IEEE initiative on Ethics of Autonomous Systems1, the Foundation for Responsible Robotics2, and the Partnership on AI3 amongst several others. As the capabilities for autonomous decision making grow, perhaps the most important issue to consider is the need to rethink responsibility (Dignum 2017). Whatever their level of autonomy and social awareness and their ability to learn, AI systems are artefacts, constructed by people to fulfil some goals. Theories, methods, algorithms are needed to integrate societal, legal and moral values into technological developments in AI, at all stages of development (analysis, design, construction, deployment and evaluation). These frameworks must deal both with the autonomic reasoning of the machine about such issues that we consider to have ethical impact, but most importantly, we need frameworks to guide design choices, to regulate the reaches of AI systems, to ensure proper data stewardship, and to help individuals determine their own involvement. Values are dependent on the socio-cultural context (Turiel 2002), and are often only implicit in deliberation processes, which means that methodologies are needed to elicit the values held by all the stakeholders, and to make these explicit can lead to better understanding and trust on artificial autonomous systems. That is, AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. Responsible Artificial Intelligence is about human responsibility for the development of intelligent systems along fundamental human principles and values, to ensure human flourishing and wellbeing in a sustainable world. In fact, Responsible AI is more than the ticking of some ethical ‘boxes’ in a report, or the development of some add-on features, or switch-off buttons in AI systems. Rather, responsibility is fundamental", "title": "" }, { "docid": "d66799a5d65a6f23527a33b124812ea6", "text": "Time series is an important class of temporal data objects and it can be easily obtained from scientific and financial applications, and anomaly detection for time series is becoming a hot research topic recently. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. In this paper, we have discussed the definition of anomaly and grouped existing techniques into different categories based on the underlying approach adopted by each technique. 
And for each category, we identify the advantages and disadvantages of the techniques in that category. Then, we provide a briefly discussion on the representative methods recently. Furthermore, we also point out some key issues about multivariate time series anomaly. Finally, some suggestions about anomaly detection are discussed and future research trends are also summarized, which is hopefully beneficial to the researchers of time series and other relative domains.", "title": "" }, { "docid": "45c1119cd76ed4f1470ac398caf6d192", "text": "UNLABELLED\nL-3,4-Dihydroxy-6-(18)F-fluoro-phenyl-alanine ((18)F-FDOPA) is an amino acid analog used to evaluate presynaptic dopaminergic neuronal function. Evaluation of tumor recurrence in neurooncology is another application. Here, the kinetics of (18)F-FDOPA in brain tumors were investigated.\n\n\nMETHODS\nA total of 37 patients underwent 45 studies; 10 had grade IV, 10 had grade III, and 13 had grade II brain tumors; 2 had metastases; and 2 had benign lesions. After (18)F-DOPA was administered at 1.5-5 MBq/kg, dynamic PET images were acquired for 75 min. Images were reconstructed with iterative algorithms, and corrections for attenuation and scatter were applied. Images representing venous structures, the striatum, and tumors were generated with factor analysis, and from these, input and output functions were derived with simple threshold techniques. Compartmental modeling was applied to estimate rate constants.\n\n\nRESULTS\nA 2-compartment model was able to describe (18)F-FDOPA kinetics in tumors and the cerebellum but not the striatum. A 3-compartment model with corrections for tissue blood volume, metabolites, and partial volume appeared to be superior for describing (18)F-FDOPA kinetics in tumors and the striatum. A significant correlation was found between influx rate constant K and late uptake (standardized uptake value from 65 to 75 min), whereas the correlation of K with early uptake was weak. High-grade tumors had significantly higher transport rate constant k(1), equilibrium distribution volumes, and influx rate constant K than did low-grade tumors (P < 0.01). Tumor uptake showed a maximum at about 15 min, whereas the striatum typically showed a plateau-shaped curve. Patlak graphical analysis did not provide accurate parameter estimates. Logan graphical analysis yielded reliable estimates of the distribution volume and could separate newly diagnosed high-grade tumors from low-grade tumors.\n\n\nCONCLUSION\nA 2-compartment model was able to describe (18)F-FDOPA kinetics in tumors in a first approximation. A 3-compartment model with corrections for metabolites and partial volume could adequately describe (18)F-FDOPA kinetics in tumors, the striatum, and the cerebellum. This model suggests that (18)F-FDOPA was transported but not trapped in tumors, unlike in the striatum. The shape of the uptake curve appeared to be related to tumor grade. After an early maximum, high-grade tumors had a steep descending branch, whereas low-grade tumors had a slowly declining curve, like that for the cerebellum but on a higher scale.", "title": "" }, { "docid": "403310053251e81cdad10addedb64c87", "text": "Many types of data are best analyzed by fitting a curve using nonlinear regression, and computer programs that perform these calculations are readily available. Like every scientific technique, however, a nonlinear regression program can produce misleading results when used inappropriately. 
This article reviews the use of nonlinear regression in a practical and nonmathematical manner to answer the following questions: Why is nonlinear regression superior to linear regression of transformed data? How does nonlinear regression differ from polynomial regression and cubic spline? How do nonlinear regression programs work? What choices must an investigator make before performing nonlinear regression? What do the final results mean? How can two sets of data or two fits to one set of data be compared? What problems can cause the results to be wrong? This review is designed to demystify nonlinear regression so that both its power and its limitations will be appreciated.", "title": "" }, { "docid": "32e1b7734ba1b26a6a27e0504db07643", "text": "Due to its high popularity and rich functionalities, the Portable Document Format (PDF) has become a major vector for malware propagation. To detect malicious PDF files, the first step is to extract and de-obfuscate JavaScript codes from the document, for which an effective technique is yet to be created. However, existing static methods cannot de-obfuscate JavaScript codes, existing dynamic methods bring high overhead, and existing hybrid methods introduce high false negatives. Therefore, in this paper, we present MPScan, a scanner that combines dynamic JavaScript de-obfuscation and static malware detection. By hooking the Adobe Reader's native JavaScript engine, JavaScript source code and op-code can be extracted on the fly after the source code is parsed and then executed. We also perform a multilevel analysis on the resulting JavaScript strings and op-code to detect malware. Our evaluation shows that regardless of obfuscation techniques, MPScan can effectively de-obfuscate and detect 98% of malicious PDF samples.", "title": "" }, { "docid": "4f287c788c7e95bf350a998650ff6221", "text": "The wireless sensor network has become an emerging technology due to its wide range of applications in object tracking and monitoring, military commands, smart homes, forest fire control, surveillance, etc. A wireless sensor network consists of thousands of miniature devices called sensors, but as it uses wireless media for communication, security is a major issue. There are a number of attacks on wireless sensor networks, of which the selective forwarding attack is one of the most harmful. This paper describes the selective forwarding attack and the detection techniques against selective forwarding attacks that have been proposed by different researchers. In selective forwarding attacks, malicious nodes act like normal nodes and selectively drop packets. The selective forwarding attack is a serious threat in WSN. Identifying such attacks is very difficult and sometimes impossible. This paper also presents a qualitative analysis of detection techniques in tabular form. Keywords: wireless sensor network, attacks, selective forwarding attacks, malicious nodes.", "title": "" }, { "docid": "f066cb3e2fc5ee543e0cc76919b261eb", "text": "Eco-labels are part of a new wave of environmental policy that emphasizes information disclosure as a tool to induce environmentally friendly behavior by both firms and consumers. Little consensus exists as to whether eco-certified products are actually better than their conventional counterparts. This paper seeks to understand the link between eco-certification and product quality.
We use data from three leading wine rating publications (Wine Advocate, Wine Enthusiast, and Wine Spectator) to assess quality for 74,148 wines produced in California between 1998 and 2009. Our results indicate that eco-certification is associated with a statistically significant increase in wine quality rating.", "title": "" }, { "docid": "4d3b988de22e4630e1b1eff9e0d4551b", "text": "In this chapter we present a methodology for introducing and maintaining ontology based knowledge management applications into enterprises with a focus on Knowledge Processes and Knowledge Meta Processes. While the former process circles around the usage of ontologies, the latter process guides their initial set up. We illustrate our methodology by an example from a case study on skills management. The methodology serves as a scaffold for Part B “Ontology Engineering” of the handbook. It shows where more specific concerns of ontology engineering find their place and how they are related in the overall process.", "title": "" }, { "docid": "1444a4acc00c1d7d69a906f6e5f52a6d", "text": "The prevalence of obesity among children is high and is increasing. We know that obesity runs in families, with children of obese parents at greater risk of developing obesity than children of thin parents. Research on genetic factors in obesity has provided us with estimates of the proportion of the variance in a population accounted for by genetic factors. However, this research does not provide information regarding individual development. To design effective preventive interventions, research is needed to delineate how genetics and environmental factors interact in the etiology of childhood obesity. Addressing this question is especially challenging because parents provide both genes and environment for children. An enormous amount of learning about food and eating occurs during the transition from the exclusive milk diet of infancy to the omnivore's diet consumed by early childhood. This early learning is constrained by children's genetic predispositions, which include the unlearned preference for sweet tastes, salty tastes, and the rejection of sour and bitter tastes. Children also are predisposed to reject new foods and to learn associations between foods' flavors and the postingestive consequences of eating. Evidence suggests that children can respond to the energy density of the diet and that although intake at individual meals is erratic, 24-hour energy intake is relatively well regulated. There are individual differences in the regulation of energy intake as early as the preschool period. These individual differences in self-regulation are associated with differences in child-feeding practices and with children's adiposity. This suggests that child-feeding practices have the potential to affect children's energy balance via altering patterns of intake. Initial evidence indicates that imposition of stringent parental controls can potentiate preferences for high-fat, energy-dense foods, limit children's acceptance of a variety of foods, and disrupt children's regulation of energy intake by altering children's responsiveness to internal cues of hunger and satiety. This can occur when well-intended but concerned parents assume that children need help in determining what, when, and how much to eat and when parents impose child-feeding practices that provide children with few opportunities for self-control. 
Implications of these findings for preventive interventions are discussed.", "title": "" }, { "docid": "ff50d07261681dcc210f01593ad2c109", "text": "A mathematical model of the system composed of two sensors, the semicircular canal and the sacculus, is suggested. The model is described by three lines of blocks, each line of which has the following structure: a biomechanical block, a mechanoelectrical transduction mechanism, and a block describing the hair cell ionic currents and membrane potential dynamics. The response of this system to various stimuli (head rotation under gravity and falling) is investigated. Identification of the model parameters was done with the experimental data obtained for the axolotl (Ambystoma tigrinum) at the Institute of Physiology, Autonomous University of Puebla, Mexico. Comparative analysis of the semicircular canal and sacculus membrane potentials is presented.", "title": "" }, { "docid": "23d7eb4d414e4323c44121040c3b2295", "text": "BACKGROUND\nThe use of clinical decision support systems to facilitate the practice of evidence-based medicine promises to substantially improve health care quality.\n\n\nOBJECTIVE\nTo describe, on the basis of the proceedings of the Evidence and Decision Support track at the 2000 AMIA Spring Symposium, the research and policy challenges for capturing research and practice-based evidence in machine-interpretable repositories, and to present recommendations for accelerating the development and adoption of clinical decision support systems for evidence-based medicine.\n\n\nRESULTS\nThe recommendations fall into five broad areas--capture literature-based and practice-based evidence in machine--interpretable knowledge bases; develop maintainable technical and methodological foundations for computer-based decision support; evaluate the clinical effects and costs of clinical decision support systems and the ways clinical decision support systems affect and are affected by professional and organizational practices; identify and disseminate best practices for work flow-sensitive implementations of clinical decision support systems; and establish public policies that provide incentives for implementing clinical decision support systems to improve health care quality.\n\n\nCONCLUSIONS\nAlthough the promise of clinical decision support system-facilitated evidence-based medicine is strong, substantial work remains to be done to realize the potential benefits.", "title": "" } ]
scidocsrr
624e607dbd27503e328cfd000f7b9ac3
A Novel Variable Reluctance Resolver with Nonoverlapping Tooth–Coil Windings
[ { "docid": "94cb308e7b39071db4eda05c5ff16d95", "text": "A resolver generates a pair of signals proportional to the sine and cosine of the angular position of its shaft. A new low-cost method for converting the amplitudes of these sine/cosine transducer signals into a measure of the input angle without using lookup tables is proposed. The new method takes advantage of the components used to operate the resolver, the excitation (carrier) signal in particular. This is a feedforward method based on comparing the amplitudes of the resolver signals to those of the excitation signal together with another shifted by pi/2. A simple method is then used to estimate the shaft angle through this comparison technique. The poor precision of comparison of the signals around their highly nonlinear peak regions is avoided by using a simple technique that relies only on the alternating pseudolinear segments of the signals. This results in a better overall accuracy of the converter. Beside simplicity of implementation, the proposed scheme offers the advantage of robustness to amplitude fluctuation of the transducer excitation signal.", "title": "" }, { "docid": "b40b81e25501b08a07c64f68c851f4a6", "text": "Variable reluctance (VR) resolver is widely used in traction motor for battery electric vehicle as well as hybrid electric vehicle as a rotor position sensor. VR resolver generates absolute position signal by using resolver-to-digital converter (RDC) in order to deliver exact position of permanent magnets in a rotor of traction motor to motor controller. This paper deals with fault diagnosis of VR resolver by using co-simulation analysis with RDC for position angle detection. As fault conditions, eccentricity of VR resolver, short circuit condition of excitation coil and output signal coils, and material problem of silicon steel in a view point of permeability are considered. 2D FEM is used for the output signal waveforms of SIN, COS and these waveforms are converted into absolute position angle by using the algorithm of RDC. For the verification of proposed analysis results, experiment on fault conditions was conducted and compared with simulation ones.", "title": "" } ]
[ { "docid": "e7230519f0bd45b70c1cbd42f09cb9e8", "text": "Environmental isolates belonging to the genus Acidovorax play a crucial role in degrading a wide range of pollutants. Studies on Acidovorax are currently limited for many species due to the lack of genetic tools. Here, we described the use of the replicon from a small, cryptic plasmid indigenous to Acidovorx temperans strain CB2, to generate stably maintained shuttle vectors. In addition, we have developed a scarless gene knockout technique, as well as establishing green fluorescent protein (GFP) reporter and complementation systems. Taken collectively, these tools will improve genetic manipulations in the genus Acidovorax.", "title": "" }, { "docid": "2fbfe1fa8cda571a931b700cbb18f46e", "text": "A low-noise front-end and its controller are proposed for capacitive touch screen panels. The proposed front-end circuit based on a ΔΣ ADC uses differential sensing and integration scheme to maximize the input dynamic range. In addition, supply and internal reference voltage noise are effectively removed in the sensed touch signal. Furthermore, the demodulation process in front of the ΔΣ ADC provides the maximized oversampling ratio (OSR) so that the scan rate can be increased at the targeted resolution. The proposed IC is implemented in a mixed-mode 0.18-μm CMOS process. The measurement is performed on a bar-patterned 4.3-inch touch screen panel with 12 driving lines and 8 sensing channels. The report rate is 100 Hz, and SNR and spatial jitter are 54 dB and 0.11 mm, respectively. The chip area is 3 × 3 mm2 and total power consumption is 2.9 mW with 1.8-V and 3.3-V supply.", "title": "" }, { "docid": "8ae1ef032c0a949aa31b3ca8bc024cb5", "text": "Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge based assets and then introduces the knowledge asset map which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests to visualise the value pathways of knowledge assets before designing strategic key performance indicators which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy. Introduction In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. It is recognised as a durable and more sustainable strategic resource to acquire and maintain competitive advantages (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). 
For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company’s capabilities. Capabilities combine to The Emerald Research Register for this journal is available at The current issue and full text archive of this journal is available at www.emeraldinsight.com/researchregister www.emeraldinsight.com/1463-7154.htm The authors would like to thank, Göran Roos, Steven Pike, Oliver Gupta, as well as the two anonymous reviewers for their valuable comments which helped us to improve this paper. Intellectual capital", "title": "" }, { "docid": "d909528f98e49f8107bf0cee7a83bbfe", "text": "INTRODUCTION\nThe increasing use of cone-beam computed tomography in orthodontics has been coupled with heightened concern about the long-term risks of x-ray exposure in orthodontic populations. An industry response to this has been to offer low-exposure alternative scanning options in newer cone-beam computed tomography models.\n\n\nMETHODS\nEffective doses resulting from various combinations of field of view size and field location comparing child and adult anthropomorphic phantoms with the recently introduced i-CAT FLX cone-beam computed tomography unit (Imaging Sciences, Hatfield, Pa) were measured with optical stimulated dosimetry using previously validated protocols. Scan protocols included high resolution (360° rotation, 600 image frames, 120 kV[p], 5 mA, 7.4 seconds), standard (360°, 300 frames, 120 kV[p], 5 mA, 3.7 seconds), QuickScan (180°, 160 frames, 120 kV[p], 5 mA, 2 seconds), and QuickScan+ (180°, 160 frames, 90 kV[p], 3 mA, 2 seconds). Contrast-to-noise ratio was calculated as a quantitative measure of image quality for the various exposure options using the QUART DVT phantom.\n\n\nRESULTS\nChild phantom doses were on average 36% greater than adult phantom doses. QuickScan+ protocols resulted in significantly lower doses than standard protocols for the child (P = 0.0167) and adult (P = 0.0055) phantoms. The 13 × 16-cm cephalometric fields of view ranged from 11 to 85 μSv in the adult phantom and 18 to 120 μSv in the child phantom for the QuickScan+ and standard protocols, respectively. The contrast-to-noise ratio was reduced by approximately two thirds when comparing QuickScan+ with standard exposure parameters.\n\n\nCONCLUSIONS\nQuickScan+ effective doses are comparable with conventional panoramic examinations. Significant dose reductions are accompanied by significant reductions in image quality. However, this trade-off might be acceptable for certain diagnostic tasks such as interim assessment of treatment results.", "title": "" }, { "docid": "6f56fca8d3df57619866d9520f79e1a8", "text": "This paper explores how the remaining useful life (RUL) can be assessed for complex systems whose internal state variables are either inaccessible to sensors or hard to measure under operational conditions. Consequently, inference and estimation techniques need to be applied on indirect measurements, anticipated operational conditions, and historical data for which a Bayesian statistical approach is suitable. Models of electrochemical processes in the form of equivalent electric circuit parameters were combined with statistical models of state transitions, aging processes, and measurement fidelity in a formal framework. 
Relevance vector machines (RVMs) and several different particle filters (PFs) are examined for remaining life prediction and for providing uncertainty bounds. Results are shown on battery data.", "title": "" }, { "docid": "b32b16971f9dd1375785a85617b3bd2a", "text": "White matter hyperintensities (WMHs) in the brain are the consequence of cerebral small vessel disease, and can easily be detected on MRI. Over the past three decades, research has shown that the presence and extent of white matter hyperintense signals on MRI are important for clinical outcome, in terms of cognitive and functional impairment. Large, longitudinal population-based and hospital-based studies have confirmed a dose-dependent relationship between WMHs and clinical outcome, and have demonstrated a causal link between large confluent WMHs and dementia and disability. Adequate differential diagnostic assessment and management is of the utmost importance in any patient, but most notably those with incipient cognitive impairment. Novel imaging techniques such as diffusion tensor imaging might reveal subtle damage before it is visible on standard MRI. Even in Alzheimer disease, which is thought to be primarily caused by amyloid, vascular pathology, such as small vessel disease, may be of greater importance than amyloid itself in terms of influencing the disease course, especially in older individuals. Modification of risk factors for small vessel disease could be an important therapeutic goal, although evidence for effective interventions is still lacking. Here, we provide a timely Review on WMHs, including their relationship with cognitive decline and dementia.", "title": "" }, { "docid": "dfccff16f4600e8cc297296481e50b7b", "text": "Trust models have been recently suggested as an effective security mechanism for Wireless Sensor Networks (WSNs). Considerable research has been done on modeling trust. However, most current research work only takes communication behavior into account to calculate sensor nodes' trust value, which is not enough for trust evaluation due to the widespread malicious attacks. In this paper, we propose an Efficient Distributed Trust Model (EDTM) for WSNs. First, according to the number of packets received by sensor nodes, direct trust and recommendation trust are selectively calculated. Then, communication trust, energy trust and data trust are considered during the calculation of direct trust. Furthermore, trust reliability and familiarity are defined to improve the accuracy of recommendation trust. The proposed EDTM can evaluate trustworthiness of sensor nodes more precisely and prevent the security breaches more effectively. Simulation results show that EDTM outperforms other similar models, e.g., NBBTE trust model.", "title": "" }, { "docid": "3f206b161dc55aea204dda594127bf3d", "text": "A key challenge in fine-grained recognition is how to find and represent discriminative local regions. Recent attention models are capable of learning discriminative region localizers only from category labels with reinforcement learning. However, not utilizing any explicit part information, they are not able to accurately find multiple distinctive regions. In this work, we introduce an attribute-guided attention localization scheme where the local region localizers are learned under the guidance of part attribute descriptions. By designing a novel reward strategy, we are able to learn to locate regions that are spatially and semantically distinctive with reinforcement learning algorithm. 
The attribute labeling requirement of the scheme is more amenable than the accurate part location annotation required by traditional part-based fine-grained recognition methods. Experimental results on the CUB-200-2011 dataset [1] demonstrate the superiority of the proposed scheme on both fine-grained recognition and attribute recognition.", "title": "" },
 { "docid": "c4387f3c791acc54d0a0655221947c8b", "text": "An emerging Internet application, IPTV, has the potential to flood Internet access and backbone ISPs with massive amounts of new traffic. Although many architectures are possible for IPTV video distribution, several mesh-pull P2P architectures have been successfully deployed on the Internet. In order to gain insights into mesh-pull P2P IPTV systems and the traffic loads they place on ISPs, we have undertaken an in-depth measurement study of one of the most popular IPTV systems, namely, PPLive. We have developed a dedicated PPLive crawler, which enables us to study the global characteristics of the mesh-pull PPLive system. We have also collected extensive packet traces for various different measurement scenarios, including both campus access networks and residential access networks. The measurement results obtained through these platforms bring important insights into P2P IPTV systems. Specifically, our results show the following. 1) P2P IPTV users have the similar viewing behaviors as regular TV users. 2) During its session, a peer exchanges video data dynamically with a large number of peers. 3) A small set of super peers act as video proxy and contribute significantly to video data uploading. 4) Users in the measured P2P IPTV system still suffer from long start-up delays and playback lags, ranging from several seconds to a couple of minutes. Insights obtained in this study will be valuable for the development and deployment of future P2P IPTV systems.", "title": "" },
 { "docid": "52fd33335eb177f989ae1b754527327a", "text": "For robot tutors, autonomy and personalizations are important factors in order to engage users as well as to personalize the content and interaction according to the needs of individuals. This paper presents the Programming Cognitive Robot (ProCRob) software architecture to target personalized social robotics in two complementary ways. ProCRob supports the development and personalization of social robot applications by teachers and therapists without computer programming background. It also supports the development of autonomous robots which can adapt according to the human-robot interaction context. ProCRob is based on our previous research on autonomous robotics and has been developed since 2015 by a multi-disciplinary team of researchers from the fields of AI, Robotics and Psychology as well as artists and designers at the University of Luxembourg. ProCRob is currently being used and further developed for therapy of children with autism, and for encouraging rehabilitation activities in patients with post-stroke. This paper presents a summary of ProCRob and its application in autism.", "title": "" },
 { "docid": "5da804fa4c1474e27a1c91fcf5682e20", "text": "We present an overview of Candide, a system for automatic translation of French text to English text. Candide uses methods of information theory and statistics to develop a probability model of the translation process. 
This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. Introduction Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to English text. Our goal is to perform fully-automatic, high-quality text-to-text translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator's-assistant modes. Our approach is founded upon the statistical analysis of language. Our chief tools are the source-channel model of communication, parametric probability models of language and translation, and an assortment of numerical algorithms for training such models from examples. This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Candide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probability theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the ARPA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2. Statistical Translation Consider the problem of translating French text to English text. Given a French sentence f, we imagine that it was originally rendered as an equivalent English sentence e. To obtain the French, the English was transmitted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. (Figure 1: The Source-Channel Formalism of Translation. Here f is the French text to be translated, e is the putative original English rendering, and ê is the English translation.) This formalism can be exploited to yield French-to-English translations as follows. Let us write Pr(e | f) for the probability that e was the original English rendering of the French f. Given a French sentence f, the problem of automatic translation reduces to finding the English sentence that maximizes Pr(e | f). That is, we seek ê = argmax_e Pr(e | f). By virtue of Bayes' Theorem, we have ê = argmax_e Pr(e | f) = argmax_e Pr(f | e) Pr(e) (1). The term Pr(f | e) models the probability that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr(e) models the a priori probability that e was supplied as the channel input. We call this function the language model. Each of these factors, the translation model and the language model, independently produces a score for a candidate English translation e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. Candide selects as its translation the e that maximizes their product. This discussion begs two important questions. First, where do the models Pr(f | e) and Pr(e) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find ê? These questions are addressed in the next two sections. 2.1. Probability Models We begin with a brief detour into probability theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model better match some body of data. Let us write c for a body of data to be modeled, and θ for a vector of parameters. The quantity Pr_θ(c), computed according to some formula involving c and θ, is called the likelihood", "title": "" },
 { "docid": "a44264e4c382204606fdb140ab485617", "text": "Atrophoderma vermiculata is a rare genodermatosis with usual onset in childhood, characterized by a \"honey-combed\" reticular atrophy of the cheeks. The course is generally slow, with progressive worsening. We report successful treatment of 2 patients by means of the carbon dioxide and 585 nm pulsed dye lasers.", "title": "" },
 { "docid": "ac08bc7d30b03fcb5cbe9f6354235ccd", "text": "The type III secretion (T3S) pathway allows bacteria to inject effector proteins into the cytosol of target animal or plant cells. T3S systems evolved into seven families that were distributed among Gram-negative bacteria by horizontal gene transfer. There are probably a few hundred effectors interfering with control and signaling in eukaryotic cells and offering a wealth of new tools to cell biologists.", "title": "" },
 { "docid": "e96cf46cc99b3eff60d32f3feb8afc47", "text": "We present a field programmable gate array (FPGA) based implementation of the popular Viola-Jones face detection algorithm, which is an essential building block in many applications such as video surveillance and tracking. Our implementation is a complete system level hardware design described in a hardware description language and validated on the affordable DE2-115 evaluation board. Our primary objective is to study the achievable performance with a low-end FPGA chip based implementation. In addition, we release to the public domain the entire project. We hope that this will enable other researchers to easily replicate and compare their results to ours and that it will encourage and facilitate further research and educational ideas in the areas of image processing, computer vision, and advanced digital design and FPGA prototyping.", "title": "" },
 { "docid": "42d861f1b332db23e5dca67b6247828d", "text": "Information systems and intelligent knowledge processing are playing an increasing role in business, science and technology. Recently, advanced information systems have evolved to facilitate the co-evolution of human and information networks within communities. These advanced information systems use various paradigms including artificial intelligence, knowledge management, and neural science as well as conventional information processing paradigms.", "title": "" },
 { "docid": "db0581e9f46516ee1ed26937bbec515b", "text": "In this paper we address the problem of offline Arabic handwriting word recognition. 
Offline recognition of handwritten words is a difficult task due to the high variability and uncertainty of human writing. The majority of the recent systems are constrained by the size of the lexicon to deal with and the number of writers. In this paper, we propose an approach for multi-writer Arabic handwritten word recognition using multiple Bayesian networks. First, we cut the image into several blocks. For each block, we compute a vector of descriptors. Then, we use K-means to cluster the low-level features including Zernike and Hu moments. Finally, we apply four variants of Bayesian network classifiers (Naïve Bayes, Tree Augmented Naïve Bayes (TAN), Forest Augmented Naïve Bayes (FAN) and DBN (dynamic Bayesian network)) to classify the whole image of a Tunisian city name. The results demonstrate that FAN and DBN achieve good recognition rates.", "title": "" },
 { "docid": "6f6733c35f78b00b771cf7099c953954", "text": "This paper proposes an asymmetrical pulse width modulation (APWM) with frequency tracking control of full bridge series resonant inverter for induction heating application. In this method, APWM is used as power regulation, and phased locked loop (PLL) is used to attain zero-voltage-switching (ZVS) over a wide load range. The complete closed loop control model is obtained using small signal analysis. The validity of the proposed control is verified by simulation results.", "title": "" },
 { "docid": "5e0bcb6cf54879c65e9da7a08d97bc6b", "text": "The present study made an attempt to analyze the existing buying behaviour of Instant Food Products by individual households and to predict the demand for Instant Food Products of Hyderabad city in Andhra Pradesh. All the respondents were aware of pickles and Sambar masala but only 56.67 per cent of respondents were aware of Dosa/Idli mix. About 96.11 per cent consumers of Dosa/Idli mix and more than half of consumers of pickles and Sambar masala prepared their own. Low cost of home preparation and differences in tastes were the major reasons for non consumption, whereas ready availability and save time of preparation were the reasons for consuming Instant Food Products. Retail shops are the major source of information and source of purchase of Instant Food Products. The average monthly expenditure on Instant Food Products was found to be highest in higher income groups. The average per capita purchase and per capita expenditure on Instant Food Products had a positive relationship with income of households. High price and poor taste were the reasons for not purchasing particular brand whereas best quality, retailers influence and ready availability were considered for preferring particular brand of products by the consumers.", "title": "" },
 { "docid": "8bd367e82f7a5c046f6887c5edbf51c5", "text": "Internet of Things (IoT) is a fast-growing innovation that will greatly change the way humans live. It can be thought of as the next big step in Internet technology. What really enables IoT to be a possibility are the various technologies that build it up. The IoT architecture mainly requires two types of technologies: data acquisition technologies and networking technologies. Many technologies are currently present that aim to serve as components to the IoT paradigm. 
This paper aims to categorize the various technologies presently available that are commonly used by the Internet of Things.", "title": "" } ]
scidocsrr
62b8ef39d2ec05c9aee2b4445c1e5c4e
A Large-Displacement 3-DOF Flexure Parallel Mechanism with Decoupled Kinematics Structure
[ { "docid": "f7f90e224c71091cc3e6356ab1ec0ea5", "text": "A new two-degrees-of-freedom (2-DOF) compliant parallel micromanipulator (CPM) utilizing flexure joints has been proposed for two-dimensional (2-D) nanomanipulation in this paper. The system is developed by a careful design and proper selection of electrical and mechanical components. Based upon the developed PRB model, both the position and velocity kinematic modelings have been performed in details, and the CPM's workspace area is determined analytically in view of the physical constraints imposed by pizeo-actuators and flexure hinges. Moreover, in order to achieve a maximum workspace subjected to the given dexterity indices, kinematic optimization of the design parameters has been carried out, which leads to a manipulator satisfying the requirement of this work. Simulation results reveal that the designed CPM can perform a high dexterous manipulation within its workspace.", "title": "" } ]
[ { "docid": "816575ea7f7903784abba96180190ea3", "text": "The decision tree output of Quinlan's ID3 algorithm is one of its major weaknesses. Not only can it be incomprehensible and difficult to manipulate, but its use in expert systems frequently demands irrelevant information to be supplied. This report argues that the problem lies in the induction algorithm itself and can only be remedied by radically altering the underlying strategy. It describes a new algorithm, PRISM which, although based on ID3, uses a different induction strategy to induce rules which are modular, thus avoiding many of the problems associated with decision trees.", "title": "" }, { "docid": "59daeea2c602a1b1d64bae95185f9505", "text": "Traumatic brain injury (TBI) triggers endoplasmic reticulum (ER) stress and impairs autophagic clearance of damaged organelles and toxic macromolecules. In this study, we investigated the effects of the post-TBI administration of docosahexaenoic acid (DHA) on improving hippocampal autophagy flux and cognitive functions of rats. TBI was induced by cortical contusion injury in Sprague–Dawley rats, which received DHA (16 mg/kg in DMSO, intraperitoneal administration) or vehicle DMSO (1 ml/kg) with an initial dose within 15 min after the injury, followed by a daily dose for 3 or 7 days. First, RT-qPCR reveals that TBI induced a significant elevation in expression of autophagy-related genes in the hippocampus, including SQSTM1/p62 (sequestosome 1), lysosomal-associated membrane proteins 1 and 2 (Lamp1 and Lamp2), and cathepsin D (Ctsd). Upregulation of the corresponding autophagy-related proteins was detected by immunoblotting and immunostaining. In contrast, the DHA-treated rats did not exhibit the TBI-induced autophagy biogenesis and showed restored CTSD protein expression and activity. T2-weighted images and diffusion tensor imaging (DTI) of ex vivo brains showed that DHA reduced both gray matter and white matter damages in cortical and hippocampal tissues. DHA-treated animals performed better than the vehicle control group on the Morris water maze test. Taken together, these findings suggest that TBI triggers sustained stimulation of autophagy biogenesis, autophagy flux, and lysosomal functions in the hippocampus. Swift post-injury DHA administration restores hippocampal lysosomal biogenesis and function, demonstrating its therapeutic potential.", "title": "" }, { "docid": "3732f96144d7f28c88670dd63aff63a1", "text": "The problem of defining and classifying power system stability has been addressed by several previous CIGRE and IEEE Task Force reports. These earlier efforts, however, do not completely reflect current industry needs, experiences and understanding. In particular, the definitions are not precise and the classifications do not encompass all practical instability scenarios. This report developed by a Task Force, set up jointly by the CIGRE Study Committee 38 and the IEEE Power System Dynamic Performance Committee, addresses the issue of stability definition and classification in power systems from a fundamental viewpoint and closely examines the practical ramifications. The report aims to define power system stability more precisely, provide a systematic basis for its classification, and discuss linkages to related issues such as power system reliability and security.", "title": "" }, { "docid": "50d0b1e141bcea869352c9b96b0b2ad5", "text": "In this paper we present the features of a Question/Answering (Q/A) system that had unparalleled performance in the TREC-9 evaluations. 
We explain the accuracy of our system through the unique characteristics of its architecture: (1) usage of a wide-coverage answer type taxonomy; (2) repeated passage retrieval; (3) lexico-semantic feedback loops; (4) extraction of the answers based on machine learning techniques; and (5) answer caching. Experimental results show the effects of each feature on the overall performance of the Q/A system and lead to general conclusions about Q/A from large text collections.", "title": "" },
 { "docid": "5db123f7b584b268f908186c67d3edcb", "text": "From the point of view of a programmer, robopsychology is a synonym for the activity done by developers to implement their machine learning applications. This robopsychological approach raises some fundamental theoretical questions of machine learning. Our discussion of these questions is constrained to Turing machines. Alan Turing had given an algorithm (aka the Turing Machine) to describe algorithms. If it has been applied to describe itself then this brings us to Turing’s notion of the universal machine. In the present paper, we investigate algorithms to write algorithms. 
From a pedagogy point of view, this way of writing programs can be considered as a combination of learning by listening and learning by doing, because it is based on applying agent technology and machine learning. As the main result we introduce the problem of learning and then we show that it cannot easily be handled in reality; therefore, it is reasonable to use machine learning algorithms for learning Turing machines.", "title": "" },
 { "docid": "fc3aeb32f617f7a186d41d56b559a2aa", "text": "Existing neural relation extraction (NRE) models rely on distant supervision and suffer from wrong labeling problems. In this paper, we propose a novel adversarial training mechanism over instances for relation extraction to alleviate the noise issue. As compared with previous denoising methods, our proposed method can better discriminate those informative instances from noisy ones. Our method is also efficient and flexible to be applied to various NRE architectures. As shown in the experiments on a large-scale benchmark dataset in relation extraction, our denoising method can effectively filter out noisy instances and achieve significant improvements as compared with the state-of-the-art models.", "title": "" },
 { "docid": "66d5101d55595754add37e9e50952056", "text": "The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions. Cognitive neural prosthetics (CNPs): instruments that consist of an array of electrodes, a decoding algorithm, and an external device controlled by the processed cognitive signal. Decoding algorithms: computer algorithms that interpret neural signals for the purposes of understanding their function or for providing control signals to machines", "title": "" },
 { "docid": "b43c4d5d97120963a3ea84a01d029819", "text": "Research into the translation of the output of automatic speech recognition (ASR) systems is hindered by the dearth of datasets developed for that explicit purpose. For Spanish-English translation, in particular, most parallel data available exists only in vastly different domains and registers. 
In order to support research on cross-lingual speech applications, we introduce the Fisher and Callhome Spanish-English Speech Translation Corpus, supplementing existing LDC audio and transcripts with (a) ASR 1-best, lattice, and oracle output produced by the Kaldi recognition system and (b) English translations obtained on Amazon’s Mechanical Turk. The result is a four-way parallel dataset of Spanish audio, transcriptions, ASR lattices, and English translations of approximately 38 hours of speech, with defined training, development, and held-out test sets. We conduct baseline machine translation experiments using models trained on the provided training data, and validate the dataset by corroborating a number of known results in the field, including the utility of in-domain (information, conversational) training data, increased performance translating lattices (instead of recognizer 1-best output), and the relationship between word error rate and BLEU score.", "title": "" },
 { "docid": "1b347401820c826db444cc3580bde210", "text": "Utilization of Natural Fibers in Plastic Composites: Problems and Opportunities. Roger M. Rowell, Anand R. Sanadi, Daniel F. Caulfield and Rodney E. Jacobson, Forest Products Laboratory, USDA, One Gifford Pinchot Drive, Madison, WI 53705; Department of Forestry, 1630 Linden Drive, University of Wisconsin, WI 53706. Results suggest that agro-based fibers are a viable alternative to inorganic/material based reinforcing fibers in commodity fiber-thermoplastic composite materials as long as the right processing conditions are used and for applications where higher water absorption may not be so critical. These renewable fibers have low densities and high specific properties and their non-abrasive nature permits a high volume of filling in the composite. Kenaf fibers, for example, have excellent specific properties and have potential to be outstanding reinforcing fillers in plastics. In our experiments, several types of natural fibers were blended with polypropylene (PP) and then injection molded, with the fiber weight fractions varying to 60%. A compatibilizer or a coupling agent was used to improve the interaction and adhesion between the non-polar matrix and the polar lignocellulosic fibers. The specific tensile and flexural moduli of a 50% by weight (39% by volume) of kenaf-PP composites compares favorably with 40% by weight of glass fiber (19% by volume)-PP injection molded composites. Furthermore, preliminary results suggest that natural fiber-PP composites can be regrounded and recycled.", "title": "" },
 { "docid": "701ddde2a7ff66c6767a2978ce7293f2", "text": "Epigenetics is the study of heritable changes in gene expression that does not involve changes to the underlying DNA sequence, i.e. a change in phenotype not involved by a change in genotype. At least three main factors seem responsible for epigenetic change, including DNA methylation, histone modification and non-coding RNA, each one having the same property to affect the dynamic of the chromatin structure by acting on nucleosome positions. A nucleosome is a DNA-histone complex, where around 150 base pairs of double-stranded DNA is wrapped. The role of nucleosomes is to pack the DNA into the nucleus of the Eukaryote cells, to form the Chromatin. Nucleosome positioning plays an important role in gene regulation and several studies show that distinct DNA sequence features have been identified to be associated with nucleosome presence. 
Starting from this suggestion, the identification of nucleosomes on a genomic scale has been successfully performed by DNA sequence features representation and classical supervised classification methods such as Support Vector Machines, Logistic regression and so on. Taking into consideration the successful application of deep neural networks on several challenging classification problems, in this paper we want to study how deep learning networks can help in the identification of nucleosomes.", "title": "" },
 { "docid": "e4ce5d47a095fcdadbe5c16bb90445d4", "text": "Artificial neural network (ANN) has been widely applied in flood forecasting and got good results. However, it can still not go beyond one or two hidden layers for the problematic non-convex optimization. This paper proposes a deep learning approach by integrating stacked autoencoders (SAE) and back propagation neural networks (BPNN) for the prediction of stream flow, which simultaneously takes advantages of the powerful feature representation capability of SAE and superior predicting capacity of BPNN. To further improve the non-linearity simulation capability, we first classify all the data into several categories by the K-means clustering. Then, multiple SAE-BP modules are adopted to simulate their corresponding categories of data. The proposed approach is respectively compared with the support-vector-machine (SVM) model, the BP neural network model, the RBF neural network model and extreme learning machine (ELM) model. The experimental results show that the SAE-BP integrated algorithm performs much better than other benchmarks.", "title": "" },
 { "docid": "348f9c689c579cf07085b6e263c53ff5", "text": "Over recent years, interest has been growing in Bitcoin, an innovation which has the potential to play an important role in e-commerce and beyond. The aim of our paper is to provide a comprehensive empirical study of the payment and investment features of Bitcoin and their implications for the conduct of ecommerce. Since network externality theory suggests that the value of a network and its take-up are interlinked, we investigate both adoption and price formation. We discover that Bitcoin returns are driven primarily by its popularity, the sentiment expressed in newspaper reports on the cryptocurrency, and total number of transactions. The paper also reports on the first global survey of merchants who have adopted this technology and model the share of sales paid for with this alternative currency, using both ordinary and Tobit regressions. Our analysis examines how country, customer and company-specific characteristics interact with the proportion of sales attributed to Bitcoin. We find that company features, use of other payment methods, customers’ knowledge about Bitcoin, as well as the size of both the official and unofficial economy are significant determinants. The results presented allow a better understanding of the practical and theoretical ramifications of this innovation.", "title": "" },
 { "docid": "1c079b53b0967144a183f65a16e10158", "text": "Android has provided dynamic code loading (DCL) since API level one. DCL allows an app developer to load additional code at runtime. DCL raises numerous challenges with regards to security and accountability analysis of apps. 
While previous studies have investigated DCL on Android, in this paper we formulate and answer three critical questions that are missing from previous studies: (1) Where does the loaded code come from (remotely fetched or locally packaged), and who is the responsible entity to invoke its functionality? (2) In what ways is DCL utilized to harden mobile apps, specifically, application obfuscation? (3) What are the security risks and implications that can be found from DCL in off-the-shelf apps? We design and implement DYDROID, a system which uses both dynamic and static analysis to analyze dynamically loaded code. Dynamic analysis is used to automatically exercise apps, capture DCL behavior, and intercept the loaded code. Static analysis is used to investigate malicious behavior and privacy leakage in that dynamically loaded code. We have used DYDROID to analyze over 46K apps with little manual intervention, allowing us to conduct a large-scale measurement to investigate five aspects of DCL, such as source identification, malware detection, vulnerability analysis, obfuscation analysis, and privacy tracking analysis. We have several interesting findings. (1) 27 apps are found to violate the content policy of Google Play by executing code downloaded from remote servers. (2) We determine the distribution, pros/cons, and implications of several common obfuscation methods, including DEX encryption/loading. (3) DCL’s stealthiness enables it to be a channel to deploy malware, and we find 87 apps loading malicious binaries which are not detected by existing antivirus tools. (4) We found 14 apps that are vulnerable to code injection attacks due to dynamically loading code which is writable by other apps. (5) DCL is mainly used by third-party SDKs, meaning that app developers may not know what sort of sensitive functionality is injected into their apps.", "title": "" }, { "docid": "f5658fe48ecc31e72fbfbcb12f843a44", "text": "PURPOSE OF REVIEW\nThe current review discusses the integration of guideline and evidence-based palliative care into heart failure end-of-life (EOL) care.\n\n\nRECENT FINDINGS\nNorth American and European heart failure societies recommend the integration of palliative care into heart failure programs. Advance care planning, shared decision-making, routine measurement of symptoms and quality of life and specialist palliative care at heart failure EOL are identified as key components to an effective heart failure palliative care program. There is limited evidence to support the effectiveness of the individual elements. However, results from the palliative care in heart failure trial suggest an integrated heart failure palliative care program can significantly improve quality of life for heart failure patients at EOL.\n\n\nSUMMARY\nIntegration of a palliative approach to heart failure EOL care helps to ensure patients receive the care that is congruent with their values, wishes and preferences. Specialist palliative care referrals are limited to those who are truly at heart failure EOL.", "title": "" }, { "docid": "c88f3c3b6bf8ad80b20216caf1a7cad6", "text": "This study examined the effects of heavy resistance training on physiological acute exercise-induced fatigue (5 × 10 RM leg press) changes after two loading protocols with the same relative intensity (%) (5 × 10 RMRel) and the same absolute load (kg) (5 × 10 RMAbs) as in pretraining in men (n = 12). 
Exercise-induced neuromuscular (maximal strength and muscle power output), acute cytokine and hormonal adaptations (i.e., total and free testosterone, cortisol, growth hormone (GH), insulin-like growth factor-1 (IGF-1), IGF binding protein-3 (IGFBP-3), interleukin-1 receptor antagonist (IL-1ra), IL-1β, IL-6, and IL-10 and metabolic responses (i.e., blood lactate) were measured before and after exercise. The resistance training induced similar acute responses in serum cortisol concentration but increased responses in anabolic hormones of FT and GH, as well as inflammation-responsive cytokine IL-6 and the anti-inflammatory cytokine IL-10, when the same relative load was used. This response was balanced by a higher release of pro-inflammatory cytokines IL-1β and cytokine inhibitors (IL-1ra) when both the same relative and absolute load was used after training. This enhanced hormonal and cytokine response to strength exercise at a given relative exercise intensity after strength training occurred with greater accumulated fatigue and metabolic demand (i.e., blood lactate accumulation). The magnitude of metabolic demand or the fatigue experienced during the resistance exercise session influences the hormonal and cytokine response patterns. Similar relative intensities may elicit not only higher exercise-induced fatigue but also an increased acute hormonal and cytokine response during the initial phase of a resistance training period.", "title": "" }, { "docid": "f4535d47191caaa1e830e5d8fae6e1ba", "text": "Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first operate a preliminary candidate generation stage, towards -100% sensitivity at the cost of high FP levels (-40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.", "title": "" }, { "docid": "da9a6e165744245fd19ab788790c37c9", "text": "Worldwide medicinal use of cannabis is rapidly escalating, despite limited evidence of its efficacy from preclinical and clinical studies. Here we show that cannabidiol (CBD) effectively reduced seizures and autistic-like social deficits in a well-validated mouse genetic model of Dravet syndrome (DS), a severe childhood epilepsy disorder caused by loss-of-function mutations in the brain voltage-gated sodium channel NaV1.1. 
The duration and severity of thermally induced seizures and the frequency of spontaneous seizures were substantially decreased. Treatment with lower doses of CBD also improved autistic-like social interaction deficits in DS mice. Phenotypic rescue was associated with restoration of the excitability of inhibitory interneurons in the hippocampal dentate gyrus, an important area for seizure propagation. Reduced excitability of dentate granule neurons in response to strong depolarizing stimuli was also observed. The beneficial effects of CBD on inhibitory neurotransmission were mimicked and occluded by an antagonist of GPR55, suggesting that therapeutic effects of CBD are mediated through this lipid-activated G protein-coupled receptor. Our results provide critical preclinical evidence supporting treatment of epilepsy and autistic-like behaviors linked to DS with CBD. We also introduce antagonism of GPR55 as a potential therapeutic approach by illustrating its beneficial effects in DS mice. Our study provides essential preclinical evidence needed to build a sound scientific basis for increased medicinal use of CBD.", "title": "" },
 { "docid": "d6cb714b47b056e1aea8ef0682f4ae51", "text": "Artificial neural networks are being used with increasing frequency for high dimensional problems of regression or classification. This article provides a tutorial overview of neural networks, focusing on back propagation networks as a method for approximating nonlinear multivariable functions. We explain, from a statistician's vantage point, why neural networks might be attractive and how they compare to other modern regression techniques.", "title": "" } ]
scidocsrr
2a3d81dcfe9827429ff879c5242e12e5
Vital Sign Monitoring Through the Back Using an UWB Impulse Radar With Body Coupled Antennas
[ { "docid": "c70e11160c90bd67caa2294c499be711", "text": "The vital sign monitoring through Impulse Radio Ultra-Wide Band (IR-UWB) radar provides continuous assessment of a patient's respiration and heart rates in a non-invasive manner. In this paper, IR UWB radar is used for monitoring respiration and the human heart rate. The breathing and heart rate frequencies are extracted from the signal reflected from the human body. A Kalman filter is applied to reduce the measurement noise from the vital signal. An algorithm is presented to separate the heart rate signal from the breathing harmonics. An auto-correlation based technique is applied for detecting random body movements (RBM) during the measurement process. Experiments were performed in different scenarios in order to show the validity of the algorithm. The vital signs were estimated for the signal reflected from the chest, as well as from the back side of the body in different experiments. The results from both scenarios are compared for respiration and heartbeat estimation accuracy.", "title": "" } ]
[ { "docid": "1d7035cc5b85e13be6ff932d39740904", "text": "This paper investigates an application of mobile sensing: detection of potholes on roads. We describe a system and an associated algorithm to monitor the pothole conditions on the road. This system, that we call the Pothole Detection System, uses Accelerometer Sensor of Android smartphone for detection of potholes and GPS for plotting the location of potholes on Google Maps. Using a simple machine-learning approach, we show that we are able to identify the potholes from accelerometer data. The pothole detection algorithm detects the potholes in real-time. A runtime graph has been shown with the help of a charting software library ‘AChartEngine’. Accelerometer data and pothole data can be mailed to any email address in the form of a ‘.csv’ file. While designing the pothole detection algorithm we have assumed some threshold values on x-axis and z-axis. These threshold values are justified using a neural network technique which confirms an accuracy of 90%-95%. The neural network has been implemented using a machine learning framework available for Android called ‘Encog’. We evaluate our system on the outputs obtained using two, three and four wheelers. Keywords— Machine Learning, Context, Android, Neural Networks, Pothole, Sensor", "title": "" }, { "docid": "c55cf6c871a681cad112cb9c664a1928", "text": "Splitting of the behavioural activity phase has been found in nocturnal rodents with suprachiasmatic nucleus (SCN) coupling disorder. A similar phenomenon was observed in the sleep phase in the diurnal human discussed here, suggesting that there are so-called evening and morning oscillators in the SCN of humans. The present case suffered from bipolar disorder refractory to various treatments, and various circadian rhythm sleep disorders, such as delayed sleep phase, polyphasic sleep, separation of the sleep bout resembling splitting and circabidian rhythm (48 h), were found during prolonged depressive episodes with hypersomnia. Separation of sleep into evening and morning components and delayed sleep-offset (24.69-h cycle) developed when lowering and stopping the dose of aripiprazole (APZ). However, resumption of APZ improved these symptoms in 2 weeks, accompanied by improvement in the patient's depressive state. Administration of APZ may improve various circadian rhythm sleep disorders, as well as improve and prevent manic-depressive episodes, via augmentation of coupling in the SCN network.", "title": "" }, { "docid": "c83456247c28dd7824e9611f3c59167d", "text": "In this paper, we present a carry skip adder (CSKA) structure that has a higher speed yet lower energy consumption compared with the conventional one. The speed enhancement is achieved by applying concatenation and incrementation schemes to improve the efficiency of the conventional CSKA (Conv-CSKA) structure. In addition, instead of utilizing multiplexer logic, the proposed structure makes use of AND-OR-Invert (AOI) and OR-AND-Invert (OAI) compound gates for the skip logic. The structure may be realized with both fixed stage size and variable stage size styles, wherein the latter further improves the speed and energy parameters of the adder. Finally, a hybrid variable latency extension of the proposed structure, which lowers the power consumption without considerably impacting the speed, is presented. This extension utilizes a modified parallel structure for increasing the slack time, and hence, enabling further voltage reduction. 
The proposed structures are assessed by comparing their speed, power, and energy parameters with those of other adders using a 45-nm static CMOS technology for a wide range of supply voltages. The results that are obtained using HSPICE simulations reveal, on average, 44% and 38% improvements in the delay and energy, respectively, compared with those of the Conv-CSKA. In addition, the power-delay product was the lowest among the structures considered in this paper, while its energy-delay product was almost the same as that of the Kogge-Stone parallel prefix adder with considerably smaller area and power consumption. Simulations on the proposed hybrid variable latency CSKA reveal reduction in the power consumption compared with the latest works in this field while having a reasonably high speed.", "title": "" }, { "docid": "19443768282cf17805e70ac83288d303", "text": "Interactive narrative is a form of storytelling in which users affect a dramatic storyline through actions by assuming the role of characters in a virtual world. This extended abstract outlines the SCHEHERAZADE-IF system, which uses crowdsourcing and artificial intelligence to automatically construct text-based interactive narrative experiences.", "title": "" }, { "docid": "cace842a0c5507ae447e5009fb160592", "text": "UNLABELLED\nDue to the localized surface plasmon (LSP) effect induced by Ag nanoparticles inside black silicon, the optical absorption of black silicon is enhanced dramatically in near-infrared range (1,100 to 2,500 nm). The black silicon with Ag nanoparticles shows much higher absorption than black silicon fabricated by chemical etching or reactive ion etching over ultraviolet to near-infrared (UV-VIS-NIR, 250 to 2,500 nm). The maximum absorption even increased up to 93.6% in the NIR range (820 to 2,500 nm). The high absorption in NIR range makes LSP-enhanced black silicon a potential material used for NIR-sensitive optoelectronic device.\n\n\nPACS\n78.67.Bf; 78.30.Fs; 78.40.-q; 42.70.Gi.", "title": "" }, { "docid": "7db4066e2e6faabe0dfd815cd5b1d66e", "text": "The observed poor quality of graduates of some Nigerian Universities in recent times has been partly traced to inadequacies of the National University Admission Examination System. In this study an Artificial Neural Network (ANN) model, for predicting the likely performance of a candidate being considered for admission into the university was developed and tested. Various factors that may likely influence the performance of a student were identified. Such factors as ordinary level subjects’ scores and subjects’ combination, matriculation examination scores, age on admission, parental background, types and location of secondary school attended and gender, among others, were then used as input variables for the ANN model. A model based on the Multilayer Perceptron Topology was developed and trained using data spanning five generations of graduates from an Engineering Department of University of Ibadan, Nigeria’s first University. Test data evaluation shows that the ANN model is able to correctly predict the performance of more than 70% of prospective students. (", "title": "" }, { "docid": "f7d023abf0f651177497ae38d8494efc", "text": "Developing Question Answering systems has been one of the important research issues because it requires insights from a variety of disciplines, including, Artificial Intelligence, Information Retrieval, Information Extraction, Natural Language Processing, and Psychology. 
In this paper we realize a formal model for a lightweight semantic-based open domain yes/no Arabic question answering system based on paragraph retrieval (with variable length). We propose a constrained semantic representation using an explicit unification framework based on semantic similarities and query expansion (synonyms and antonyms), which frequently improves the precision of the system. Employing the passage retrieval system achieves a better precision by retrieving more paragraphs that contain relevant answers to the question; it significantly reduces the amount of text to be processed by the system.", "title": "" },
 { "docid": "db5157c6682f281fb0f8ad1285646042", "text": "There are currently very few practical methods for assessing the quality of resources or the reliability of other entities in the online environment. This makes it difficult to make decisions about which resources can be relied upon and which entities it is safe to interact with. Trust and reputation systems are aimed at solving this problem by enabling service consumers to reliably assess the quality of services and the reliability of entities before they decide to use a particular service or to interact with or depend on a given entity. Such systems should also allow serious service providers and online players to correctly represent the reliability of themselves and the quality of their services. In the case of reputation systems, the basic idea is to let parties rate each other, for example after the completion of a transaction, and use the aggregated ratings about a given party to derive its reputation score. In the case of trust systems, the basic idea is to analyse and combine paths and networks of trust relationships in order to derive measures of trustworthiness of specific nodes. Reputation scores and trust measures can assist other parties in deciding whether or not to transact with a given party in the future, and whether it is safe to depend on a given resource or entity. This represents an incentive for good behaviour and for offering reliable resources, which thereby tends to have a positive effect on the quality of online markets and communities. This chapter describes the background, current status and future trend of online trust and reputation systems.", "title": "" },
 { "docid": "b9a1883e48cc1651d887124a2dee3831", "text": "It is known that local filtering-based edge preserving smoothing techniques suffer from halo artifacts. In this paper, a weighted guided image filter (WGIF) is introduced by incorporating an edge-aware weighting into an existing guided image filter (GIF) to address the problem. The WGIF inherits advantages of both global and local smoothing filters in the sense that: 1) the complexity of the WGIF is O(N) for an image with N pixels, which is same as the GIF and 2) the WGIF can avoid halo artifacts like the existing global smoothing filters. The WGIF is applied for single image detail enhancement, single image haze removal, and fusion of differently exposed images. Experimental results show that the resultant algorithms produce images with better visual quality and at the same time halo artifacts can be reduced/avoided from appearing in the final images with negligible increment on running times.", "title": "" },
 { "docid": "2de8df231b5af77cfd141e26fb7a3ace", "text": "A significant challenge for the practical application of reinforcement learning in the real world is the need to specify an oracle reward function that correctly defines a task. 
Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door). Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function. In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a “prior” that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.", "title": "" }, { "docid": "e2c2cdb5245b73b7511c434c4901fff8", "text": "Adversarial machine learning in the context of image processing and related applications has received a large amount of attention. However, adversarial machine learning, especially adversarial deep learning, in the context of malware detection has received much less attention despite its apparent importance. In this paper, we present a framework for enhancing the robustness of Deep Neural Networks (DNNs) against adversarial malware samples, dubbed Hashing Transformation Deep Neural Networks (HashTran-DNN). The core idea is to use hash functions with a certain locality-preserving property to transform samples to enhance the robustness of DNNs in malware classification. The framework further uses a Denoising Auto-Encoder (DAE) regularizer to reconstruct the hash representations of samples, making the resulting DNN classifiers capable of attaining the locality information in the latent space. We experiment with two concrete instantiations of the HashTranDNN framework to classify Android malware. Experimental results show that four known attacks can render standard DNNs useless in classifying Android malware, that known defenses can at most defend three of the four attacks, and that HashTran-DNN can effectively defend against all of the four attacks.", "title": "" }, { "docid": "5cc1058a0c88ff15e2992a4d83fdbe3f", "text": "The paper presents a finite-element method-based design and analysis of interior permanent magnet synchronous motor with flux barriers (IPMSMFB). Various parameters of IPMSMFB rotor structure were taken into account at determination of a suitable rotor construction. On the basis of FEM analysis the rotor of IPMSMFB with three-flux barriers was built. Output torque capability and flux weakening performance of IPMSMFB were compared with performances of conventional interior permanent magnet synchronous motor (IPMSM), having the same rotor geometrical dimensions and the same stator construction. The predicted performance of conventional IPMSM and IPMSMFB was confirmed with the measurements over a wide-speed range of constant output power operation.", "title": "" }, { "docid": "af19c558ac6b5b286bc89634a1f05e26", "text": "The SIGIR 2016 workshop on Neural Information Retrieval (Neu-IR) took place on 21 July, 2016 in Pisa. 
The goal of the Neu-IR (pronounced \"New IR\") workshop was to serve as a forum for academic and industrial researchers, working at the intersection of information retrieval (IR) and machine learning, to present new work and early results, compare notes on neural network toolkits, share best practices, and discuss the main challenges facing this line of research. In total, 19 papers were presented, including oral and poster presentations. The workshop program also included a session on invited \"lightning talks\" to encourage participants to share personal insights and negative results with the community. The workshop was well-attended with more than 120 registrations.", "title": "" }, { "docid": "39a394f6c7f42f3a5e1451b0337584ed", "text": "Surveys throughout the world have shown consistently that persons over 65 are far less likely to be victims of crime than younger age groups. However, many elderly people are unduly fearful about crime which has an adverse effect on their quality of life. This Trends and Issues puts this matter into perspective, but also discusses the more covert phenomena of abuse and neglect of the elderly. Our senior citizens have earned the right to live in dignity and without fear: the community as a whole should contribute to this process. Duncan Chappell Director", "title": "" }, { "docid": "42f176b03faacad53ccef0b7573afdc4", "text": "Acquired upper extremity amputations beyond the finger can have substantial physical, psychological, social, and economic consequences for the patient. The hand surgeon is one of a team of specialists in the care of these patients, but the surgeon plays a critical role in the surgical management of these wounds. The execution of a successful amputation at each level of the limb allows maximum use of the residual extremity, with or without a prosthesis, and minimizes the known complications of these injuries. This article reviews current surgical options in performing and managing upper extremity amputations proximal to the finger.", "title": "" }, { "docid": "7347c844cdc0b7e4b365dafcdc9f720c", "text": "Recommender systems are widely used in online applications since they enable personalized service to the users. The underlying collaborative filtering techniques work on user’s data which are mostly privacy sensitive and can be misused by the service provider. To protect the privacy of the users, we propose to encrypt the privacy sensitive data and generate recommendations by processing them under encryption. With this approach, the service provider learns no information on any user’s preferences or the recommendations made. The proposed method is based on homomorphic encryption schemes and secure multiparty computation (MPC) techniques. The overhead of working in the encrypted domain is minimized by packing data as shown in the complexity analysis.", "title": "" }, { "docid": "545f41e1c94a3198e75801da4c39b0da", "text": "When attempting to improve the performance of a deep learning system, there are more or less three approaches one can take: the first is to improve the structure of the model, perhaps adding another layer, switching from simple recurrent units to LSTM cells [4], or–in the realm of NLP–taking advantage of syntactic parses (e.g. 
as in [13, et seq.]); another approach is to improve the initialization of the model, guaranteeing that the early-stage gradients have certain beneficial properties [3], or building in large amounts of sparsity [6], or taking advantage of principles of linear algebra [15]; the final approach is to try a more powerful learning algorithm, such as including a decaying sum over the previous gradients in the update [12], by dividing each parameter update by the L2 norm of the previous updates for that parameter [2], or even by foregoing first-order algorithms for more powerful but more computationally costly second order algorithms [9]. This paper has as its goal the third option—improving the quality of the final solution by using a faster, more powerful learning algorithm.", "title": "" }, { "docid": "8c80129507b138d1254e39acfa9300fc", "text": "Motivation\nText mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult.\n\n\nResults\nWe show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall.\n\n\nAvailability and implementation\nThe source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/ .\n\n\nContact\[email protected].", "title": "" }, { "docid": "55eb5594f05319c157d71361880f1983", "text": "Following the growing share of wind energy in electric power systems, several wind power forecasting techniques have been reported in the literature in recent years. In this paper, a wind power forecasting strategy composed of a feature selection component and a forecasting engine is proposed. The feature selection component applies an irrelevancy filter and a redundancy filter to the set of candidate inputs. The forecasting engine includes a new enhanced particle swarm optimization component and a hybrid neural network. The proposed wind power forecasting strategy is applied to real-life data from wind power producers in Alberta, Canada and Oklahoma, U.S. The presented numerical results demonstrate the efficiency of the proposed strategy, compared to some other existing wind power forecasting methods.", "title": "" }, { "docid": "d7538c23aa43edce6cfde8f2125fd3bb", "text": "We propose a holographic-laser-drawing volumetric display using a computer-generated hologram displayed on a liquid crystal spatial light modulator and multilayer fluorescent screen. 
The holographic-laser-drawing technique has enabled three things: (i) increasing the number of voxels of the volumetric graphics per unit time; (ii) increasing the total input energy to the volumetric display, because the maximum energy incident at a point in the multilayer fluorescent screen is limited by the damage threshold; (iii) controlling the size, shape and spatial position of voxels. In this paper, we demonstrated (i) and (ii). The multilayer fluorescent screen was newly developed to display colored voxels. The thin layer construction of the multilayer fluorescent screen minimized the axial length of the voxels. A two-color volumetric display with blue-green voxels and red voxels was demonstrated.", "title": "" } ]
scidocsrr
c184aa2b1b955610fe4340347cfe7c8a
Botnet Research Survey
[ { "docid": "b3b27246ed1ef97fb1994b8dbaf023f3", "text": "Malicious botnets are networks of compromised computers that are controlled remotely to perform large-scale distributed denial-of-service (DDoS) attacks, send spam, trojan and phishing emails, distribute pirated media or conduct other usually illegitimate activities. This paper describes a methodology to detect, track and characterize botnets on a large Tier-1 ISP network. The approach presented here differs from previous attempts to detect botnets by employing scalable non-intrusive algorithms that analyze vast amounts of summary traffic data collected on selected network links. Our botnet analysis is performed mostly on transport layer data and thus does not depend on particular application layer information. Our algorithms produce alerts with information about controllers. Alerts are followed up with analysis of application layer data, that indicates less than 2% false positive rates.", "title": "" } ]
[ { "docid": "270319820586068f09954ec9c358232f", "text": "Recent years have seen exciting developments in join algorithms. In 2008, Atserias, Grohe and Marx (henceforth AGM) proved a tight bound on the maximum result size of a full conjunctive query, given constraints on the input rel ation sizes. In 2012, Ngo, Porat, R «e and Rudra (henceforth NPRR) devised a join algorithm with worst-case running time proportional to the AGM bound [8]. Our commercial database system LogicBlox employs a novel join algorithm, leapfrog triejoin, which compared conspicuously well to the NPRR algorithm in preliminary benchmarks. This spurred us to analyze the complexity of leapfrog triejoin. In this pa per we establish that leapfrog triejoin is also worst-case o ptimal, up to a log factor, in the sense of NPRR. We improve on the results of NPRR by proving that leapfrog triejoin achieves worst-case optimality for finer-grained classes o f database instances, such as those defined by constraints on projection cardinalities. We show that NPRR is not worstcase optimal for such classes, giving a counterexamplewher e leapfrog triejoin runs inO(n log n) time and NPRR runs in Θ(n) time. On a practical note, leapfrog triejoin can be implemented using conventional data structures such as B-trees, and extends naturally to ∃1 queries. We believe our algorithm offers a useful addition to the existing toolbox o f join algorithms, being easy to absorb, simple to implement, and having a concise optimality proof.", "title": "" }, { "docid": "636076c522ea4ac91afbdc93d58fa287", "text": "Aspect-based opinion mining has attracted lots of attention today. In this thesis, we address the problem of product aspect rating prediction, where we would like to extract the product aspects, and predict aspect ratings simultaneously. Topic models have been widely adapted to jointly model aspects and sentiments, but existing models may not do the prediction task well due to their weakness in sentiment extraction. The sentiment topics usually do not have clear correspondence to commonly used ratings, and the model may fail to extract certain kinds of sentiments due to skewed data. To tackle this problem, we propose a sentiment-aligned topic model(SATM), where we incorporate two types of external knowledge: product-level overall rating distribution and word-level sentiment lexicon. Experiments on real dataset demonstrate that SATM is effective on product aspect rating prediction, and it achieves better performance compared to the existing approaches.", "title": "" }, { "docid": "43e8f35e57149d1441d8e75fa754549d", "text": "Software teams should follow a well defined goal and keep their work focused. Work fragmentation is bad for efficiency and quality. In this paper we empirically investigate the relationship between the fragmentation of developer contributions and the number of post-release failures. Our approach is to represent developer contributions with a developer-module network that we call contribution network. We use network centrality measures to measure the degree of fragmentation of developer contributions. Fragmentation is determined by the centrality of software modules in the contribution network. Our claim is that central software modules are more likely to be failure-prone than modules located in surrounding areas of the network. We analyze this hypothesis by exploring the network centrality of Microsoft Windows Vista binaries using several network centrality measures as well as linear and logistic regression analysis. 
In particular, we investigate which centrality measures are significant to predict the probability and number of post-release failures. Results of our experiments show that central modules are more failure-prone than modules located in surrounding areas of the network. Results further confirm that number of authors and number of commits are significant predictors for the probability of post-release failures. For predicting the number of post-release failures the closeness centrality measure is most significant.", "title": "" }, { "docid": "50283f1442d6e50ac6f8334ab992cbc6", "text": "The objective of entity identification is to determine the correspondence between object instances from more than one database. This paper examines the problem at the instance level assuming that schema level heterogeneity has been resolved a priori. Soundness and completeness are defined as the desired properties of any entity identification technique. To achieve soundness, a set of identity and distinctness rules are established for entities in the integrated world. We propose the use of extended key, which is the union of keys (and possibly other attributes) from the relations to be matched, and its corresponding identity rule, to determine the equivalence between tuples from relations which may not share any common key. Instance level functional dependencies (ILFD), a form of semantic constraint information about the real-world entities, are used to derive the missing extended key attribute values of a tuple.", "title": "" }, { "docid": "8b3ad3d48da22c529e65c26447265372", "text": "It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.", "title": "" }, { "docid": "d83aa51df8fa3cc03e3ee8d5ed01851e", "text": "Because the World Wide Web consists primarily of text, information extraction is central to any effort that would use the Web as a resource for knowledge discovery. We show how information extraction can be cast as a standard machine learning problem, and argue for the suitability of relational learning in solving it. The implementation of a general-purpose relational learner for information extraction, SRV, is described. In contrast with earlier learning systems for information extraction, SRV makes no assumptions about document structure and the kinds of information available for use in learning extraction patterns. Instead, structural and other information is supplied as input in the form of an extensible token-oriented feature set. We demonstrate the effectiveness of this approach by adapting SRV for use in learning extraction rules for a domain consisting of university course and research project pages sampled from the Web. Making SRV Web-ready only involves adding several simple HTML-specific features to its basic feature set.
The World Wide Web, with its explosive growth and ever-broadening reach, is swiftly becoming the default knowledge resource for many areas of endeavor. Unfortunately, although any one of over 200,000,000 Web pages is readily accessible to an Internet-connected workstation, the information content of these pages is, without human interpretation, largely inaccessible. Systems have been developed which can make sense of highly regular Web pages, such as those generated automatically from internal databases in response to user queries (Doorenbos, Etzioni, & Weld 1997) (Kushmerick 1997). A surprising number of Web sites have pages amenable to the techniques used by these systems. Still, most Web pages do not exhibit the regularity these systems require. There is a larger class of pages, however, which are regular in a more abstract sense. Many Web pages come from collections in which each page describes a single entity or event (e.g., home pages in a CS department; each describes its owner). The purpose of such a page is often to convey essential facts about the entity it describes. It is often reasonable to approach such a page with a set of standard questions, and to expect that the answers to these questions will be available as succinct text fragments in the page. A home page, for example, frequently lists the owner's name, affiliations, email address, etc. The problem of identifying the text fragments that answer standard questions defined for a document collection is called information extraction (IE) (Def 1995). Our interest in IE concerns the development of machine learning methods to solve it. We regard IE as a kind of text classification, which has strong affinities with the well-investigated problem of document classification, but also presents unique challenges. We share this focus with a number of other recent systems (Soderland 1996) (Califf & Mooney 1997), including a system designed to learn how to extract from HTML (Soderland 1997). In this paper we describe SRV, a top-down relational algorithm for information extraction. Central to the design of SRV is its reliance on a set of token-oriented features, which are easy to implement and add to the system. Since domain-specific information is contained within these features, which are separate from the core algorithm, SRV is better poised than similar systems for targeting to new domains. We have used it to perform extraction from electronic seminar announcements, medical abstracts, and newswire articles on corporate acquisitions. The experiments reported here show that targeting the system to HTML involves nothing more than the addition of HTML-specific features to its basic feature set. Learning for Information Extraction Consider a collection of Web pages describing university computer science courses. Given a page, a likely task for an information extraction system is to find the title of the course the page describes. We call the title a field and any literal title taken from an actual page, such as \"Introduction to Artificial Intelligence,\" an instantiation or instance of the title field. Note that the typical information extraction problem involves multiple fields, some of which may have multiple instantiations in a given file. For example, a course page might
", "title": "" }, { "docid": "d07a10da23e0fc18b473f8a30adaebfb", "text": "DATA FLOW IS A POPULAR COMPUTATIONAL MODEL for visual programming languages. Data flow provides a view of computation which shows the data flowing from one filter function to another, being transformed as it goes. In addition, the data flow model easily accommodates the insertion of viewing monitors at various points to show the data to the user. Consequently, many recent visual programming languages are based on the data flow model. This paper describes many of the data flow visual programming languages. The languages are grouped according to their application domain. For each language, pertinent aspects of its appearance, and the particular design alternatives it uses, are discussed. Next, some strengths of data flow visual programming languages are mentioned. Finally, unsolved problems in the design of such languages are discussed.", "title": "" }, { "docid": "1db72cafa214f41b5b6faa3a3c0c8be0", "text": "Multiple-antenna receivers offer numerous advantages over single-antenna receivers, including sensitivity improvement, ability to reject interferers spatially and enhancement of data-rate or link reliability via MIMO. In the recent past, RF/analog phased-array receivers have been investigated [1-4]. On the other hand, digital beamforming offers far greater flexibility, including ability to form multiple simultaneous beams, ease of digital array calibration and support for MIMO. However, ADC dynamic range is challenged due to the absence of spatial interference rejection at RF/analog.", "title": "" }, { "docid": "3ebc26643334c88ccc44fb01f60d600f", "text": "Skin whitening products are commercially available for cosmetic purposes in order to obtain a lighter skin appearance. They are also utilized for clinical treatment of pigmentary disorders such as melasma or postinflammatory hyperpigmentation. Whitening agents act at various levels of melanin production in the skin. Many of them are known as competitive inhibitors of tyrosinase, the key enzyme in melanogenesis. Others inhibit the maturation of this enzyme or the transport of pigment granules (melanosomes) from melanocytes to surrounding keratinocytes. In this review we present an overview of (natural) whitening products that may decrease skin pigmentation by their interference with the pigmentary processes.", "title": "" }, { "docid": "ac94c03a72607f76e53ae0143349fff3", "text": "A formula for the capacity of arbitrary single-user channels without feedback (not necessarily information stable, stationary, etc.) is proved. Capacity is shown to equal the supremum, over all input processes, of the input-output inf-information rate defined as the liminf in probability of the normalized information density. The key to this result is a new converse approach based on a simple new lower bound on the error probability of m-ary hypothesis tests among equiprobable hypotheses. A necessary and sufficient condition for the validity of the strong converse is given, as well as general expressions for ε-capacity.", "title": "" }, { "docid": "666d52dd68c088f7274a3789f8b78b78", "text": "Intermediate and higher vision processes require selection of a subset of the available sensory information before further processing. Usually, this selection is implemented in the form of a spatially circumscribed region of the visual field, the so-called \"focus of attention\" which scans the visual scene dependent on the input and on the attentional state of the subject.
We here present a model for the control of the focus of attention in primates, based on a saliency map. This mechanism is not only expected to model the functionality of biological vision but also to be essential for the understanding of complex scenes in machine vision.", "title": "" }, { "docid": "5944791613da6b94a09560dbf8f54c38", "text": "In this paper we introduce a friction observer for robots with joint torque sensing (in particular for the DLR medical robot) in order to increase the positioning accuracy and the performance of torque control. The observer output corresponds to the low-pass filtered friction torque. It is used for friction compensation in conjunction with a MIMO controller designed for flexible joint arms. A passivity analysis is done for this friction compensation, allowing a Lyapunov based convergence analysis in the context of the nonlinear robot dynamics. For the complete controlled system, global asymptotic stability can be shown. Experimental results validate the practical efficiency of the approach.", "title": "" }, { "docid": "61126d2dc5dd6e8130dd0d6a0dc45774", "text": "Over the last decade or so, it has become increasingly clear to many cognitive scientists that research into human language (and cognition in general, for that matter) has largely neglected how language and thought are embedded in the body and the world. As argued by, for instance, Clark (1997), cognition is fundamentally embodied, that is, it can only be studied in relation to human action, perception, thought, and experience. As Feldman puts it: \" Human language and thought are crucially shaped by the properties of our bodies and the structure of our physical and social environment. Language and thought are not best studied as formal mathematics and logic, but as adaptations that enable creatures like us to thrive in a wide range of situations \" (p. 7). Although it may seem paradoxical to try formalizing this view in a computational theory of language comprehension, this is exactly what From Molecule to Metaphor does. Starting from the assumption that human thought is neural computation, Feldman develops a computational theory that takes the embodied nature of language into account: the neural theory of language. The book comprises 27 short chapters, distributed over nine parts. Part I presents the basic ideas behind embodied language and cognition and explains how the embodiment of language is apparent in the brain: The neural circuits involved in a particular experience or action are, for a large part, the same circuits involved in processing language about this experience or action. Part II discusses neural computation, starting from the molecules that take part in information processing by neurons. This detailed exposition is followed by a description of neuronal networks in the human body, in particular in the brain. The description of the neural theory of language begins in Part III, where it is explained how localist neural networks, often used as psycholinguistic models, can represent the meaning of concepts. This is done by introducing triangle nodes into the network. Each triangle node connects the nodes representing a concept, a role, and a filler—for example, \" pea, \" \" has-color, \" and \" green. \" Such networks are trained by a process called recruitment learning, which is described only very informally. This is certainly an interesting idea for combining propositional and connectionist models, but it does leave the reader with a number of questions. 
For instance, how is the concept distinguished from the filler when they can be interchanged, as …", "title": "" }, { "docid": "2172e78731ee63be5c15549e38c4babb", "text": "The design of a low-cost low-power ring oscillator-based truly random number generator (TRNG) macrocell, which is suitable to be integrated in smart cards, is presented. The oscillator sampling technique is exploited, and a tetrahedral oscillator with large jitter has been employed to realize the TRNG. Techniques to improve the statistical quality of the ring oscillatorbased TRNGs' bit sequences have been presented and verified by simulation and measurement. A postdigital processor is added to further enhance the randomness of the output bits. Fabricated in the HHNEC 0.13-μm standard CMOS process, the proposed TRNG has an area as low as 0.005 mm2. Powered by a single 1.8-V supply voltage, the TRNG has a power consumption of 40 μW. The bit rate of the TRNG after postprocessing is 100 kb/s. The proposed TRNG has been made into an IP and successfully applied in an SD card for encryption application. The proposed TRNG has passed the National Institute of Standards and Technology tests and Diehard tests.", "title": "" }, { "docid": "89a73876c24508d92050f2055292d641", "text": "We study the fundamental problem of computing distances between nodes in large graphs such as the web graph and social networks. Our objective is to be able to answer distance queries between pairs of nodes in real time. Since the standard shortest path algorithms are expensive, our approach moves the time-consuming shortest-path computation offline, and at query time only looks up precomputed values and performs simple and fast computations on these precomputed values. More specifically, during the offline phase we compute and store a small \"sketch\" for each node in the graph, and at query-time we look up the sketches of the source and destination nodes and perform a simple computation using these two sketches to estimate the distance.", "title": "" }, { "docid": "7e7ba0025d19a0eb73c22ceb1eaddcee", "text": "This is a landmark book. For anyone interested in language, in dictionaries and thesauri, or natural language processing, the introduction, Chapters 14, and Chapter 16 are must reading. (Select other chapters according to your special interests; see the chapter-by-chapter review). These chapters provide a thorough introduction to the preeminent electronic lexical database of today in terms of accessibility and usage in a wide range of applications. But what does that have to do with digital libraries? Natural language processing is essential for dealing efficiently with the large quantities of text now available online: fact extraction and summarization, automated indexing and text categorization, and machine translation. Another essential function is helping the user with query formulation through synonym relationships between words and hierarchical and other relationships between concepts. WordNet supports both of these functions and thus deserves careful study by the digital library community.", "title": "" }, { "docid": "d50d3997572847200f12d69f61224760", "text": "The main function of a network layer is to route packets from the source machine to the destination machine. Algorithms that are used for route selection and data structure are the main parts for the network layer. In this paper we examine the network performance when using three routing protocols, RIP, OSPF and EIGRP. 
Video, HTTP and Voice application where configured for network transfer. We also examine the behaviour when using link failure/recovery controller between network nodes. The simulation results are analyzed, with a comparison between these protocols on the effectiveness and performance in network implemented.", "title": "" }, { "docid": "bba4d637cf40e81ea89e61e875d3c425", "text": "Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge for matching UAV and aerial images is in the large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method is comprised of a dense feature detection scheme, a one-to-many matching strategy and a global geometric verification scheme. The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. Those matches can be used to geo-register the whole UAV image block towards the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also comparable global accuracy as the reference images.", "title": "" }, { "docid": "7a1a9ed8e9a6206c3eaf20da0c156c14", "text": "Formal modeling rules can be used to ensure that an enterprise architecture is correct. Despite their apparent utility and despite mature tool support, formal modelling rules are rarely, if ever, used in practice in enterprise architecture in industry. In this paper we propose a rule authoring method that we believe aligns with actual modelling practice, at least as witnessed in enterprise architecture projects at the Swedish Defence Materiel Administration. The proposed method follows the business rules approach: the rules are specified in a (controlled) natural language which makes them accessible to all stakeholders and easy to modify as the meta-model matures and evolves over time. The method was put to test during 2014 in two large scale enterprise architecture projects, and we report on the experiences from that. To the best of our knowledge, this is the first time extensive formal modelling rules for enterprise architecture has been tested in industry and reported in the", "title": "" }, { "docid": "88dd795c6d1fa37c13fbf086c0eb0e37", "text": "We propose a method to obtain a complete and accurate 3D model from multiview images captured under a variety of unknown illuminations. Based on recent results showing that for Lambertian objects, general illumination can be approximated well using low-order spherical harmonics, we develop a robust alternating approach to recover surface normals. 
Surface normals are initialized using a multi-illumination multiview stereo algorithm, then refined using a robust alternating optimization method based on the ℓ1 metric. Erroneous normal estimates are detected using a shape prior. Finally, the computed normals are used to improve the preliminary 3D model. The reconstruction system achieves watertight and robust 3D reconstruction while neither requiring manual interactions nor imposing any constraints on the illumination. Experimental results on both real world and synthetic data show that the technique can acquire accurate 3D models for Lambertian surfaces, and even tolerates small violations of the Lambertian assumption.", "title": "" } ]
scidocsrr
8682983d0f8b0c24bec9756a7d875b17
Relative localization and communication module for small-scale multi-robot systems
[ { "docid": "1e6cec12054c46442819f9595d07ae09", "text": "Most of the research in the field of robotics is focussed on solving the problem of Simultaneous Localization and Mapping(SLAM). In general the problem is solved using a single robot. In the article written by R. Grabowski, C. Paredis and P. Hkosla, called ”Heterogeneous Teams of Modular Robots for Mapping end Exploration” a novel localization method is presented based on multiple robots.[Grabowski, 2000] For this purpose the relative distance between the different robots is calculated. These measurements, together with the positions estimated using dead reckoning, are used to determine the most likely new positions of the agents. Knowing the positions is essential when pursuing accurate (team) mapping capabilities. The proposed method makes it possible for heterogeneous team of modular centimeter-scale robots to collaborate and map unexplored environments.", "title": "" } ]
[ { "docid": "5e3575b45ffaeb2587d7e6531609bd1c", "text": "These last years, several new home automation boxes appeared on the market, the new radio-based protocols facilitating their deployment with respect to previously wired solutions. Coupled with the wider availability of connected objects, these protocols have allowed new users to set up home automation systems by themselves. In this paper, we relate an in situ observational study of these builders in order to understand why and how the smart habitats were developed and used. We led 10 semi-structured interviews in households composed of at least 2 adults and equipped for at least 1 year, and 47 home automation builders answered an online questionnaire at the end of the study. Our study confirms, specifies and exhibits additional insights about usages and means of end-user development in the context of home automation.", "title": "" }, { "docid": "fa05d004df469e8f83fa4fdee9909a6f", "text": "Accurate velocity estimation is an important basis for robot control, but especially challenging for highly elastically driven robots. These robots show large swing or oscillation effects if they are not damped appropriately during the performed motion. In this letter, we consider an ultralightweight tendon-driven series elastic robot arm equipped with low-resolution joint position encoders. We propose an adaptive Kalman filter for velocity estimation that is suitable for these kinds of robots with a large range of possible velocities and oscillation frequencies. Based on an analysis of the parameter characteristics of the measurement noise variance, an update rule based on the filter position error is developed that is easy to adjust for use with different sensors. Evaluation of the filter both in simulation and in robot experiments shows a smooth and accurate performance, well suited for control purposes.", "title": "" }, { "docid": "d52bfde050e6535645c324e7006a50e7", "text": "Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed for improving the efficiency of model selection, however their adaptation to the distributed compute environment is often ad-hoc. We propose Tune, a unified framework for model selection and training that provides a narrow-waist interface between training scripts and search algorithms. We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation. We demonstrate the implementation of several state-of-the-art hyperparameter search algorithms in Tune. Tune is available at http://ray.readthedocs.io/en/latest/tune.html.", "title": "" }, { "docid": "baefc6e7e7968651f3e36acfd62b094d", "text": "The task of paraphrasing is inherently familiar to speakers of all languages. Moreover, the task of automatically generating or extracting semantic equivalences for the various units of language—words, phrases, and sentences—is an important part of natural language processing (NLP) and is being increasingly employed to improve the performance of several NLP applications. 
In this article, we attempt to conduct a comprehensive and application-independent survey of data-driven phrasal and sentential paraphrase generation methods, while also conveying an appreciation for the importance and potential use of paraphrases in the field of NLP research. Recent work done in manual and automatic construction of paraphrase corpora is also examined. We also discuss the strategies used for evaluating paraphrase generation techniques and briefly explore some future trends in paraphrase generation.", "title": "" }, { "docid": "c7c63f08639660f935744309350ab1e0", "text": "A composite of graphene oxide supported by needle-like MnO(2) nanocrystals (GO-MnO(2) nanocomposites) has been fabricated through a simple soft chemical route in a water-isopropyl alcohol system. The formation mechanism of these intriguing nanocomposites investigated by transmission electron microscopy and Raman and ultraviolet-visible absorption spectroscopy is proposed as intercalation and adsorption of manganese ions onto the GO sheets, followed by the nucleation and growth of the crystal species in a double solvent system via dissolution-crystallization and oriented attachment mechanisms, which in turn results in the exfoliation of GO sheets. Interestingly, it was found that the electrochemical performance of as-prepared nanocomposites could be enhanced by the chemical interaction between GO and MnO(2). This method provides a facile and straightforward approach to deposit MnO(2) nanoparticles onto the graphene oxide sheets (single layer of graphite oxide) and may be readily extended to the preparation of other classes of hybrids based on GO sheets for technological applications.", "title": "" }, { "docid": "b5bb280c7ce802143a86b9261767d9a6", "text": "Existing person re-identification (re-id) benchmarks and algorithms mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be found from whole images. To close the gap, we investigate how to localize and match query persons from the scene images without relying on the annotations of candidate boxes. Instead of breaking it down into two separate tasks—pedestrian detection and person re-id, we propose an end-to-end deep learning framework to jointly handle both tasks. A random sampling softmax loss is proposed to effectively train the model under the supervision of sparse and unbalanced labels. On the other hand, existing benchmarks are small in scale and the samples are collected from a few fixed camera views with low scene diversities. To address this issue, we collect a largescale and scene-diversified person search dataset, which contains 18,184 images, 8,432 persons, and 99,809 annotated bounding boxes1. We evaluate our approach and other baselines on the proposed dataset, and study the influence of various factors. Experiments show that our method achieves the best result.", "title": "" }, { "docid": "0195e112c19f512b7de6a7f00e9f1099", "text": "Medication-related osteonecrosis of the jaw (MRONJ) is a severe adverse drug reaction, consisting of progressive bone destruction in the maxillofacial region of patients. ONJ can be caused by two pharmacological agents: Antiresorptive (including bisphosphonates (BPs) and receptor activator of nuclear factor kappa-B ligand inhibitors) and antiangiogenic. MRONJ pathophysiology is not completely elucidated. 
There are several suggested hypothesis that could explain its unique localization to the jaws: Inflammation or infection, microtrauma, altered bone remodeling or over suppression of bone resorption, angiogenesis inhibition, soft tissue BPs toxicity, peculiar biofilm of the oral cavity, terminal vascularization of the mandible, suppression of immunity, or Vitamin D deficiency. Dental screening and adequate treatment are fundamental to reduce the risk of osteonecrosis in patients under antiresorptive or antiangiogenic therapy, or before initiating the administration. The treatment of MRONJ is generally difficult and the optimal therapy strategy is still to be established. For this reason, prevention is even more important. It is suggested that a multidisciplinary team approach including a dentist, an oncologist, and a maxillofacial surgeon to evaluate and decide the best therapy for the patient. The choice between a conservative treatment and surgery is not easy, and it should be made on a case by case basis. However, the initial approach should be as conservative as possible. The most important goals of treatment for patients with established MRONJ are primarily the control of infection, bone necrosis progression, and pain. The aim of this paper is to represent the current knowledge about MRONJ, its preventive measures and management strategies.", "title": "" }, { "docid": "799bc245ecfabf59416432ab62fe9320", "text": "This study examines resolution skills in phishing email detection, defined as the abilities of individuals to discern correct judgments from incorrect judgments in probabilistic decisionmaking. An illustration of the resolution skills is provided. A number of antecedents to resolution skills in phishing email detection, including familiarity with the sender, familiarity with the email, online transaction experience, prior victimization of phishing attack, perceived selfefficacy, time to judgment, and variability of time in judgments, are examined. Implications of the study are further discussed.", "title": "" }, { "docid": "3e142a338a98e3a3c9a65fea07473cf8", "text": "In this paper we develop a new family of Ordered Weighted Averaging (OWA) operators. Weight vector is obtained from a desired orness of the operator. Using Faulhaber’s formulas we obtain direct and simple expressions for the weight vector without any iteration loop. With the exception of one weight, the remaining follow a straight line relation. As a result, a fast and robust algorithm is developed. The resulting weight vector is suboptimal according with the Maximum Entropy criterion, but it is very close to the optimal. Comparisons are done with other procedures.", "title": "" }, { "docid": "122ed18a623510052664996c7ef4b4bb", "text": "A number of sensor applications in recent years collect data which can be directly associated with human interactions. Some examples of such applications include GPS applications on mobile devices, accelerometers, or location sensors designed to track human and vehicular traffic. Such data lends itself to a variety of rich applications in which one can use the sensor data in order to model the underlying relationships and interactions. This requires the development of trajectory mining techniques, which can mine the GPS data for interesting social patterns. It also leads to a number of challenges, since such data may often be private, and it is important to be able to perform the mining process without violating the privacy of the users. 
Given the open nature of the information contributed by users in social sensing applications, this also leads to issues of trust in making inferences from the underlying data. In this chapter, we provide a broad survey of the work in this important and rapidly emerging field. We also discuss the key problems which arise in the context of this important field and the corresponding", "title": "" }, { "docid": "914f41b9f3c0d74f888c7dd83e226468", "text": "We present a new algorithm for inferring the home location of Twitter users at different granularities, including city, state, time zone, or geographic region, using the content of users’ tweets and their tweeting behavior. Unlike existing approaches, our algorithm uses an ensemble of statistical and heuristic classifiers to predict locations and makes use of a geographic gazetteer dictionary to identify place-name entities. We find that a hierarchical classification approach, where time zone, state, or geographic region is predicted first and city is predicted next, can improve prediction accuracy. We have also analyzed movement variations of Twitter users, built a classifier to predict whether a user was travelling in a certain period of time, and use that to further improve the location detection accuracy. Experimental evidence suggests that our algorithm works well in practice and outperforms the best existing algorithms for predicting the home location of Twitter users.", "title": "" }, { "docid": "6db790d4d765b682fab6270c5930bead", "text": "Geophysical applications of radar interferometry to measure changes in the Earth's surface have exploded in the early 1990s. This new geodetic technique calculates the interference pattern caused by the difference in phase between two images acquired by a spaceborne synthetic aperture radar at two distinct times. The resulting interferogram is a contour map of the change in distance between the ground and the radar instrument. These maps provide an unsurpassed spatial sampling density (---100 pixels km-2), a competitive precision (---1 cm), and a useful observation cadence (1 pass month-•). They record movements in the crust, perturbations in the atmosphere, dielectric modifications in the soil, and relief in the topography. They are also sensitive to technical effects, such as relative variations in the radar's trajectory or variations in its frequency standard. We describe how all these phenomena contribute to an interferogram. Then a practical summary explains the techniques for calculating and manipulating interferograms from various radar instruments, including the four satellites currently in orbit: ERS-1, ERS-2, JERS-1, and RADARSAT. The next chapter suggests some guidelines for interpreting an interferogram as a geophysical measurement: respecting the limits of the technique, assessing its uncertainty, recognizing artifacts, and discriminating different types of signal. We then review the geophysical applications published to date, most of which study deformation related to earthquakes, volcanoes, and glaciers using ERS-1 data. We also show examples of monitoring natural hazards and environmental alterations related to landslides, subsidence, and agriculture. In addition, we consider subtler geophysical signals such as postseismic relaxation, tidal loading of coastal areas, and interseismic strain accumulation. We conclude with our perspectives on the future of radar interferometry. 
The objective of the review is for the reader to develop the physical understanding necessary to calculate an interferogram and the geophysical intuition necessary to interpret it.", "title": "" }, { "docid": "03dcb05a6aa763b6b0a5cdc58ddb81d8", "text": "In this paper, a phase-shifted dual H-bridge converter, which can solve the drawbacks of existing phase-shifted full-bridge converters such as narrow zero-voltage-switching (ZVS) range, large circulating current, large duty-cycle loss, and serious secondary-voltage overshoot and oscillation, is analyzed and evaluated. The proposed topology is composed of two symmetric half-bridge inverters that are placed in parallel on the primary side and are driven in a phase-shifting manner to regulate the output voltage. At the rectifier stage, a center-tap-type rectifier with two additional low-current-rated diodes is employed. This structure allows the proposed converter to have the advantages of a wide ZVS range, no problems related to duty-cycle loss, no circulating current, and the reduction of secondary-voltage oscillation and overshoot. Moreover, the output filter's size becomes smaller compared to the conventional phase-shift full-bridge converters. This paper describes the operation principle of the proposed converter and the analysis and design consideration in depth. A 1-kW 320-385-V input 50-V output laboratory prototype operating at a 100-kHz switching frequency is designed, built, and tested to verify the effectiveness of the presented converter.", "title": "" }, { "docid": "39fc05dfc0faeb47728b31b6053c040a", "text": "Attempted and completed self-enucleation, or removal of one's own eyes, is a rare but devastating form of self-mutilation behavior. It is often associated with psychiatric disorders, particularly schizophrenia, substance induced psychosis, and bipolar disorder. We report a case of a patient with a history of bipolar disorder who gouged his eyes bilaterally as an attempt to self-enucleate himself. On presentation, the patient was manic with both psychotic features of hyperreligous delusions and command auditory hallucinations of God telling him to take his eyes out. On presentation, the patient had no light perception vision in both eyes and his exam displayed severe proptosis, extensive conjunctival lacerations, and visibly avulsed extraocular muscles on the right side. An emergency computed tomography scan of the orbits revealed small and irregular globes, air within the orbits, and intraocular hemorrhage. He was taken to the operating room for surgical repair of his injuries. Attempted and completed self-enucleation is most commonly associated with schizophrenia and substance induced psychosis, but can also present in patients with bipolar disorder. Other less commonly associated disorders include obsessive-compulsive disorder, depression, mental retardation, neurosyphilis, Lesch-Nyhan syndrome, and structural brain lesions.", "title": "" }, { "docid": "b17f5cfea81608e5034121113dbc8de4", "text": "Every question asked by a therapist may be seen to embody some intent and to arise from certain assumptions. Many questions are intended to orient the therapist to the client's situation and experiences; others are asked primarily to provoke therapeutic change. Some questions are based on lineal assumptions about the phenomena being addressed; others are based on circular assumptions. The differences among these questions are not trivial. They tend to have dissimilar effects. 
This article explores these issues and offers a framework for distinguishing four major groups of questions. The framework may be used by therapists to guide their decision making about what kinds of questions to ask, and by researchers to study different interviewing styles.", "title": "" }, { "docid": "a520bf66f1b54a7444f2cbe3f2da8000", "text": "In this work we study the problem of Intrusion Detection is sensor networks and we propose a lightweight scheme that can be applied to such networks. Its basic characteristic is that nodes monitor their neighborhood and collaborate with their nearest neighbors to bring the network back to its normal operational condition. We emphasize in a distributed approach in which, even though nodes don’t have a global view, they can still detect an intrusion and produce an alert. We apply our design principles for the blackhole and selective forwarding attacks by defining appropriate rules that characterize malicious behavior. We also experimentally evaluate our scheme to demonstrate its effectiveness in detecting the afore-mentioned attacks.", "title": "" }, { "docid": "b206a5f5459924381ef6c46f692c7052", "text": "The Konstanz Information Miner is a modular environment, which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables simple integration of new algorithms and tools as well as data manipulation or visualization methods in the form of new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture, briey sketch how new nodes can be incorporated, and highlight some of the new features of version 2.0.", "title": "" }, { "docid": "79b73417f1f09e6487ea0c9ead28098b", "text": "The internet connectivity of client software (e.g., apps running on phones and PCs), web sites, and online services provide an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called A/B tests, split tests, randomized experiments, control/treatment tests, and online field experiments. Unlike most data mining techniques for finding correlational patterns, controlled experiments allow establishing a causal relationship with high probability. Experimenters can utilize the Scientific Method to form a hypothesis of the form “If a specific change is introduced, will it improve key metrics?” and evaluate it with real users. The theory of a controlled experiment dates back to Sir Ronald A. Fisher’s experiments at the Rothamsted Agricultural Experimental Station in England in the 1920s, and the topic of offline experiments is well developed in Statistics (Box 2005). Online Controlled Experiments started to be used in the late 1990s with the growth of the Internet. Today, many large sites, including Amazon, Bing, Facebook, Google, LinkedIn, and Yahoo! run thousands to tens of thousands of experiments each year testing user interface (UI) changes, enhancements to algorithms (search, ads, personalization, recommendation, etc.), changes to apps, content management system, etc. Online controlled experiments are now considered an indispensable tool, and their use is growing for startups and smaller websites. Controlled experiments are especially useful in combination with Agile software development (Martin 2008, Rubin 2012), Steve Blank’s Customer Development process (Blank 2005), and MVPs (Minimum Viable Products) popularized by Eric Ries’s Lean Startup (Ries 2011). 
Motivation and Background Many good resources are available with motivation and explanations about online controlled experiments (Siroker and Koomen 2013, Goward 2012, McFarland 2012, Schrage 2014, Kohavi, Longbotham and Sommerfield, et al. 2009, Kohavi, Deng and Longbotham, et al. 2014, Kohavi, Deng and Frasca, et al. 2013).", "title": "" }, { "docid": "c27e6b7be1a5d00632bbbea64b2516ad", "text": "Block diagonalization (BD) is a well-known precoding method in multiuser multi-input multi-output (MIMO) broadcast channels. This scheme can be considered as a extension of the zero-forcing (ZF) channel inversion to the case where each receiver is equipped with multiple antennas. One of the limitation of the BD is that the sum rate does not grow linearly with the number of users and transmit antennas at low and medium signal-to-noise ratio regime, since the complete suppression of multi-user interference is achieved at the expense of noise enhancement. Also it performs poorly under imperfect channel state information. In this paper, we propose a generalized minimum mean-squared error (MMSE) channel inversion algorithm for users with multiple antennas to overcome the drawbacks of the BD for multiuser MIMO systems. We first introduce a generalized ZF channel inversion algorithm as a new approach of the conventional BD. Applying this idea to the MMSE channel inversion for identifying orthonormal basis vectors of the precoder, and employing the MMSE criterion for finding its combining matrix, the proposed scheme increases the signal-to-interference-plus-noise ratio at each user's receiver. Simulation results confirm that the proposed scheme exhibits a linear growth of the sum rate, as opposed to the BD scheme. For block fading channels with four transmit antennas, the proposed scheme provides a 3 dB gain over the conventional BD scheme at 1% frame error rate. Also, we present a modified precoding method for systems with channel estimation errors and show that the proposed algorithm is robust to channel estimation errors.", "title": "" }, { "docid": "9200498e7ef691b83bf804d4c5581ba2", "text": "Mobile computer-vision technology will soon become as ubiquitous as touch interfaces.", "title": "" } ]
scidocsrr
ad11557e120de6ea0d14b61f7169719b
Learning-by-Synthesis for Appearance-Based 3D Gaze Estimation
[ { "docid": "6298ab25b566616b0f3c1f6ee8889d19", "text": "This paper addresses the problem of free gaze estimation under unrestricted head motion. More precisely, unlike previous approaches that mainly focus on estimating gaze towards a small planar screen, we propose a method to estimate the gaze direction in the 3D space. In this context the paper makes the following contributions: (i) leveraging on Kinect device, we propose a multimodal method that relies on depth sensing to obtain robust and accurate head pose tracking even under large head pose, and on the visual data to obtain the remaining eye-in-head gaze directional information from the eye image; (ii) a rectification scheme of the image that exploits the 3D mesh tracking, allowing to conduct a head pose free eye-in-head gaze directional estimation; (iii) a simple way of collecting ground truth data thanks to the Kinect device. Results on three users demonstrate the great potential of our approach.", "title": "" } ]
[ { "docid": "1f355bd6b46e16c025ba72aa9250c61d", "text": "Whole-cell biosensors have several advantages for the detection of biological substances and have proven to be useful analytical tools. However, several hurdles have limited whole-cell biosensor application in the clinic, primarily their unreliable operation in complex media and low signal-to-noise ratio. We report that bacterial biosensors with genetically encoded digital amplifying genetic switches can detect clinically relevant biomarkers in human urine and serum. These bactosensors perform signal digitization and amplification, multiplexed signal processing with the use of Boolean logic gates, and data storage. In addition, we provide a framework with which to quantify whole-cell biosensor robustness in clinical samples together with a method for easily reprogramming the sensor module for distinct medical detection agendas. Last, we demonstrate that bactosensors can be used to detect pathological glycosuria in urine from diabetic patients. These next-generation whole-cell biosensors with improved computing and amplification capacity could meet clinical requirements and should enable new approaches for medical diagnosis.", "title": "" }, { "docid": "36da2b6102762c80b3ae8068d764e220", "text": "Video games have become an essential part of the way people play and learn. While an increasing number of people are using games to learn in informal environments, their acceptance in the classroom as an instructional activity has been mixed. Successes in informal learning have caused supporters to falsely believe that implementing them into the classroom would be a relatively easy transition and have the potential to revolutionise the entire educational system. In spite of all the hype, many are puzzled as to why more teachers have not yet incorporated them into their teaching. The literature is littered with reports that point to a variety of reasons. One of the reasons, we believe, is that very little has been done to convince teachers that the effort to change their curriculum to integrate video games and other forms of technology is worthy of the effort. Not until policy makers realise the importance of professional British Journal of Educational Technology (2009) doi:10.1111/j.1467-8535.2009.01007.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. development and training as an important use of funds will positive changes in thinking and perceptions come about, which will allow these various forms of technology to reach their potential. The authors have hypothesised that the major impediments to useful technology integration include the general lack of institutional infrastructure, poor teacher training, and overly-complicated technologies. Overcoming these obstacles requires both a top-down and a bottom-up approach. This paper presents the results of a pilot study with a group of preservice teachers to determine whether our hypotheses regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. The results of this study are discussed along with suggestions for further research and potential changes in teacher training programmes. Introduction Over the past 40 years, video games have become an increasingly popular way to play and learn. 
Those who play regularly often note that the major attraction is their ability to become quickly engaged and immersed in gameplay (Lenhart & Kayne, 2008). Many have taken notice of video games’ apparent effectiveness in teaching social interaction and critical thinking in informal learning environments. Beliefs about the effectiveness of video games in informal learning situations have been hyped to the extent that they are often described as the ‘holy grail’ that will revolutionise our entire educational system (Gee, 2003; Kirkley & Kirkley, 2004; Prensky, 2001; Sawyer, 2002). In spite of all the hype and promotion, many educators express puzzlement and disappointment that only a modest number of teachers have incorporated video games into their teaching (Egenfeldt-Nielsen, 2004; Pivec & Pivec, 2008). These results seem to mirror those reported on a general lack of successful integration on the part of teachers and educators of new technologies and media in general. The reasons reported in that research point to a varied and complex issue that involves dispelling preconceived notions, prejudices, and concerns (Kati, 2008; Kim & Baylor, 2008). It is our position that very little has been done to date to overcome these objections. We agree with Magliaro and Ezeife (2007) who posited that teachers can and do greatly influence the successes or failures of classroom interventions. Expenditures on media and technology alone do not guarantee their successful or productive use in the classroom. Policy makers need to realise that professional development and training is the most significant use of funds that will positively affect teaching styles and that will allow technology to reach its potential to change education. But as Cuban, Kirkpatrick and Peck (2001) noted, the practices of policy makers and administrators to increase the effective use of technologies in the classroom more often than not conflict with implementation. In their qualitative study of two Silicon Valley high schools, the authors found that despite ready access to computer technologies, 2 British Journal of Educational Technology © 2009 The Authors. Journal compilation © 2009 Becta. only a handful of teachers actually changed their teaching practices (ie, moved from teacher-centered to student-centered pedagogies). Furthermore, the authors identified several barriers to technological innovation in the classroom, including most notably: a lack of preparation time, poor technical support, outdated technologies, and the inability to sustain interest in the particular lessons and a lack of opportunities for collaboration due to the rigid structure and short time periods allocated to instruction. The authors concluded by suggesting that the path for integrating technology would eventually flourish, but that it initially would be riddled with problems caused by impediments placed upon its success by a lack of institutional infrastructure, poor training, and overly-complicated technologies. We agree with those who suggest that any proposed classroom intervention correlates directly to the expectations and perceived value/benefit on the part of the integrating teachers, who largely control what and how their students learn (Hanusheck, Kain & Rivkin, 1998). Faced with these significant obstacles, it should not be surprising that video games, like other technologies, have been less than successful in transforming the classroom. We further suggest that overcoming these obstacles requires both a top-down and a bottom-up approach. 
Policy makers carry the burden of correcting the infrastructural issues both for practical reasons as well as for creating optimism on the part of teachers to believe that their administrators actually support their decisions. On the other hand, anyone associated with educational systems for any length of time will agree that a top-down only approach is destined for failure. The successful adoption of any new classroom intervention is based, in larger part, on teachers’ investing in the belief that the experience is worth the effort. If a teacher sees little or no value in an intervention, or is unfamiliar with its use, then the chances that it will be properly implemented are minimised. In other words, a teacher’s adoption of any instructional strategy is directly correlated with his or her views, ideas, and expectations about what is possible, feasible, and useful. In their studies into the game playing habits of various college students, Shaffer, Squire and Gee (2005) alluded to the fact that of those that they interviewed, future teachers indicated that they did not play video games as often as those enrolled in other majors. Our review of these comments generated several additional research questions that we believe deserve further investigation. We began to hypothesise that if it were true that teachers, as a group, do not in fact play video games on a regular basis, it should not be surprising that they would have difficulty integrating games into their curriculum. They would not have sufficient basis to integrate the rules of gameplay with their instructional strategies, nor would they be able to make proper assessments as to which games might be the most effective. We understand that one does not have to actually like something or be good at something to appreciate its value. For example, one does not necessarily have to be a fan of rap music or have a knack for performing it to understand that it could be a useful teaching tool. But, on the other hand, we wondered whether the attitudes towards video games on the part of teachers were not merely neutral, but in fact actually negative, which would further undermine any attempts at successfully introducing games into their classrooms. Expectancy-value 3 © 2009 The Authors. Journal compilation © 2009 Becta. This paper presents the results of a pilot study we conducted that utilised a group of preservice teachers to determine whether our hypothesis regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. In this examination, we utilised a preference survey to ask participants to reveal their impressions and expectancies about video games in general, their playing habits, and their personal assessments as to the potential role games might play in their future teaching strategies. We believe that the results we found are useful in determining ramifications for some potential changes in teacher preparation and professional development programmes. They provide more background on the kinds of learning that can take place, as described by Prensky (2001), Gee (2003) and others, they consider how to evaluate supposed educational games that exist in the market, and they suggest successful integration strategies. 
Just as no one can assume that digital kids already have expertise in participatory learning simply because they are exposed to these experiences in their informal, outside of school activities, those responsible for teacher training cannot assume that just because up-and-coming teachers have been brought up in the digital age, they are automatically familiar with, disposed to using, and have positive ideas about how games can be integrated into their curriculum. As a case in point, we found that there exists a significant disconnect between teachers and their students regarding the value of gameplay, and whether one can efficiently and effectively learn from games. In this study, we also attempted to determine if there might be an interaction effect based on the type of console being used. We wanted to confirm Pearson and Bailey’s (2008) assertions that the Nintendo Wii (Nintendo Company, Ltd. 11-1 KamitobaHokodate-cho, Minami-ku, Kyoto 601-8501, Japan) consoles would not only promote improvements in physical move", "title": "" }, { "docid": "8e65001ed1e4a3994a95df2626ff4d89", "text": "The most popular metric distance used in iris code matching is Hamming distance. In this paper, we improve the performance of iris code matching stage by applying adaptive Hamming distance. Proposed method works with Hamming subsets with adaptive length. Based on density of masked bits in the Hamming subset, each subset is able to expand and adjoin to the right or left neighbouring bits. The adaptive behaviour of Hamming subsets increases the accuracy of Hamming distance computation and improves the performance of iris code matching. Results of applying proposed method on Chinese Academy of Science Institute of Automation, CASIA V3.3 shows performance of 99.96% and false rejection rate 0.06.", "title": "" }, { "docid": "868fe4091a136f16f6844e8739b65902", "text": "This paper uses an ant colony meta-heuristic optimization method to solve the redundancy allocation problem (RAP). The RAP is a well known NP-hard problem which has been the subject of much prior work, generally in a restricted form where each subsystem must consist of identical components in parallel to make computations tractable. Meta-heuristic methods overcome this limitation, and offer a practical way to solve large instances of the relaxed RAP where different components can be placed in parallel. The ant colony method has not yet been used in reliability design, yet it is a method that is expressly designed for combinatorial problems with a neighborhood structure, as in the case of the RAP. An ant colony optimization algorithm for the RAP is devised & tested on a well-known suite of problems from the literature. It is shown that the ant colony method performs with little variability over problem instance or random number seed. It is competitive with the best-known heuristics for redundancy allocation.", "title": "" }, { "docid": "ef3ac22e7d791113d08fd778a79008c3", "text": "Great efforts have been dedicated to harvesting knowledge bases from online encyclopedias. These knowledge bases play important roles in enabling machines to understand texts. However, most current knowledge bases are in English and non-English knowledge bases, especially Chinese ones, are still very rare. Many previous systems that extract knowledge from online encyclopedias, although are applicable for building a Chinese knowledge base, still suffer from two challenges. 
The first is that it requires great human efforts to construct an ontology and build a supervised knowledge extraction model. The second is that the update frequency of knowledge bases is very slow. To solve these challenges, we propose a never-ending Chinese Knowledge extraction system, CN-DBpedia, which can automatically generate a knowledge base that is of ever-increasing in size and constantly updated. Specially, we reduce the human costs by reusing the ontology of existing knowledge bases and building an end-to-end facts extraction model. We further propose a smart active update strategy to keep the freshness of our knowledge base with little human costs. The 164 million API calls of the published services justify the success of our system.", "title": "" }, { "docid": "bc4a72d96daf03f861b187fa73f57ff6", "text": "BACKGROUND\nShort-term preoperative radiotherapy and total mesorectal excision have each been shown to improve local control of disease in patients with resectable rectal cancer. We conducted a multicenter, randomized trial to determine whether the addition of preoperative radiotherapy increases the benefit of total mesorectal excision.\n\n\nMETHODS\nWe randomly assigned 1861 patients with resectable rectal cancer either to preoperative radiotherapy (5 Gy on each of five days) followed by total mesorectal excision (924 patients) or to total mesorectal excision alone (937 patients). The trial was conducted with the use of standardization and quality-control measures to ensure the consistency of the radiotherapy, surgery, and pathological techniques.\n\n\nRESULTS\nOf the 1861 patients randomly assigned to one of the two treatment groups, 1805 were eligible to participate. The overall rate of survival at two years among the eligible patients was 82.0 percent in the group assigned to both radiotherapy and surgery and 81.8 percent in the group assigned to surgery alone (P=0.84). Among the 1748 patients who underwent a macroscopically complete local resection, the rate of local recurrence at two years was 5.3 percent. The rate of local recurrence at two years was 2.4 percent in the radiotherapy-plus-surgery group and 8.2 percent in the surgery-only group (P<0.001).\n\n\nCONCLUSIONS\nShort-term preoperative radiotherapy reduces the risk of local recurrence in patients with rectal cancer who undergo a standardized total mesorectal excision.", "title": "" }, { "docid": "ad80f2e78e80397bd26dac5c0500266c", "text": "The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the eq norm. 
We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.", "title": "" }, { "docid": "65a4197d7f12c320a34fdd7fcac556af", "text": "The article presents an overview of current specialized ontology engineering tools, as well as texts’ annotation tools based on ontologies. The main functions and features of these tools, their advantages and disadvantages are discussed. A systematic comparative analysis of means for engineering ontologies is presented. ACM Classification", "title": "" }, { "docid": "43a7e786704b5347f3b67c08ac9c4f70", "text": "Before beginning any robot task, users must position the robot's base, a task that now depends entirely on user intuition. While slight perturbation is tolerable for robots with moveable bases, correcting the problem is imperative for fixed- base robots if some essential task sections are out of reach. For mobile manipulation robots, it is necessary to decide on a specific base position before beginning manipulation tasks. This paper presents Reuleaux, an open source library for robot reachability analyses and base placement. It reduces the amount of extra repositioning and removes the manual work of identifying potential base locations. Based on the reachability map, base placement locations of a whole robot or only the arm can be efficiently determined. This can be applied to both statically mounted robots, where the position of the robot and workpiece ensure the maximum amount of work performed, and to mobile robots, where the maximum amount of workable area can be reached. The methods were tested on different robots of different specifications and evaluated for tasks in simulation and real world environment. Evaluation results indicate that Reuleaux had significantly improved performance than prior existing methods in terms of time-efficiency and range of applicability.", "title": "" }, { "docid": "0d25072b941ee3e8690d9bd274623055", "text": "The task of tracking multiple targets is often addressed with the so-called tracking-by-detection paradigm, where the first step is to obtain a set of target hypotheses for each frame independently. Tracking can then be regarded as solving two separate, but tightly coupled problems. The first is to carry out data association, i.e., to determine the origin of each of the available observations. The second problem is to reconstruct the actual trajectories that describe the spatio-temporal motion pattern of each individual target. The former is inherently a discrete problem, while the latter should intuitively be modeled in continuous space. Having to deal with an unknown number of targets, complex dependencies, and physical constraints, both are challenging tasks on their own and thus most previous work focuses on one of these subproblems. Here, we present a multi-target tracking approach that explicitly models both tasks as minimization of a unified discrete-continuous energy function. 
Trajectory properties are captured through global label costs, a recent concept from multi-model fitting, which we introduce to tracking. Specifically, label costs describe physical properties of individual tracks, e.g., linear and angular dynamics, or entry and exit points. We further introduce pairwise label costs to describe mutual interactions between targets in order to avoid collisions. By choosing appropriate forms for the individual energy components, powerful discrete optimization techniques can be leveraged to address data association, while the shapes of individual trajectories are updated by gradient-based continuous energy minimization. The proposed method achieves state-of-the-art results on diverse benchmark sequences.", "title": "" }, { "docid": "bdd1c64962bfb921762259cca4a23aff", "text": "Ever since the emergence of social networking sites (SNSs), it has remained a question without a conclusive answer whether SNSs make people more or less lonely. To achieve a better understanding, researchers need to move beyond studying overall SNS usage. In addition, it is necessary to attend to personal attributes as potential moderators. Given that SNSs provide rich opportunities for social comparison, one highly relevant personality trait would be social comparison orientation (SCO), and yet this personal attribute has been understudied in social media research. Drawing on literature of psychosocial implications of social media use and SCO, this study explored associations between loneliness and various Instagram activities and the role of SCO in this context. A total of 208 undergraduate students attending a U.S. mid-southern university completed a self-report survey (Mage = 19.43, SD = 1.35; 78 percent female; 57 percent White). Findings showed that Instagram interaction and Instagram browsing were both related to lower loneliness, whereas Instagram broadcasting was associated with higher loneliness. SCO moderated the relationship between Instagram use and loneliness such that Instagram interaction was related to lower loneliness only for low SCO users. The results revealed implications for healthy SNS use and the importance of including personality traits and specific SNS use patterns to disentangle the role of SNS use in psychological well-being.", "title": "" }, { "docid": "3072b7d80b0e9afffe6489996eca19aa", "text": "Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human interventions remains a challenging task. In this paper, we present a novel fully automatic segmentation method from MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) in conjunction with the transfer learning technology, was used to first process MRI data. The goal of the first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. 
Particularly, ITCN exploited a convolutional neural network (CNN) with deeper architecture and smaller kernel. The proposed approach was validated on multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method could obtain the promising segmentation results and had a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.", "title": "" }, { "docid": "8f1a5420deb75a2b664ceeaae8fc03f9", "text": "A stretchable and multiple-force-sensitive electronic fabric based on stretchable coaxial sensor electrodes is fabricated for artificial-skin application. This electronic fabric, with only one kind of sensor unit, can simultaneously map and quantify the mechanical stresses induced by normal pressure, lateral strain, and flexion.", "title": "" }, { "docid": "c2fc709aeb4c48a3bd2071b4693d4296", "text": "Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.", "title": "" }, { "docid": "a17818c54117d502c696abb823ba5a6b", "text": "The next generation of multimedia services have to be optimized in a personalized way, taking user factors into account for the evaluation of individual experience. Previous works have investigated the influence of user factors mostly in a controlled laboratory environment which often includes a limited number of users and fails to reflect real-life environment. Social media, especially Facebook, provide an interesting alternative for Internet-based subjective evaluation. In this article, we develop (and open-source) a Facebook application, named YouQ1, as an experimental platform for studying individual experience for videos. 
Our results show that subjective experiments based on YouQ can produce reliable results as compared to a controlled laboratory experiment. Additionally, YouQ has the ability to collect user information automatically from Facebook, which can be used for modeling individual experience.", "title": "" }, { "docid": "5d80fa7763fd815e4e9530bc1a99b5d0", "text": "This paper introduces a new email dataset, consisting of both single and thread emails, manually annotated with summaries and keywords. A total of 349 emails and threads have been annotated. The dataset is our first step toward developing automatic methods for summarization and keyword extraction from emails. We describe the email corpus, along with the annotation interface, annotator guidelines, and agreement studies.", "title": "" }, { "docid": "9a4dab93461185ea98ccea7733081f73", "text": "This article discusses two standards operating on principles of cognitive radio in television white space (TV WS) frequencies 802.22and 802.11af. The comparative analysis of these systems will be presented and the similarities as well as the differences among these two perspective standards will be discussed from the point of view of physical (PHY), medium access control (MAC) and cognitive layers.", "title": "" }, { "docid": "569fed958b7a471e06ce718102687a1e", "text": "The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain. In comparison, there are a number of problems in which there are a number of different inputs which are all ’of the same type’ — multiple particles, multiple agents, multiple stock prices, etc. The corresponding symmetry to this is permutation symmetry, in that the algorithm should not depend on the specific ordering of the input data. We discuss a permutation-invariant neural network layer in analogy to convolutional layers, and show the ability of this architecture to learn to predict the motion of a variable number of interacting hard discs in 2D. In the same way that convolutional layers can generalize to different image sizes, the permutation layer we describe generalizes to different numbers of objects.", "title": "" }, { "docid": "48a0e75b97fdaa734f033c6b7791e81f", "text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). 
All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.", "title": "" }, { "docid": "cf95d41dc5a2bcc31b691c04e3fb8b96", "text": "Resection of pancreas, in particular pancreaticoduodenectomy, is a complex procedure, commonly performed in appropriately selected patients with benign and malignant disease of the pancreas and periampullary region. Despite significant improvements in the safety and efficacy of pancreatic surgery, pancreaticoenteric anastomosis continues to be the \"Achilles heel\" of pancreaticoduodenectomy, due to its association with a measurable risk of leakage or failure of healing, leading to pancreatic fistula. The morbidity rate after pancreaticoduodenectomy remains high in the range of 30% to 65%, although the mortality has significantly dropped to below 5%. Most of these complications are related to pancreatic fistula, with serious complications of intra-abdominal abscess, postoperative bleeding, and multiorgan failure. Several pharmacological and technical interventions have been suggested to decrease the pancreatic fistula rate, but the results have been controversial. This paper considers definition and classification of pancreatic fistula, risk factors, and preventive approach and offers management strategy when they do occur.", "title": "" } ]
scidocsrr
a54d1e9f745295cc76b789e03f97e8b6
The Demographics of Mail Search and their Application to Query Suggestion
[ { "docid": "99f93328d19ac240378c5cfe08cf9f9e", "text": "Email classification is still a mostly manual task. Consequently, most Web mail users never define a single folder. Recently however, automatic classification offering the same categories to all users has started to appear in some Web mail clients, such as AOL or Gmail. We adopt this approach, rather than previous (unsuccessful) personalized approaches because of the change in the nature of consumer email traffic, which is now dominated by (non-spam) machine-generated email. We propose here a novel approach for (1) automatically distinguishing between personal and machine-generated email and (2) classifying messages into latent categories, without requiring users to have defined any folder. We report how we have discovered that a set of 6 \"latent\" categories (one for human- and the others for machine-generated messages) can explain a significant portion of email traffic. We describe in details the steps involved in building a Web-scale email categorization system, from the collection of ground-truth labels, the selection of features to the training of models. Experimental evaluation was performed on more than 500 billion messages received during a period of six months by users of Yahoo mail service, who elected to be part of such research studies. Our system achieved precision and recall rates close to 90% and the latent categories we discovered were shown to cover 70% of both email traffic and email search queries. We believe that these results pave the way for a change of approach in the Web mail industry, and could support the invention of new large-scale email discovery paradigms that had not been possible before.", "title": "" }, { "docid": "57ba9e280303078261d4384dd9407f92", "text": "People often repeat Web searches, both to find new information on topics they have previously explored and to re-find information they have seen in the past. The query associated with a repeat search may differ from the initial query but can nonetheless lead to clicks on the same results. This paper explores repeat search behavior through the analysis of a one-year Web query log of 114 anonymous users and a separate controlled survey of an additional 119 volunteers. Our study demonstrates that as many as 40% of all queries are re-finding queries. Re-finding appears to be an important behavior for search engines to explicitly support, and we explore how this can be done. We demonstrate that changes to search engine results can hinder re-finding, and provide a way to automatically detect repeat searches and predict repeat clicks.", "title": "" } ]
[ { "docid": "cf8915016c6a6d6537fbd368238c81f3", "text": "A 5-year-old boy was followed up with migratory spermatic cord and a perineal tumour at the paediatric department after birth. He was born by Caesarean section at 38 weeks in viviparity. Weight at birth was 3650 g. Although a meningocele in the sacral region was found by MRI, there were no symptoms in particular and no other deformity was found. When he was 4 years old, he presented to our department with the perineal tumour. On examination, a slender scrotum-like tumour covering the centre of the perineal lesion, along with inflammation and ulceration around the skin of the anus, was observed. Both testes and scrotums were observed in front of the tumour (Figure 1a). An excision of the tumour and Z-plasty of the perineal lesion were performed. The subcutaneous tissue consisted of adipose tissue-like lipoma and was resected along with the tumour (Figure 1b). A Z-plasty was carefully performed in order to maintain the left-right symmetry of the", "title": "" }, { "docid": "af9c94a8d4dcf1122f70f5d0b90a247f", "text": "New cloud services are being developed to support a wide variety of real-life applications. In this paper, we introduce a new cloud service: industrial automation, which includes different functionalities from feedback control and telemetry to plant optimization and enterprise management. We focus our study on the feedback control layer as the most time-critical and demanding functionality. Today's large-scale industrial automation projects are expensive and time-consuming. Hence, we propose a new cloud-based automation architecture, and we analyze cost and time savings under the proposed architecture. We show that significant cost and time savings can be achieved, mainly due to the virtualization of controllers and the reduction of hardware cost and associated labor. However, the major difficulties in providing cloud-based industrial automation systems are timeliness and reliability. Offering automation functionalities from the cloud over the Internet puts the controlled processes at risk due to varying communication delays and potential failure of virtual machines and/or links. Thus, we design an adaptive delay compensator and a distributed fault tolerance algorithm to mitigate delays and failures, respectively. We theoretically analyze the performance of the proposed architecture when compared to the traditional systems and prove zero or negligible change in performance. To experimentally evaluate our approach, we implement our controllers on commercial clouds and use them to control: (i) a physical model of a solar power plant, where we show that the fault-tolerance algorithm effectively makes the system unaware of faults, and (ii) industry-standard emulation with large injected delays and disturbances, where we show that the proposed cloud-based controllers perform indistinguishably from the best-known counterparts: local controllers.", "title": "" }, { "docid": "7d0ebf939deed43253d5360e325c3e8e", "text": "Roughly speaking, clustering evolving networks aims at detecting structurally dense subgroups in networks that evolve over time. This implies that the subgroups we seek for also evolve, which results in many additional tasks compared to clustering static networks. We discuss these additional tasks and difficulties resulting thereof and present an overview on current approaches to solve these problems.
We focus on clustering approaches in online scenarios, i.e., approaches that incrementally use structural information from previous time steps in order to incorporate temporal smoothness or to achieve low running time. Moreover, we describe a collection of real world networks and generators for synthetic data that are often used for evaluation.", "title": "" }, { "docid": "53dc606897bd6388c729cc8138027b31", "text": "Abstract—This paper presents transient stability and power flow models of Thyristor Controlled Reactor (TCR) and Voltage Sourced Inverter (VSI) based Flexible AC Transmission System (FACTS) Controllers. Models of the Static VAr Compensator (SVC), the Thyristor Controlled Series Compensator (TCSC), the Static VAr Compensator (STATCOM), the Static Synchronous Source Series Compensator (SSSC), and the Unified Power Flow Controller (UPFC) appropriate for voltage and angle stability studies are discussed in detail. Validation procedures obtained for a test system with a detailed as well as a simplified UPFC model are also presented and briefly discussed.", "title": "" }, { "docid": "b1e4fb97e4b1d31e4064f174e50f17d3", "text": "We propose an inverse reinforcement learning (IRL) approach using Deep Q-Networks to extract the rewards in problems with large state spaces. We evaluate the performance of this approach in a simulation-based autonomous driving scenario. Our results resemble the intuitive relation between the reward function and readings of distance sensors mounted at different poses on the car. We also show that, after a few learning rounds, our simulated agent generates collision-free motions and performs human-like lane change behaviour.", "title": "" }, { "docid": "a48a88e3e6e35779392f5dea132d49f2", "text": "Community detection emerged as an important exploratory task in complex networks analysis across many scientific domains. Many methods have been proposed to solve this problem, each one with its own mechanism and sometimes with a different notion of community. In this article, we bring most common methods in the literature together in a comparative approach and reveal their performances in both real-world networks and synthetic networks. Surprisingly, many of those methods discovered better communities than the declared ground-truth communities in terms of some topological goodness features, even on benchmarking networks with built-in communities. We illustrate different structural characteristics that these methods could identify in order to support users to choose an appropriate method according to their specific requirements on different structural qualities.", "title": "" }, { "docid": "d0ec144c5239b532987157a64d499f61", "text": "(1) Disregard pseudo-queries that do not retrieve their pseudo-relevant document in the top n_rank.
(2) Select the top n_neg retrieved documents as negative training examples. General Approach: Generate mock interaction embeddings and filter training examples down to those that most nearly match a set of template query-document pairs (given a distance function). Since interaction embeddings are specific to what a model “sees,” interaction filters are model-specific.", "title": "" }, { "docid": "37482eea1f087101011ba48ac8923ecb", "text": "Routers classify packets to determine which flow they belong to, and to decide what service they should receive. Classification may, in general, be based on an arbitrary number of fields in the packet header. Performing classification quickly on an arbitrary number of fields is known to be difficult, and has poor worst-case performance. In this paper, we consider a number of classifiers taken from real networks. We find that the classifiers contain considerable structure and redundancy that can be exploited by the classification algorithm. In particular, we find that a simple multi-stage classification algorithm, called RFC (recursive flow classification), can classify 30 million packets per second in pipelined hardware, or one million packets per second in software.", "title": "" }, { "docid": "f1f424a703eefaabe8c704bd07e21a21", "text": "It is more convincing for users to have their own 3-D body shapes in the virtual fitting room when they shop clothes online. However, existing methods are limited for ordinary users to efficiently and conveniently access their 3-D bodies. We propose an efficient data-driven approach and develop an android application for 3-D body customization. Users stand naturally and their photos are taken from front and side views with a handy phone camera. They can wear casual clothes like a short-sleeved/long-sleeved shirt and short/long pants. First, we develop a user-friendly interface to semi-automatically segment the human body from photos. Then, the segmented human contours are scaled and translated to the ones under our virtual camera configurations. Through this way, we only need one camera to take photos of human in two views and do not need to calibrate the camera, which satisfy the convenience requirement. Finally, we learn body parameters that determine the 3-D body from dressed-human silhouettes with cascaded regressors. The regressors are trained using a database containing 3-D naked and dressed body pairs. Body parameters regression only costs 1.26 s on an android phone, which ensures the efficiency of our method. We invited 12 volunteers for tests, and the mean absolute estimation error for chest/waist/hip size is 2.89/1.93/2.22 centimeters. We additionally use 637 synthetic data to evaluate the main procedures of our approach.", "title": "" }, { "docid": "b9dfc489ff1bf6907929a450ea614d0b", "text": "Internet of things (IoT) is going to be ubiquitous in the next few years. In the smart city initiative, millions of sensors will be deployed for the implementation of IoT related services. Even in the normal cellular architecture, IoT will be deployed as a value added service for several new applications. Such massive deployment of IoT sensors and devices would certainly cost a large sum of money. In addition to the cost of deployment, the running costs or the operational expenditure of the IoT networks will incur huge power bills and spectrum license charges. As IoT is going to be a pervasive technology, its sustainability and environmental effects too are important.
Energy efficiency and overall resource optimization would make it the long term technology of the future. Therefore, green IoT is essential for the operators and the long term sustainability of IoT itself. In this article we consider the green initiatives being worked out for IoT. We also show that narrowband IoT as the greener version right now.", "title": "" }, { "docid": "3c5e3f2fe99cb8f5b26a880abfe388f8", "text": "Facial point detection is an active area in computer vision due to its relevance to many applications. It is a nontrivial task, since facial shapes vary significantly with facial expressions, poses or occlusion. In this paper, we address this problem by proposing a discriminative deep face shape model that is constructed based on an augmented factorized three-way Restricted Boltzmann Machines model. Specifically, the discriminative deep model combines the top-down information from the embedded face shape patterns and the bottom up measurements from local point detectors in a unified framework. In addition, along with the model, effective algorithms are proposed to perform model learning and to infer the true facial point locations from their measurements. Based on the discriminative deep face shape model, 68 facial points are detected on facial images in both controlled and “in-the-wild” conditions. Experiments on benchmark data sets show the effectiveness of the proposed facial point detection algorithm against state-of-the-art methods.", "title": "" }, { "docid": "0f2023682deaf2eb70c7becd8b3375dd", "text": "Generating answer with natural language sentence is very important in real-world question answering systems, which needs to obtain a right answer as well as a coherent natural response. In this paper, we propose an end-to-end question answering system called COREQA in sequence-to-sequence learning, which incorporates copying and retrieving mechanisms to generate natural answers within an encoder-decoder framework. Specifically, in COREQA, the semantic units (words, phrases and entities) in a natural answer are dynamically predicted from the vocabulary, copied from the given question and/or retrieved from the corresponding knowledge base jointly. Our empirical study on both synthetic and realworld datasets demonstrates the efficiency of COREQA, which is able to generate correct, coherent and natural answers for knowledge inquired questions.", "title": "" }, { "docid": "4653c085c5b91107b5eb637e45364943", "text": "Legged locomotion excels when terrains become too rough for wheeled systems or open-loop walking pattern generators to succeed, i.e., when accurate foot placement is of primary importance in successfully reaching the task goal. In this paper we address the scenario where the rough terrain is traversed with a static walking gait, and where for every foot placement of a leg, the location of the foot placement was selected irregularly by a planning algorithm. Our goal is to adjust a smooth walking pattern generator with the selection of every foot placement such that the COG of the robot follows a stable trajectory characterized by a stability margin relative to the current support triangle. We propose a novel parameterization of the COG trajectory based on the current position, velocity, and acceleration of the four legs of the robot. This COG trajectory has guaranteed continuous velocity and acceleration profiles, which leads to continuous velocity and acceleration profiles of the leg movement, which is ideally suited for advanced model-based controllers. 
Pitch, yaw, and ground clearance of the robot are easily adjusted automatically under any terrain situation. We evaluate our gait generation technique on the Little-Dog quadruped robot when traversing complex rocky and sloped terrains.", "title": "" }, { "docid": "8bda640f73c3941272739a57a5d02353", "text": "Researchers strive to understand eating behavior as a means to develop diets and interventions that can help people achieve and maintain a healthy weight, recover from eating disorders, or manage their diet and nutrition for personal wellness. A major challenge for eating-behavior research is to understand when, where, what, and how people eat. In this paper, we evaluate sensors and algorithms designed to detect eating activities, more specifically, when people eat. We compare two popular methods for eating recognition (based on acoustic and electromyography (EMG) sensors) individually and combined. We built a data-acquisition system using two off-the-shelf sensors and conducted a study with 20 participants. Our preliminary results show that the system we implemented can detect eating with an accuracy exceeding 90.9% while the crunchiness level of food varies. We are developing a wearable system that can capture, process, and classify sensor data to detect eating in real-time.", "title": "" }, { "docid": "23d26c14a9aa480b98bcaa633fc378e5", "text": "In this paper we present novel sensory feedbacks named ”King-Kong Effects” to enhance the sensation of walking in virtual environments. King Kong Effects are inspired by special effects in movies in which the incoming of a gigantic creature is suggested by adding visual vibrations/pulses to the camera at each of its steps. In this paper, we propose to add artificial visual or tactile vibrations (King-Kong Effects or KKE) at each footstep detected (or simulated) during the virtual walk of the user. The user can be seated, and our system proposes to use vibrotactile tiles located under his/her feet for tactile rendering, in addition to the visual display. We have designed different kinds of KKE based on vertical or lateral oscillations, physical or metaphorical patterns, and one or two peaks for heal-toe contacts simulation. We have conducted different experiments to evaluate the preferences of users navigating with or without the various KKE. Taken together, our results identify the best choices for future uses of visual and tactile KKE, and they suggest a preference for multisensory combinations. Our King-Kong effects could be used in a variety of VR applications targeting the immersion of a user walking in a 3D virtual scene.", "title": "" }, { "docid": "d0c8a1faccfa3f0469e6590cc26097c8", "text": "This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo. Our unsupervised learning approach relies on a new framework of cycle-consistent generative adversarial networks. Different from the image domain transfer problem, our style transfer problem involves two asymmetric functions: a forward function encodes example-based style transfer, whereas a backward function removes the style. We construct two coupled networks to implement these functions - one that transfers makeup style and a second that can remove makeup - such that the output of their successive application to an input photo will match the input. The learned style network can then quickly apply an arbitrary makeup style to an arbitrary photo. 
We demonstrate the effectiveness on a broad range of portraits and styles.", "title": "" }, { "docid": "2a0b81bbe867a5936dafc323d8563970", "text": "Social network analysis has gained significant attention in recent years, largely due to the success of online social networking and media-sharing sites, and the consequent availability of a wealth of social network data. In spite of the growing interest, however, there is little understanding of the potential business applications of mining social networks. While there is a large body of research on different problems and methods for social network mining, there is a gap between the techniques developed by the research community and their deployment in real-world applications. Therefore the potential business impact of these techniques is still largely unexplored.\n In this article we use a business process classification framework to put the research topics in a business context and provide an overview of what we consider key problems and techniques in social network analysis and mining from the perspective of business applications. In particular, we discuss data acquisition and preparation, trust, expertise, community structure, network dynamics, and information propagation. In each case we present a brief overview of the problem, describe state-of-the art approaches, discuss business application examples, and map each of the topics to a business process classification framework. In addition, we provide insights on prospective business applications, challenges, and future research directions. The main contribution of this article is to provide a state-of-the-art overview of current techniques while providing a critical perspective on business applications of social network analysis and mining.", "title": "" }, { "docid": "2faf7fedadfd8b24c4740f7100cf5fec", "text": "Lacking standardized extrinsic evaluation methods for vector representations of words, the NLP community has relied heavily onword similaritytasks as a proxy for intrinsic evaluation of word vectors. Word similarity evaluation, which correlates the distance between vectors and human judgments of “semantic similarity” is attractive, because it is computationally inexpensive and fast. In this paper we present several problems associated with the evaluation of word vectors on word similarity datasets, and summarize existing solutions. Our study suggests that the use of word similarity tasks for evaluation of word vectors is not sustainable and calls for further research on evaluation methods.", "title": "" } ]
scidocsrr
eb1a80981b9b86b523dda13cfc2d674d
Japanese Society for Cancer of the Colon and Rectum (JSCCR) Guidelines 2014 for treatment of colorectal cancer
[ { "docid": "b966af7f15e104865944ac44fad23afc", "text": "Five cases are described where minute foci of adenocarcinoma have been demonstrated in the mesorectum several centimetres distal to the apparent lower edge of a rectal cancer. In 2 of these there was no other evidence of lymphatic spread of the tumour. In orthodox anterior resection much of this tissue remains in the pelvis, and its is suggested that these foci might lead to suture-line or pelvic recurrence. Total excision of the mesorectum has, therefore, been carried out as a part of over 100 consecutive anterior resections. Fifty of these, which were classified as 'curative' or 'conceivably curative' operations, have now been followed for over 2 years with no pelvic or staple-line recurrence.", "title": "" }, { "docid": "bc4a72d96daf03f861b187fa73f57ff6", "text": "BACKGROUND\nShort-term preoperative radiotherapy and total mesorectal excision have each been shown to improve local control of disease in patients with resectable rectal cancer. We conducted a multicenter, randomized trial to determine whether the addition of preoperative radiotherapy increases the benefit of total mesorectal excision.\n\n\nMETHODS\nWe randomly assigned 1861 patients with resectable rectal cancer either to preoperative radiotherapy (5 Gy on each of five days) followed by total mesorectal excision (924 patients) or to total mesorectal excision alone (937 patients). The trial was conducted with the use of standardization and quality-control measures to ensure the consistency of the radiotherapy, surgery, and pathological techniques.\n\n\nRESULTS\nOf the 1861 patients randomly assigned to one of the two treatment groups, 1805 were eligible to participate. The overall rate of survival at two years among the eligible patients was 82.0 percent in the group assigned to both radiotherapy and surgery and 81.8 percent in the group assigned to surgery alone (P=0.84). Among the 1748 patients who underwent a macroscopically complete local resection, the rate of local recurrence at two years was 5.3 percent. The rate of local recurrence at two years was 2.4 percent in the radiotherapy-plus-surgery group and 8.2 percent in the surgery-only group (P<0.001).\n\n\nCONCLUSIONS\nShort-term preoperative radiotherapy reduces the risk of local recurrence in patients with rectal cancer who undergo a standardized total mesorectal excision.", "title": "" } ]
[ { "docid": "29c8c8abf86b2d7358a1cd70751f3f93", "text": "Data domain description concerns the characterization of a data set. A good description covers all target data but includes no superfluous space. The boundary of a dataset can be used to detect novel data or outliers. We will present the Support Vector Data Description (SVDD) which is inspired by the Support Vector Classifier. It obtains a spherically shaped boundary around a dataset and analogous to the Support Vector Classifier it can be made flexible by using other kernel functions. The method is made robust against outliers in the training set and is capable of tightening the description by using negative examples. We show characteristics of the Support Vector Data Descriptions using artificial and real data.", "title": "" }, { "docid": "c4183c8b08da8d502d84a650d804cac8", "text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>", "title": "" }, { "docid": "381d42fca0f242c10d115113c7a33c67", "text": "Abstract. We present a detailed workload characterization of a multi-tiered system that hosts an e-commerce site. Using the TPC-W workload and via experimental measurements, we illustrate how workload characteristics affect system behavior and operation, focusing on the statistical properties of dynamic page generation. This analysis allows to identify bottlenecks and the system conditions under which there is degradation in performance. Consistent with the literature, we find that the distribution of the dynamic page generation is heavy-tailed, which is caused by the interaction of the database server with the storage system. Furthermore, by examining the queuing behavior at the database server, we present experimental evidence of the existence of statistical correlation in the distribution of dynamic page generation times, especially under high load conditions. We couple this observation with the existence (and switching) of bottlenecks in the system.", "title": "" }, { "docid": "dcc10f93667d23ed3af321086114f261", "text": "Background: Silver nanoparticles (SNPs) are used extensively in areas such as medicine, catalysis, electronics, environmental science, and biotechnology. Therefore, facile synthesis of SNPs from an eco-friendly, inexpensive source is a prerequisite. In the present study, fabrication of SNPs from the leaf extract of Butea monosperma (Flame of Forest) has been performed. SNPs were synthesized from 1% leaf extract solution and characterized by ultraviolet-visible (UV-vis) spectroscopy and transmission electron microscopy (TEM). The mechanism of SNP formation was studied by Fourier transform infrared (FTIR), and anti-algal properties of SNPs on selected toxic cyanobacteria were evaluated. Results: TEM analysis indicated that size distribution of SNPs was under 5 to 30 nm. FTIR analysis indicated the role of amide I and II linkages present in protein in the reduction of silver ions. SNPs showed potent anti-algal properties on two cyanobacteria, namely, Anabaena spp. 
and Cylindrospermum spp. At a concentration of 800 μg/ml of SNPs, maximum anti-algal activity was observed in both cyanobacteria. Conclusions: This study clearly demonstrates that small-sized, stable SNPs can be synthesized from the leaf extract of B. monosperma. SNPs can be effectively employed for removal of toxic cyanobacteria.", "title": "" }, { "docid": "9d33565dbd5148730094a165bb2e968f", "text": "The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm2 of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.", "title": "" }, { "docid": "ba2cc10384c8be27ca0251c574998a1b", "text": "As the extension of Distributed Denial-of-Service (DDoS) attacks to application layer in recent years, researchers pay much interest in these new variants due to a low-volume and intermittent pattern with a higher level of stealthiness, invaliding the state-of-the-art DDoS detection/defense mechanisms. We describe a new type of low-volume application layer DDoS attack--Tail Attacks on Web Applications. Such attack exploits a newly identified system vulnerability of n-tier web applications (millibottlenecks with sub-second duration and resource contention with strong dependencies among distributed nodes) with the goal of causing the long-tail latency problem of the target web application (e.g., 95th percentile response time > 1 second) and damaging the long-term business of the service provider, while all the system resources are far from saturation, making it difficult to trace the cause of performance degradation.\n We present a modified queueing network model to analyze the impact of our attacks in n-tier architecture systems, and numerically solve the optimal attack parameters. We adopt a feedback control-theoretic (e.g., Kalman filter) framework that allows attackers to fit the dynamics of background requests or system state by dynamically adjusting attack parameters. To evaluate the practicality of such attacks, we conduct extensive validation through not only analytical, numerical, and simulation results but also real cloud production setting experiments via a representative benchmark website equipped with state-of-the-art DDoS defense tools. 
We further propose a solution to detect and defend against the proposed attacks, involving three stages: fine-grained monitoring, identifying bursts, and blocking bots.", "title": "" }, { "docid": "bf7b3cdb178fd1969257f56c0770b30b", "text": "Relation Extraction is an important subtask of Information Extraction which has the potential of employing deep learning (DL) models with the creation of large datasets using distant supervision. In this review, we compare the contributions and pitfalls of the various DL models that have been used for the task, to help guide the path ahead.", "title": "" }, { "docid": "e50d156bde3479c27119231073705f70", "text": "The economic theory of the consumer is a combination of positive and normative theories. Since it is based on a rational maximizing model it describes how consumers should choose, but it is alleged to also describe how they do choose. This paper argues that in certain well-defined situations many consumers act in a manner that is inconsistent with economic theory. In these situations economic theory will make systematic errors in predicting behavior. Kahneman and Tversky's prospect theory is proposed as the basis for an alternative descriptive theory. Topics discussed are: underweighting of opportunity costs, failure to ignore sunk costs, search behavior, choosing not to choose and regret, and precommitment and self-control.", "title": "" }, { "docid": "112f7444f0881bf940d056a96c6f5ee3", "text": "This paper describes our approach on “Information Extraction from Microblogs Posted during Disasters” as an attempt in the shared task of the Microblog Track at Forum for Information Retrieval Evaluation (FIRE) 2016 [2]. Our method uses vector space word embeddings to extract information from microblogs (tweets) related to disaster scenarios, and can be replicated across various domains. The system, which shows encouraging performance, was evaluated on the Twitter dataset provided by the FIRE 2016 shared task. CCS Concepts •Computing methodologies→Natural language processing; Information extraction;", "title": "" }, { "docid": "a9242c3fca5a8ffdf0e03776b8165074", "text": "This paper presents inexpensive computer vision techniques allowing to measure the texture characteristics of woven fabric, such as weave repeat and yarn counts, and the surface roughness. First, we discuss the automatic recognition of weave pattern and the accurate measurement of yarn counts by analyzing fabric sample images. We propose a surface roughness indicator FDFFT, which is the 3-D surface fractal dimension measurement calculated from the 2-D fast Fourier transform of high-resolution 3-D surface scan. The proposed weave pattern recognition method was validated by using computer-simulated woven samples and real woven fabric images. All weave patterns of the tested fabric samples were successfully recognized, and computed yarn counts were consistent with the manual counts. The rotation invariance and scale invariance of FDFFT were validated with fractal Brownian images. Moreover, to evaluate the correctness of FDFFT, we provide a method of calculating standard roughness parameters from the 3-D fabric surface. According to the test results, we demonstrated that FDFFT is a fast and reliable parameter for fabric roughness measurement based on 3-D surface data.", "title": "" }, { "docid": "237a88ea092d56c6511bb84604e6a7c7", "text": "A simple, low-cost, and compact printed dual-band fork-shaped monopole antenna for Bluetooth and ultrawideband (UWB) applications is proposed. 
Dual-band operation covering 2.4-2.484 GHz (Bluetooth) and 3.1-10.6 GHz (UWB) frequency bands are obtained by using a fork-shaped radiating patch and a rectangular ground patch. The proposed antenna is fed by a 50-Ω microstrip line and fabricated on a low-cost FR4 substrate having dimensions 42 (<i>L</i><sub>sub</sub>) × 24 (<i>W</i><sub>sub</sub>) × 1.6 (<i>H</i>) mm<sup>3</sup>. The antenna structure is fabricated and tested. Measured <i>S</i><sub>11</sub> is ≤ -10 dB over 2.3-2.5 and 3.1-12 GHz. The antenna shows acceptable gain flatness with nearly omnidirectional radiation patterns over both Bluetooth and UWB bands.", "title": "" }, { "docid": "5350ffea7a4187f0df11fd71562aba43", "text": "The presence of buried landmines is a serious threat in many areas around the World. Despite various techniques have been proposed in the literature to detect and recognize buried objects, automatic and easy to use systems providing accurate performance are still under research. Given the incredible results achieved by deep learning in many detection tasks, in this paper we propose a pipeline for buried landmine detection based on convolutional neural networks (CNNs) applied to ground-penetrating radar (GPR) images. The proposed algorithm is capable of recognizing whether a B-scan profile obtained from GPR acquisitions contains traces of buried mines. Validation of the presented system is carried out on real GPR acquisitions, albeit system training can be performed simply relying on synthetically generated data. Results show that it is possible to reach 95% of detection accuracy without training in real acquisition of landmine profiles.", "title": "" }, { "docid": "7d9162b079a155f48688a1d70af5482a", "text": "Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. However, as intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbances, 590 nm over 450 nm, is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantitation down to 50 ng of bovine serum albumin. Furthermore, protein assay in presence of up to 35-fold weight excess of sodium dodecyl sulfate (detergent) over bovine serum albumin (protein) can be performed. A linear equation that perfectly fits the experimental data is provided on the basis of mass action and Beer's law.", "title": "" }, { "docid": "867c8c0286c0fed4779f550f7483770d", "text": "Numerous studies report that standard volatility models have low explanatory power, leading some researchers to question whether these models have economic value. We examine this question by using conditional mean-variance analysis to assess the value of volatility timing to short-horizon investors. We nd that the volatility timing strategies outperform the unconditionally e cient static portfolios that have the same target expected return and volatility. This nding is robust to estimation risk and transaction costs.", "title": "" }, { "docid": "348c62670a729da42654f0cf685bba53", "text": "The networks of intelligent building are usually consist of a great number of smart devices. 
Since many smart devices only support on-site configuration and upgrade, and communication between devices could be observed and even altered by attackers, efficiency and security are two key concerns in maintaining and managing the devices used in intelligent building networks. In this paper, the authors apply the technology of software defined networking to satisfy the requirement for efficiency in intelligent building networks. More specific, a protocol stack in smart devices that support OpenFlow is designed. In addition, the authors designed the lightweight security mechanism with two foundation protocols and a full protocol that uses the foundation protocols as example. Performance and session key establishment for the security mechanism are also discussed.", "title": "" }, { "docid": "1a99b71b6c3c33d97c235a4d72013034", "text": "Crowdfunding systems are social media websites that allow people to donate small amounts of money that add up to fund valuable larger projects. These websites are structured around projects: finite campaigns with welldefined goals, end dates, and completion criteria. We use a dataset from an existing crowdfunding website — the school charity Donors Choose — to understand the value of completing projects. We find that completing a project is an important act that leads to larger donations (over twice as large), greater likelihood of returning to donate again, and few projects that expire close but not complete. A conservative estimate suggests that this completion bias led to over $15 million in increased donations to Donors Choose, representing approximately 16% of the total donations for the period under study. This bias suggests that structuring many types of collaborative work as a series of projects might increase contribution significantly. Many social media creators find it rather difficult to motivate users to actively participate and contribute their time, energy, or money to make a site valuable to others. The value in social media largely derives from interactions between and among people who are working together to achieve common goals. To encourage people to participate and contribute, social media creators regularly look for different ways of structuring participation. Some use a blog-type format, such as Facebook, Twitter, or Tumblr. Some use a collaborative document format like Wikipedia. And some use a project-based format. A project is a well-defined set of tasks that needs to be accomplished. Projects usually have a well-defined end goal — something that needs to be accomplished for the project to be considered a success — and an end date — a day by which the project needs to be completed. Much work in society is structured around projects; for example, Hollywood makes movies by organizing each movie’s production as a project, hiring a new crew for each movie. Construction companies organize their work as a sequence of projects. And projects are common in knowledge-work based businesses (?). Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Another important place we see project-based organization is in crowdfunding websites. Crowdfunding is a relatively new phenomenon that merges modern social web technologies with project-based fundraising. It is a new form of social media that publicizes projects that need money, and allows the crowd to each make a small contribution toward the larger project. 
By aggregating many small donations, crowdfunding websites can fund large and interesting projects of all kinds. Kickstarter, for example, has raised over $400 million for over 35,000 creative projects, and Donors Choose has raised over $90 million for over 200,000 classroom projects. Additionally, crowdfunding websites represent potential new business models for a number of industries, including some struggling to find viable revenue streams: Sellaband has proven successful in helping musicians fund the creation and distribution of their music; and Spot.Us enables journalists to fund and publish investigative news. In this paper, I seek to understand why crowdfunding systems that are organized around projects are successful. Using a dataset from Donors Choose, a crowdfunding charity that funds classroom projects for K–12 school teachers, I find that completing a project is a powerful motivator that helps projects succeed in the presence of a crowd: donations that complete a project are over twice as large as normal donations. People who make these donations are more likely to return and donate in the future, and their future donations are larger. And few projects get close to completion but fail. Together, these results suggest that completing the funding for a project is an important act for the crowd, and structuring the fundraising around completable projects helps enable success. This also has implications for other types of collaborative technologies. Background and Related Ideas", "title": "" }, { "docid": "26052ad31f5ccf55398d6fe3b9850674", "text": "An electroneurographic study performed on the peripheral nerves of 25 patients with severe cirrhosis following viral hepatitis showed slight slowing (P > 0.05) of motor conduction velocity (CV) and significant diminution (P < 0.001) of sensory CV and mixed sensorimotor-evoked potentials, associated with a significant decrease in the amplitude of sensory evoked potentials. The slowing was about equal in the distal (digital) and in the proximal segments of the same nerve. A mixed axonal degeneration and segmental demyelination is presumed to explain these findings. The CV measurements proved helpful for an early diagnosis of hepatic polyneuropathy showing subjective symptoms in the subclinical stage. Electroneurographic examinations of the peripheral nerves in 25 patients with post-viral liver cirrhosis showed the following: a slight reduction (P > 0.05) of motor conduction velocity (CV) and a significantly slowed CV in sensory fibres (P < 0.001), in both proximal and distal fibres. In the mixed evoked potentials, a slowing of the CV was found that lay between the values of the motor and sensory fibres. At the same time, a reduction in the amplitude of the nerve action potential (NAP) was observed. These findings point to axonal degeneration and demyelination in most of the peripheral nerves examined. 
Electroneurographic examinations made it possible to assess the functional state of the peripheral nerve and to detect certain changes already in the initial stage of the disease, when the patient does not yet show any clinical signs of a peripheral neuropathy.", "title": "" }, { "docid": "709aa1bc4ace514e46f7edbb07fb03a9", "text": "Empirical scoring functions based on either molecular force fields or cheminformatics descriptors are widely used, in conjunction with molecular docking, during the early stages of drug discovery to predict potency and binding affinity of a drug-like molecule to a given target. These models require expert-level knowledge of physical chemistry and biology to be encoded as hand-tuned parameters or features rather than allowing the underlying model to select features in a data-driven procedure. Here, we develop a general 3-dimensional spatial convolution operation for learning atomic-level chemical interactions directly from atomic coordinates and demonstrate its application to structure-based bioactivity prediction. The atomic convolutional neural network is trained to predict the experimentally determined binding affinity of a protein-ligand complex by direct calculation of the energy associated with the complex, protein, and ligand given the crystal structure of the binding pose. Non-covalent interactions present in the complex that are absent in the protein-ligand sub-structures are identified and the model learns the interaction strength associated with these features. We test our model by predicting the binding free energy of a subset of protein-ligand complexes found in the PDBBind dataset and compare with state-of-the-art cheminformatics and machine learning-based approaches. We find that all methods achieve experimental accuracy (less than 1 kcal/mol mean absolute error) and that atomic convolutional networks either outperform or perform competitively with the cheminformatics based methods. Unlike all previous protein-ligand prediction systems, atomic convolutional networks are end-to-end and fully-differentiable. They represent a new data-driven, physics-based deep learning model paradigm that offers a strong foundation for future improvements in structure-based bioactivity prediction.", "title": "" }, { "docid": "8eb0f822b4e8288a6b78abf0bf3aecbb", "text": "Cloud computing enables access to the widespread services and resources in cloud datacenters for mitigating resource limitations in low-potential client devices. Computational cloud is an attractive platform for computational offloading due to the attributes of scalability and availability of resources. Therefore, mobile cloud computing (MCC) leverages the application processing services of computational clouds for enabling computational-intensive and ubiquitous mobile applications on smart mobile devices (SMDs). Computational offloading frameworks focus on offloading intensive mobile applications at different granularity levels which involve resource-intensive mechanism of application profiling and partitioning at runtime. As a result, the energy consumption cost (ECC) and turnaround time of the application is increased. This paper proposes an active service migration (ASM) framework for computational offloading to cloud datacenters, which employs lightweight procedure for the deployment of runtime distributed platform. The proposed framework employs coarse granularity level and simple developmental and deployment procedures for computational offloading in MCC. 
ASM is evaluated by benchmarking prototype application on the Android devices in the real MCC environment. It is found that the turnaround time of the application reduces up to 45 % and ECC of the application reduces up to 33 % in ASM-based computational offloading as compared to traditional offloading techniques which shows the lightweight nature of the proposed framework for computational offloading.", "title": "" }, { "docid": "9e6bfc7b5cc87f687a699c62da013083", "text": "In order to establish low-cost and strongly-immersive desktop virtual experiment system, a solution based on Kinect and Unity3D engine technology was herein proposed, with a view to applying Kinect gesture recognition and triggering more spontaneous human-computer interactions in three-dimensional virtual environment. A kind of algorithm tailored to the detection of concave-convex points of fingers is put forward to identify various gestures and interaction semantics. In the context of Unity3D, Finite-State Machine (FSM) programming was applied in intelligent management for experimental logic tasks. A “Virtual Experiment System for Electrician Training” was designed and put into practice by these methods. The applications of “Lighting Circuit” module prove that these methods can be satisfyingly helpful to complete virtual experimental tasks and improve user experience. Compared with traditional WIMP interaction, Kinect somatosensory interaction is combined with Unity3D so that three-dimensional virtual system with strong immersion can be established.", "title": "" } ]
scidocsrr
3a679b1cf471a4c3223668d27ae4f340
Understanding the requirements for developing open source software systems
[ { "docid": "c63d32013627d0bcea22e1ad62419e62", "text": "According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine the development process of a major open source application, the Apache web server. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution interval for this OSS project. This analysis reveals a unique process, which performs well on important measures. We conclude that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.", "title": "" } ]
[ { "docid": "f944f5e334a127cd50ab3ec0d3c2b603", "text": "First-order methods play a central role in large-scale machine learning. Even though many variations exist, each suited to a particular problem, almost all such methods fundamentally rely on two types of algorithmic steps: gradient descent, which yields primal progress, and mirror descent, which yields dual progress. We observe that the performances of gradient and mirror descent are complementary, so that faster algorithms can be designed by linearly coupling the two. We show how to reconstruct Nesterov’s accelerated gradient methods using linear coupling, which gives a cleaner interpretation than Nesterov’s original proofs. We also discuss the power of linear coupling by extending it to many other settings that Nesterov’s methods cannot apply to. 1998 ACM Subject Classification G.1.6 Optimization, F.2 Analysis of Algorithms and Problem Complexity", "title": "" }, { "docid": "2ddf013dc4e0fc5e35823e0485777066", "text": "The aim of this work is to design a SLAM algorithm for localization and mapping of aerial platform for ocean observation. The aim is to determine the direction of travel, given that the aerial platform flies over the water surface and in an environment with few static features and dynamic background. This approach is inspired by the bird techniques which use landmarks as navigation direction. In this case, the blimp is chosen as the platform, therefore the payload is the most important concern in the design so that the desired lift can be achieved. The results show the improved SLAM is were able to achieve the desired waypoint.", "title": "" }, { "docid": "934532bd18f37112c7362db0fffa89a0", "text": "Combination therapies exploit the chances for better efficacy, decreased toxicity, and reduced development of drug resistance and owing to these advantages, have become a standard for the treatment of several diseases and continue to represent a promising approach in indications of unmet medical need. In this context, studying the effects of a combination of drugs in order to provide evidence of a significant superiority compared to the single agents is of particular interest. Research in this field has resulted in a large number of papers and revealed several issues. Here, we propose an overview of the current methodological landscape concerning the study of combination effects. First, we aim to provide the minimal set of mathematical and pharmacological concepts necessary to understand the most commonly used approaches, divided into effect-based approaches and dose-effect-based approaches, and introduced in light of their respective practical advantages and limitations. Then, we discuss six main common methodological issues that scientists have to face at each step of the development of new combination therapies. In particular, in the absence of a reference methodology suitable for all biomedical situations, the analysis of drug combinations should benefit from a collective, appropriate, and rigorous application of the concepts and methods reviewed here.", "title": "" }, { "docid": "fd45363f75f9206aa13e139d784e5d52", "text": "Multivariate pattern recognition methods are increasingly being used to identify multiregional brain activity patterns that collectively discriminate one cognitive condition or experimental group from another, using fMRI data. 
The performance of these methods is often limited because the number of regions considered in the analysis of fMRI data is large compared to the number of observations (trials or participants). Existing methods that aim to tackle this dimensionality problem are less than optimal because they either over-fit the data or are computationally intractable. Here, we describe a novel method based on logistic regression using a combination of L1 and L2 norm regularization that more accurately estimates discriminative brain regions across multiple conditions or groups. The L1 norm, computed using a fast estimation procedure, ensures a fast, sparse and generalizable solution; the L2 norm ensures that correlated brain regions are included in the resulting solution, a critical aspect of fMRI data analysis often overlooked by existing methods. We first evaluate the performance of our method on simulated data and then examine its effectiveness in discriminating between well-matched music and speech stimuli. We also compared our procedures with other methods which use either L1-norm regularization alone or support vector machine-based feature elimination. On simulated data, our methods performed significantly better than existing methods across a wide range of contrast-to-noise ratios and feature prevalence rates. On experimental fMRI data, our methods were more effective in selectively isolating a distributed fronto-temporal network that distinguished between brain regions known to be involved in speech and music processing. These findings suggest that our method is not only computationally efficient, but it also achieves the twin objectives of identifying relevant discriminative brain regions and accurately classifying fMRI data.", "title": "" }, { "docid": "3380a9a220e553d9f7358739e3f28264", "text": "We present a multi-instance object segmentation algorithm to tackle occlusions. As an object is split into two parts by an occluder, it is nearly impossible to group the two separate regions into an instance by purely bottomup schemes. To address this problem, we propose to incorporate top-down category specific reasoning and shape prediction through exemplars into an intuitive energy minimization framework. We perform extensive evaluations of our method on the challenging PASCAL VOC 2012 segmentation set. The proposed algorithm achieves favorable results on the joint detection and segmentation task against the state-of-the-art method both quantitatively and qualitatively.", "title": "" }, { "docid": "c4062390a6598f4e9407d29e52c1a3ed", "text": "We have conducted a comprehensive search for conserved elements in vertebrate genomes, using genome-wide multiple alignments of five vertebrate species (human, mouse, rat, chicken, and Fugu rubripes). Parallel searches have been performed with multiple alignments of four insect species (three species of Drosophila and Anopheles gambiae), two species of Caenorhabditis, and seven species of Saccharomyces. Conserved elements were identified with a computer program called phastCons, which is based on a two-state phylogenetic hidden Markov model (phylo-HMM). PhastCons works by fitting a phylo-HMM to the data by maximum likelihood, subject to constraints designed to calibrate the model across species groups, and then predicting conserved elements based on this model. 
The predicted elements cover roughly 3%-8% of the human genome (depending on the details of the calibration procedure) and substantially higher fractions of the more compact Drosophila melanogaster (37%-53%), Caenorhabditis elegans (18%-37%), and Saccharaomyces cerevisiae (47%-68%) genomes. From yeasts to vertebrates, in order of increasing genome size and general biological complexity, increasing fractions of conserved bases are found to lie outside of the exons of known protein-coding genes. In all groups, the most highly conserved elements (HCEs), by log-odds score, are hundreds or thousands of bases long. These elements share certain properties with ultraconserved elements, but they tend to be longer and less perfectly conserved, and they overlap genes of somewhat different functional categories. In vertebrates, HCEs are associated with the 3' UTRs of regulatory genes, stable gene deserts, and megabase-sized regions rich in moderately conserved noncoding sequences. Noncoding HCEs also show strong statistical evidence of an enrichment for RNA secondary structure.", "title": "" }, { "docid": "fc509e8f8c0076ad80df5ff6ee6b6f1e", "text": "The purposes of this study are to construct an instrument to evaluate service quality of mobile value-added services and have a further discussion of the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention. Structural equation modeling and multiple regression analysis were used to analyze the data collected from college and graduate students of fifteen major universities in Taiwan. The main findings are as follows: (1) service quality positively influences both perceived value and customer satisfaction; (2) perceived value positively influences on both customer satisfaction and post-purchase intention; (3) customer satisfaction positively influences post-purchase intention; (4) service quality has an indirect positive influence on post-purchase intention through customer satisfaction or perceived value; (5) among the dimensions of service quality, “customer service and system reliability” is most influential on perceived value and customer satisfaction, and the influence of “content quality” ranks second; (6) the proposed model is proven with the effectiveness in explaining the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention in mobile added-value services.", "title": "" }, { "docid": "60664c058868f08a67d14172d87a4756", "text": "The design of legged robots is often inspired by animals evolved to excel at different tasks. However, while mimicking morphological features seen in nature can be very powerful, robots may need to perform motor tasks that their living counterparts do not. In the absence of designs that can be mimicked, an alternative is to resort to mathematical models that allow the relationship between a robot's form and function to be explored. In this paper, we propose such a model to co-design the motion and leg configurations of a robot such that a measure of performance is optimized. The framework begins by planning trajectories for a simplified model consisting of the center of mass and feet. The framework then optimizes the length of each leg link while solving for associated full-body motions. Our model was successfully used to find optimized designs for legged robots performing tasks that include jumping, walking, and climbing up a step. 
Although our results are preliminary and our analysis makes a number of simplifying assumptions, our findings indicate that the cost function, the sum of squared joint torques over the duration of a task, varies substantially as the design parameters change.", "title": "" }, { "docid": "98df4ff146fe0067c87a3b5514ea0934", "text": "The problem of domain generalization is to learn from multiple training domains, and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics, yet sparse data for training. For example recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focusing on alleviating the dataset bias where both problems of domain distinctiveness and data sparsity can be minimal. We argue that these benchmarks are overly straightforward, and show that simple deep learning baselines perform surprisingly well on them. In this paper, we make two main contributions: Firstly, we build upon the favorable domain shift-robust properties of deep learning methods, and develop a low-rank parameterized CNN model for end-to-end DG learning. Secondly, we develop a DG benchmark dataset covering photo, sketch, cartoon and painting domains. This is both more practically relevant, and harder (bigger domain shift) than existing benchmarks. The results show that our method outperforms existing DG alternatives, and our dataset provides a more significant DG challenge to drive future research.", "title": "" }, { "docid": "9afc0411331ac43bc54df639760813af", "text": "Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.", "title": "" }, { "docid": "cbfffcdb150143ccacaf3700aadea59e", "text": "Recurrent Neural Networks (RNNs), and specifically a variant with Long ShortTerm Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. 
Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide analysis of the remaining errors and suggests areas for further study.", "title": "" }, { "docid": "6f05e76961d4ef5fc173bafd5578081f", "text": "Edmodo is simply a controlled online networking application that can be used by teachers and students to communicate and remain connected. This paper explores the experiences from a group of students who were using Edmodo platform in their course work. It attempts to use the SAMR (Substitution, Augmentation, Modification and Redefinition) framework of technology integration in education to access and evaluate technology use in the classroom. The respondents were a group of 62 university students from a Kenyan University whose lecturer had created an Edmodo account and introduced the students to participate in their course work during the September to December 2015 semester. More than 82% of the students found that they had a personal stake in the quality of work presented through the platforms and that they were able to take on different subtopics and collaborate to create one final product. This underscores the importance of Edmodo as an environment with skills already in the hands of the students that we can use to integrate technology in the classroom.", "title": "" }, { "docid": "e4e0e01b3af99dfd88ff03a1057b40d3", "text": "There is a tension between user and author control of narratives in multimedia systems and virtual environments. Reducing the interactivity gives the author more control over when and how users experience key events in a narrative, but may lead to less immersion and engagement. Allowing the user to freely explore the virtual space introduces the risk that important narrative events will never be experienced. One approach to striking a balance between user freedom and author control is adaptation of narrative event presentation (i.e. changing the time, location, or method of presentation of a particular event in order to better communicate with the user). In this paper, we describe the architecture of a system capable of dynamically supporting narrative event adaptation. We also report results from two studies comparing adapted narrative presentation with two other forms of unadapted presentation - events with author selected views (movie), and events with user selected views (traditional VE). An analysis of user performance and feedback offers support for the hypothesis that adaptation can improve comprehension of narrative events in virtual environments while maintaining a sense of user control.", "title": "" }, { "docid": "7bfd3237b1a4c3c651b4c5389019f190", "text": "Recent developments in web technologies including evolution of web standards, improvements in browser performance, and the emergence of free and open-source software (FOSS) libraries are driving a general shift from server-side to client-side web applications where a greater share of the computational load is transferred to the browser. Modern client-side approaches allow for improved user interfaces that rival traditional desktop software, as well as the ability to perform simulations and visualizations within the browser. We demonstrate the use of client-side technologies to create an interactive web application for a simulation model of biochemical oxygen demand and dissolved oxygen in rivers called the Webbased Interactive River Model (WIRM). 
We discuss the benefits, limitations and potential uses of client-side web applications, and provide suggestions for future research using new and upcoming web technologies such as offline access and local data storage to create more advanced client-side web applications for environmental simulation modeling. 2014 Elsevier Ltd. All rights reserved. Software availability Product Title: Web-based Interactive River Model (WIRM) Developer: Jeffrey D. Walker Contact Address: Dept. of Civil and Environmental Engineering, Tufts University, 200 College Ave, Medford, MA 02155 Contact E-mail: [email protected] Available Since: 2013 Programming Language: JavaScript, Python Availability: http://wirm.walkerjeff.com/ Cost: Free", "title": "" }, { "docid": "1616d9fb3fb2b2a3c97f0bf1d36d8b79", "text": "Platt’s probabilistic outputs for Support Vector Machines (Platt, J. in Smola, A., et al. (eds.) Advances in large margin classifiers. Cambridge, 2000) has been popular for applications that require posterior class probabilities. In this note, we propose an improved algorithm that theoretically converges and avoids numerical difficulties. A simple and ready-to-use pseudo code is included.", "title": "" }, { "docid": "bcdb0e6dcbab8fcccfea15edad00a761", "text": "This article presents the 1:4 wideband balun based on transmission lines that was awarded the first prize in the Wideband Baluns Student Design Competition. The competition was held during the 2014 IEEE Microwave Theory and Techniques Society (MTT-S) International Microwave Symposium (IMS2014). It was initiated in 2011 and is sponsored by the MTT-17 Technical Coordinating Committee. The winner must implement and measure a wideband balun of his or her own design and achieve the highest possible operational frequency from at least 1 MHz (or below) while meeting the following conditions: ? female subminiature version A (SMA) connectors are used to terminate all ports ? a minimum impedance transformation ratio of two ? a maximum voltage standing wave ratio (VSWR) of 2:1 at all ports ? an insertion loss of less than 1 dB ? a common-mode rejection ratio (CMRR) of more than 25 dB ? imbalance of less than 1 dB and 2.5?.", "title": "" }, { "docid": "aad2d6385cb8c698a521caea00fe56d2", "text": "With respect to the \" influence on the development and practice of science and engineering in the 20th century \" , Krylov space methods are considered as one of the ten most important classes of numerical methods [1]. Large sparse linear systems of equations or large sparse matrix eigenvalue problems appear in most applications of scientific computing. Sparsity means that most elements of the matrix involved are zero. In particular, discretization of PDEs with the finite element method (FEM) or with the finite difference method (FDM) leads to such problems. In case the original problem is nonlinear, linearization by Newton's method or a Newton-type method leads again to a linear problem. We will treat here systems of equations only, but many of the numerical methods for large eigenvalue problems are based on similar ideas as the related solvers for equations. Sparse linear systems of equations can be solved by either so-called sparse direct solvers, which are clever variations of Gauss elimination, or by iterative methods. 
In the last thirty years, sparse direct solvers have been tuned to perfection: on the one hand by finding strategies for permuting equations and unknowns to guarantee a stable LU decomposition and small fill-in in the triangular factors, and on the other hand by organizing the computation so that optimal use is made of the hardware, which nowadays often consists of parallel computers whose architecture favors block operations with data that are locally stored or cached. The iterative methods that are today applied for solving large-scale linear systems are mostly preconditioned Krylov (sub)space solvers. Classical methods that do not belong to this class, like the successive overrelaxation (SOR) method, are no longer competitive. However, some of the classical matrix splittings, e.g. the one of SSOR (the symmetric version of SOR), are still used for preconditioning. Multigrid is in theory a very effective iterative method, but normally it is now applied as an inner iteration with a Krylov space solver as outer iteration; then, it can also be considered as a preconditioner. In the past, Krylov space solvers were referred to also by other names such as semi-iterative methods and polynomial acceleration methods. Some", "title": "" }, { "docid": "5392e45840929b05b549a64a250774e5", "text": "Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure. During the E-step, we update the segmentation and in the M-step the face model parameters are updated. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling. Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods.", "title": "" }, { "docid": "1e80f38e3ccc1047f7ee7c2b458c0beb", "text": "This thesis presents an approach to robot arm control exploiting natural dynamics. The approach consists of using a compliant arm whose joints are controlled with simple non-linear oscillators. The arm has special actuators which makes it robust to collisions and gives it a smooth compliant, motion. The oscillators produce rhythmic commands of the joints of the arm, and feedback of the joint motions is used to modify the oscillator behavior. The oscillators enable the resonant properties of the arm to be exploited to perform a variety of rhythmic and discrete tasks. 
These tasks include tuning into the resonant frequencies of the arm itself, juggling, turning cranks, playing with a Slinky toy, sawing wood, throwing balls, hammering nails and drumming. For most of these tasks, the controllers at each joint are completely independent, being coupled by mechanical coupling through the physical arm of the robot. The thesis shows that this mechanical coupling allows the oscillators to automatically adjust their commands to be appropriate for the arm dynamics and the task. This coordination is robust to large changes in the oscillator parameters, and large changes in the dynamic properties of the arm. As well as providing a wealth of experimental data to support this approach, the thesis also provides a range of analysis tools, both approximate and exact. These can be used to understand and predict the behavior of current implementations, and design new ones. These analysis techniques improve the value of oscillator solutions. The results in the thesis suggest that the general approach of exploiting natural dynamics is a powerful method for obtaining coordinated dynamic behavior of robot arms. Thesis Supervisor: Rodney A. Brooks Title: Professor of Electrical Engineering and Computer Science, MIT 5.4. CASE (C): MODIFYING THE NATURAL DYNAMICS 95", "title": "" }, { "docid": "a987f009509e9c4f5c29b27275713eac", "text": "PURPOSE\nThis article provides a critical overview of problem-based learning (PBL), its effectiveness for knowledge acquisition and clinical performance, and the underlying educational theory. The focus of the paper is on (1) the credibility of claims (both empirical and theoretical) about the ties between PBL and educational outcomes and (2) the magnitude of the effects.\n\n\nMETHOD\nThe author reviewed the medical education literature, starting with three reviews published in 1993 and moving on to research published from 1992 through 1998 in the primary sources for research in medical education. For each study the author wrote a summary, which included study design, outcome measures, effect sizes, and any other information relevant to the research conclusion.\n\n\nRESULTS AND CONCLUSION\nThe review of the literature revealed no convincing evidence that PBL improves knowledge base and clinical performance, at least not of the magnitude that would be expected given the resources required for a PBL curriculum. The results were considered in light of the educational theory that underlies PBL and its basic research. The author concludes that the ties between educational theory and research (both basic and applied) are loose at best.", "title": "" } ]
scidocsrr
6cf17f7076502c1c982b5c3f6ae43bd3
Gaussian Processes for Rumour Stance Classification in Social Media
[ { "docid": "9ae491c47c20a746eb13f3370217a8fa", "text": "The open structure of online social networks and their uncurated nature give rise to problems of user credibility and influence. In this paper, we address the task of predicting the impact of Twitter users based only on features under their direct control, such as usage statistics and the text posted in their tweets. We approach the problem as regression and apply linear as well as nonlinear learning methods to predict a user impact score, estimated by combining the numbers of the user’s followers, followees and listings. The experimental results point out that a strong prediction performance is achieved, especially for models based on the Gaussian Processes framework. Hence, we can interpret various modelling components, transforming them into indirect ‘suggestions’ for impact boosting.", "title": "" } ]
[ { "docid": "fe2b8921623f3bcf7b8789853b45e912", "text": "OBJECTIVE\nTo establish the psychosexual outcome of gender-dysphoric children at 16 years or older and to examine childhood characteristics related to psychosexual outcome.\n\n\nMETHOD\nWe studied 77 children who had been referred in childhood to our clinic because of gender dysphoria (59 boys, 18 girls; mean age 8.4 years, age range 5-12 years). In childhood, we measured the children's cross-gender identification and discomfort with their own sex and gender roles. At follow-up 10.4 +/- 3.4 years later, 54 children (mean age 18.9 years, age range 16-28 years) agreed to participate. In this group, we assessed gender dysphoria and sexual orientation.\n\n\nRESULTS\nAt follow-up, 30% of the 77 participants (19 boys and 4 girls) did not respond to our recruiting letter or were not traceable; 27% (12 boys and 9 girls) were still gender dysphoric (persistence group), and 43% (desistance group: 28 boys and 5 girls) were no longer gender dysphoric. Both boys and girls in the persistence group were more extremely cross-gendered in behavior and feelings and were more likely to fulfill gender identity disorder (GID) criteria in childhood than the children in the other two groups. At follow-up, nearly all male and female participants in the persistence group reported having a homosexual or bisexual sexual orientation. In the desistance group, all of the girls and half of the boys reported having a heterosexual orientation. The other half of the boys in the desistance group had a homosexual or bisexual sexual orientation.\n\n\nCONCLUSIONS\nMost children with gender dysphoria will not remain gender dysphoric after puberty. Children with persistent GID are characterized by more extreme gender dysphoria in childhood than children with desisting gender dysphoria. With regard to sexual orientation, the most likely outcome of childhood GID is homosexuality or bisexuality.", "title": "" }, { "docid": "dc23ec643882393b69adca86c944bef4", "text": "This memo describes a snapshot of the reasoning behind a proposed new namespace, the Host Identity namespace, and a new protocol layer, the Host Identity Protocol (HIP), between the internetworking and transport layers. Herein are presented the basics of the current namespaces, their strengths and weaknesses, and how a new namespace will add completeness to them. The roles of this new namespace in the protocols are defined. The memo describes the thinking of the authors as of Fall 2003. The architecture may have evolved since. This document represents one stable point in that evolution of understanding.", "title": "" }, { "docid": "8ea2dadd6024e2f1b757818e0c5d76fa", "text": "BACKGROUND\nLysergic acid diethylamide (LSD) is a potent serotonergic hallucinogen or psychedelic that modulates consciousness in a marked and novel way. This study sought to examine the acute and mid-term psychological effects of LSD in a controlled study.\n\n\nMETHOD\nA total of 20 healthy volunteers participated in this within-subjects study. Participants received LSD (75 µg, intravenously) on one occasion and placebo (saline, intravenously) on another, in a balanced order, with at least 2 weeks separating sessions. Acute subjective effects were measured using the Altered States of Consciousness questionnaire and the Psychotomimetic States Inventory (PSI). 
A measure of optimism (the Revised Life Orientation Test), the Revised NEO Personality Inventory, and the Peters' Delusions Inventory were issued at baseline and 2 weeks after each session.\n\n\nRESULTS\nLSD produced robust psychological effects, including heightened mood but also high scores on the PSI, an index of psychosis-like symptoms. Increased optimism and trait openness were observed 2 weeks after LSD (and not placebo) and there were no changes in delusional thinking.\n\n\nCONCLUSIONS\nThe present findings reinforce the view that psychedelics elicit psychosis-like symptoms acutely yet improve psychological wellbeing in the mid to long term. It is proposed that acute alterations in mood are secondary to a more fundamental modulation in the quality of cognition, and that increased cognitive flexibility subsequent to serotonin 2A receptor (5-HT2AR) stimulation promotes emotional lability during intoxication and leaves a residue of 'loosened cognition' in the mid to long term that is conducive to improved psychological wellbeing.", "title": "" }, { "docid": "05b362c5dd31decd8d0d33ba45a36783", "text": "Behavioral interventions preceded by a functional analysis have been proven efficacious in treating severe problem behavior associated with autism. There is, however, a lack of research showing socially validated outcomes when assessment and treatment procedures are conducted by ecologically relevant individuals in typical settings. In this study, interview-informed functional analyses and skill-based treatments (Hanley et al. in J Appl Behav Anal 47:16-36, 2014) were applied by a teacher and home-based provider in the classroom and home of two children with autism. The function-based treatments resulted in socially validated reductions in severe problem behavior (self-injury, aggression, property destruction). Furthermore, skills lacking in baseline (functional communication, denial and delay tolerance, and compliance with adult instructions) occurred with regularity following intervention. The generality and costs of the process are discussed.", "title": "" }, { "docid": "39cf15285321c7d56904c8c59b3e1373", "text": "J. Naidoo, D. B. Page, B. T. Li, L. C. Connell, K. Schindler, M. E. Lacouture, M. A. Postow & J. D. Wolchok; Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore; Providence Portland Medical Center and Earl A. Chiles Research Institute, Portland; Department of Medicine and Ludwig Center, Memorial Sloan Kettering Cancer Center, New York, USA; Department of Dermatology, Medical University of Vienna, Vienna, Austria; Dermatology Service, Memorial Sloan Kettering Cancer Center, New York; Department of Medicine, Weill Cornell Medical College, New York, USA", "title": "" }, { "docid": "711ad6f6641b916f25f08a32d4a78016", "text": "Information technology (IT) such as Electronic Data Interchange (EDI), Radio Frequency Identification Technology (RFID), wireless, the Internet and World Wide Web (WWW), and Information Systems (IS) such as Electronic Commerce (E-Commerce) systems and Enterprise Resource Planning (ERP) systems have had tremendous impact in education, healthcare, manufacturing, transportation, retailing, pure services, and even war. Many organizations turned to IT/IS to help them achieve their goals; however, many failed to achieve the full potential of IT/IS. These failures can be attributed at least in part to a weak link in the planning process. That weak link is the IT/IS justification process.
The decision-making process has only grown more difficult in recent years with the increased complexity of business brought about by the rapid growth of supply chain management, the virtual enterprise and E-business. These are but three of the many changes in the business environment over the past 10–12 years. The complexities of this dynamic new business environment should be taken into account in IT/IS justification. We conducted a review of the current literature on IT/IS justification. The purpose of the literature review was to assemble meaningful information for the development of a framework for IT/IS evaluation that better reflects the new business environment. A suitable classification scheme has been proposed for organizing the literature reviewed. Directions for future research are indicated. © 2005 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "20def85748f9d2f71cd34c4f0ca7f57c", "text": "Recent advances in artificial intelligence (AI) and machine learning, combined with developments in neuromorphic hardware technologies and ubiquitous computing, promote machines to emulate human perceptual and cognitive abilities in a way that will continue the trend of automation for several upcoming decades. Despite the gloomy scenario of automation as a job eliminator, we argue humans and machines can cross-fertilise in a way that forwards a cooperative coexistence. We build our argument on three pillars: (i) the economic mechanism of automation, (ii) the dichotomy of ‘experience’ that separates the first-person perspective of humans from artificial learning algorithms, and (iii) the interdependent relationship between humans and machines. To realise this vision, policy makers have to implement alternative educational approaches that support lifelong training and flexible job transitions.", "title": "" }, { "docid": "f5d8c506c9f25bff429cea1ed4c84089", "text": "Therabot is a robotic therapy support system designed to supplement a therapist and to provide support to patients diagnosed with conditions associated with trauma and adverse events. The system takes on the form factor of a floppy-eared dog which fits in a person's lap and is designed for patients to provide support and encouragement for home therapy exercises and in counseling.", "title": "" }, { "docid": "100c152685655ad6865f740639dd7d57", "text": "Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase.
Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.", "title": "" }, { "docid": "23a329c63f9a778e3ec38c25fa59748a", "text": "Expedia users who prefer the same types of hotels presumably share other commonalities (i.e., non-hotel commonalities) with each other. With this in mind, Kaggle challenged developers to recommend hotels to Expedia users. Armed with a training set containing data about 37 million Expedia users, we set out to do just that. Our machine-learning algorithms ranged from direct applications of material learned in class to multi-part algorithms with novel combinations of recommender system techniques. Kaggle’s benchmark for randomly guessing a user’s hotel cluster is 0.02260, and the mean average precision K = 5 value for naïve recommender systems is 0.05949. Our best combination of machine-learning algorithms achieved a figure just over 0.30. Our results provide insight into performing multi-class classification on data sets that lack linear structure.", "title": "" }, { "docid": "dc810b43c71ab591981454ad20e34b7a", "text": "This paper proposes a real-time variable-Q non-stationary Gabor transform (VQ-NSGT) system for speech pitch shifting. The system allows for time-frequency representations of speech on variable-Q (VQ) with perfect reconstruction and computational efficiency. The proposed VQ-NSGT phase vocoder can be used for pitch shifting by simple frequency translation (transposing partials along the frequency axis) instead of spectral stretching in frequency domain by the Fourier transform. In order to retain natural sounding pitch shifted speech, a hybrid of smoothly varying Q scheme is used to retain the formant structure of the original signal at both low and high frequencies. Moreover, the preservation of transients of speech is improved due to the high time resolution of VQ-NSGT at high frequencies. A sliced VQ-NSGT is used to retain inter-partials phase coherence by the synchronized overlap-add method. Therefore, the proposed system lends itself to real-time processing while retaining the formant structure of the original signal and inter-partial phase coherence. The simulation results showed that the proposed approach is suitable for pitch shifting of both speech and music signals.", "title": "" }, { "docid": "f9c4f413618d94b78b96c8cb188e09c5", "text": "We propose to detect abnormal events via a sparse reconstruction over the normal bases. Given a collection of normal training examples, e.g., an image sequence or a collection of local spatio-temporal patches, we propose the sparse reconstruction cost (SRC) over the normal dictionary to measure the normalness of the testing sample. By introducing the prior weight of each basis during sparse reconstruction, the proposed SRC is more robust compared to other outlier detection criteria. To condense the over-completed normal bases into a compact dictionary, a novel dictionary selection method with group sparsity constraint is designed, which can be solved by standard convex optimization. Observing that the group sparsity also implies a low rank structure, we reformulate the problem using matrix decomposition, which can handle large scale training samples by reducing the memory requirement at each iteration from O(k²) to O(k) where k is the number of samples.
We use column-wise coordinate descent to solve the matrix decomposition formulation, which empirically leads to a similar solution to the group sparsity formulation. By designing different types of spatio-temporal basis, our method can detect both local and global abnormal events. Meanwhile, as it does not rely on object detection and tracking, it can be applied to crowded video scenes. By updating the dictionary incrementally, our method can be easily extended to online event detection. Experiments on three benchmark datasets and the comparison to the state-of-the-art methods validate the advantages of our method.", "title": "" }, { "docid": "7d32ed1dbd25e7845bf43f58f42be34a", "text": "ETHNOPHARMACOLOGICAL RELEVANCE\nSenna occidentalis, Leonotis ocymifolia, Leucas martinicensis, Rumex abyssinicus, and Albizia schimperiana are traditionally used for treatment of various ailments including helminth infection in Ethiopia.\n\n\nMATERIALS AND METHODS\nIn vitro egg hatch assay and larval development tests were conducted to determine the possible anthelmintic effects of crude aqueous and hydro-alcoholic extracts of the leaves of Senna occidentalis, aerial parts of Leonotis ocymifolia, Leucas martinicensis, Rumex abyssinicus, and stem bark of Albizia schimperiana on eggs and larvae of Haemonchus contortus.\n\n\nRESULTS\nBoth aqueous and hydro-alcoholic extracts of Leucas martinicensis, Leonotis ocymifolia and aqueous extract of Senna occidentalis and Albizia schimperiana induced complete inhibition of egg hatching at concentration less than or equal to 1mg/ml. Aqueous and hydro-alcoholic extracts of all tested medicinal plants have shown statistically significant and dose dependent egg hatching inhibition. Based on ED(50), the most potent extracts were aqueous and hydro-alcoholic extracts of Leucas martinicensis (0.09 mg/ml), aqueous extracts of Rumex abyssinicus (0.11 mg/ml) and Albizia schimperiana (0.11 mg/ml). Most of the tested plant extracts have shown remarkable larval development inhibition. Aqueous extracts of Leonotis ocymifolia, Leucas martinicensis, Albizia schimperiana and Senna occidentalis induced 100, 99.85, 99.31, and 96.36% inhibition of larval development, respectively; while hydro-alcoholic extracts of Albizia schimperiana induced 99.09% inhibition at the highest concentration tested (50mg/ml). Poor inhibition was recorded for hydro-alcoholic extracts of Senna occidentalis (9%) and Leonotis ocymifolia (37%) at 50mg/ml.\n\n\nCONCLUSIONS\nThe overall findings of the current study indicated that the evaluated medicinal plants have potential anthelmintic effects and further in vitro and in vivo evaluation is indispensable to make use of these plants.", "title": "" }, { "docid": "f97093a848329227f363a8a073a6334a", "text": "With the increase in mobile application systems and the high competition between companies, the number of mobile application projects has grown. Mobile software development is a group of processes for creating software for mobile devices with limited resources such as small screens and low power.
The development of mobile applications is a big challenge because of rapidly changing business requirements and technical constraints for mobile systems. Developers therefore face the challenge of a dynamic environment and changing mobile application requirements. Moreover, mobile applications should adopt appropriate software development methods that respond efficiently to these challenges. However, at the moment, there is limited knowledge about the suitability of different software practices for the development of mobile applications. According to many researchers, agile methodologies were found to be most suitable for mobile development projects as they are short, require flexibility, and reduce waste and time to market. Finally, in this research we look for a suitable process model that conforms to the requirements of mobile applications; we investigate agile development methods to find a way of making mobile application development easy and compatible with mobile device features.", "title": "" }, { "docid": "bfde0c836406a25a08b7c95b330aaafa", "text": "The concept of agile process models has gained great popularity in the software (SW) development community in the past few years. Agile models promote fast development. This property has certain drawbacks, such as poor documentation and bad quality. Fast development promotes use of agile process models in small-scale projects. This paper modifies and evaluates the extreme programming (XP) process model and proposes a novel adaptive process model based on these modifications. © 2007 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a8e665f8b7ea7473e5f7095d12db00ce", "text": "Although there has been considerable progress in reducing cancer incidence in the United States, the number of cancer survivors continues to increase due to the aging and growth of the population and improvements in survival rates. As a result, it is increasingly important to understand the unique medical and psychosocial needs of survivors and be aware of resources that can assist patients, caregivers, and health care providers in navigating the various phases of cancer survivorship. To highlight the challenges and opportunities to serve these survivors, the American Cancer Society and the National Cancer Institute estimated the prevalence of cancer survivors on January 1, 2012 and January 1, 2022, by cancer site. Data from Surveillance, Epidemiology, and End Results (SEER) registries were used to describe median age and stage at diagnosis and survival; data from the National Cancer Data Base and the SEER-Medicare Database were used to describe patterns of cancer treatment. An estimated 13.7 million Americans with a history of cancer were alive on January 1, 2012, and by January 1, 2022, that number will increase to nearly 18 million. The 3 most prevalent cancers among males are prostate (43%), colorectal (9%), and melanoma of the skin (7%), and those among females are breast (41%), uterine corpus (8%), and colorectal (8%).
This article summarizes common cancer treatments, survival rates, and posttreatment concerns and introduces the new National Cancer Survivorship Resource Center, which has engaged more than 100 volunteer survivorship experts nationwide to develop tools for cancer survivors, caregivers, health care professionals, advocates, and policy makers.", "title": "" }, { "docid": "582b9c59e07922ae3d5b01309e030bba", "text": "This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform in two and three dimensions. The first digital transformation is based on unequally spaced fast Fourier transforms, while the second is based on the wrapping of specially selected Fourier samples. The two implementations essentially differ by the choice of spatial grid used to translate curvelets at each scale and angle. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. And both implementations are fast in the sense that they run in O(n2 logn) flops for n by n Cartesian arrays; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity. Our digital transformations improve upon earlier implementations—based upon the first generation of curvelets—in the sense that they are conceptually simpler, faster, and far less redundant. The software CurveLab, which implements both transforms presented in this paper, is available at http://www.curvelet.org.", "title": "" }, { "docid": "00f8c6d7fd58f06fc2672443de9773b7", "text": "The utility industry has invested widely in smart grid (SG) over the past decade. They considered it the future electrical grid while the information and electricity are delivered in two-way flow. SG has many Artificial Intelligence (AI) applications such as Artificial Neural Network (ANN), Machine Learning (ML) and Deep Learning (DL). Recently, DL has been a hot topic for AI applications in many fields such as time series load forecasting. This paper introduces the common algorithms of DL in the literature applied to load forecasting problems in the SG and power systems. The intention of this survey is to explore the different applications of DL that are used in the power systems and smart grid load forecasting. In addition, it compares the accuracy results RMSE and MAE for the reviewed applications and shows the use of convolutional neural network CNN with k-means algorithm had a great percentage of reduction in terms of RMSE.", "title": "" }, { "docid": "81537ba56a8f0b3beb29a03ed3c74425", "text": "About ten years ago, soon after the Web’s birth, Web “search engines” were first by word of mouth. Soon, however, automated search engines became a world wide phenomenon, especially AltaVista at the beginning. I was pleasantly surprised by the amount and diversity of information made accessible by the Web search engines even in the mid 1990’s. The growth of the available Web pages is beyond most, if not all, people’s imagination. The search engines enabled people to find information, facts, and references among these Web pages.", "title": "" }, { "docid": "04abe3f22084ab74ed3db8cbda680f62", "text": "Standard targets are typically used for structural (white-box) evaluation of fingerprint readers, e.g., for calibrating imaging components of a reader. 
However, there is no standard method for behavioral (black-box) evaluation of fingerprint readers in operational settings where variations in finger placement by the user are encountered. The goal of this research is to design and fabricate 3D targets for repeatable behavioral evaluation of fingerprint readers. 2D calibration patterns with known characteristics (e.g., sinusoidal gratings of pre-specified orientation and frequency, and fingerprints with known singular points and minutiae) are projected onto a generic 3D finger surface to create electronic 3D targets. A state-of-the-art 3D printer (Stratasys Objet350 Connex) is used to fabricate wearable 3D targets with materials similar in hardness and elasticity to the human finger skin. The 3D printed targets are cleaned using 2M NaOH solution to obtain evaluation-ready 3D targets. Our experimental results show that: 1) features present in the 2D calibration pattern are preserved during the creation of the electronic 3D target; 2) features engraved on the electronic 3D target are preserved during the physical 3D target fabrication; and 3) intra-class variability between multiple impressions of the physical 3D target is small. We also demonstrate that the generated 3D targets are suitable for behavioral evaluation of three different (500/1000 ppi) PIV/Appendix F certified optical fingerprint readers in the operational settings.", "title": "" } ]
scidocsrr
3f85ab24763b17b0e940da68b34bb844
Computational personality traits assessment: A review
[ { "docid": "1378ab6b9a77dba00beb63c27b1addf6", "text": "Whenever we listen to or meet a new person we try to predict personality attributes of the person. Our behavior towards the person is hugely influenced by the predictions we make. Personality is made up of the characteristic patterns of thoughts, feelings and behaviors that make a person unique. Your personality affects your success in the role. Recognizing about yourself and reflecting on your personality can help you to understand how you might shape your future. Various approaches like personality prediction through speech, facial expression, video, and text are proposed in literature to recognize personality. Personality predictions can be made out of one’s handwriting as well. The objective of this paper is to discuss methodology used to identify personality through handwriting analysis and present current state-of-art related to it.", "title": "" }, { "docid": "c0d794e7275e7410998115303bf0cf79", "text": "We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation and we show models with 4 layers, trained on images from the Caltech-101 and 256 datasets. When combined with a standard classifier, features extracted from these models outperform SIFT, as well as representations from other feature learning methods.", "title": "" } ]
[ { "docid": "7ebf04cde2f938787dac4718e768efe1", "text": "With the proliferation of mobile demands and increasingly multifarious services and applications, mobile Internet has been an irreversible trend. Unfortunately, the current mobile and wireless network (MWN) faces a series of pressing challenges caused by the inherent design. In this paper, we extend two latest and promising innovations of Internet, software-defined networking and network virtualization, to mobile and wireless scenarios. We first describe the challenges and expectations of MWN, and analyze the opportunities provided by the software-defined wireless network (SDWN) and wireless network virtualization (WNV). Then, this paper focuses on SDWN and WNV by presenting the main ideas, advantages, ongoing researches and key technologies, and open issues respectively. Moreover, we interpret that these two technologies highly complement each other, and further investigate efficient joint design between them. This paper confirms that SDWN and WNV may efficiently address the crucial challenges of This work is supported by National Basic Research Program of China (973 Program Grant No. 2013CB329105), National Natural Science Foundation of China (Grants No. 61301080 and No. 61171065), Chinese National Major Scientific and Technological Specialized Project (No. 2013ZX03002001), Chinas Next Generation Internet (No. CNGI-12-03-007), and ZTE Corporation. M. Yang School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, P. R. China E-mail: [email protected] Y. Li · D. Jin · L. Zeng Department of Electronic Engineering, Tsinghua University, Beijing 100084, P. R. China Y. Li E-mail: [email protected] D. Jin, L. Zeng E-mail: {jindp, zenglg}@mail.tsinghua.edu.cn Xin Wu Big Switch, USA E-mail: [email protected] A. V. Vasilakos Department of Computer and Telecommunications Engineering,University of Western Macedonia, Greece Electrical and Computer Engineering, National Technical University of Athens (NTUA), Greece E-mail: [email protected] MWN and significantly benefit the future mobile and wireless network.", "title": "" }, { "docid": "e708fc43b5ac8abf8cc2707195e8a45e", "text": "We develop analytical models for predicting the magnetic field distribution in Halbach magnetized machines. They are formulated in polar coordinates and account for the relative recoil permeability of the magnets. They are applicable to both internal and external rotor permanent-magnet machines with either an iron-cored or air-cored stator and/or rotor. We compare predicted results with those obtained by finite-element analyses and measurements. We show that the air-gap flux density varies significantly with the pole number and that an optimal combination of the magnet thickness and the pole number exists for maximum air-gap flux density, while the back iron can enhance the air-gap field and electromagnetic torque when the radial thickness of the magnet is small.", "title": "" }, { "docid": "ac1f2a1a96ab424d9b69276efd4f1ed4", "text": "This paper describes various systems from the University of Minnesota, Duluth that participated in the CLPsych 2015 shared task. These systems learned decision lists based on lexical features found in training data. 
These systems typically had average precision in the range of .70 – .76, whereas a random baseline attained .47 – .49.", "title": "" }, { "docid": "19e09b1c0eb3646e5ae6484524f82e10", "text": "Results from 12 switchback field trials involving 1216 cows were combined to assess the effects of a protected B vitamin blend (BVB) upon milk yield (kg), fat percentage (%), protein %, fat yield (kg) and protein yield (kg) in primiparous and multiparous cows. Trials consisted of 3 test periods executed in the order control-test-control. No diet changes other than the inclusion of 3 grams/cow/ day of the BVB during the test period occurred. Means from the two control periods were compared to results obtained during the test period using a paired T test. Cows include in the analysis were between 45 and 300 days in milk (DIM) at the start of the experiment and were continuously available for all periods. The provision of the BVB resulted in increased (P < 0.05) milk, fat %, protein %, fat yield and protein yield. Regression models showed that the amount of milk produced had no effect upon the magnitude of the increase in milk components. The increase in milk was greatest in early lactation and declined with DIM. Protein and fat % increased with DIM in mature cows, but not in first lactation cows. Differences in fat yields between test and control feeding periods did not change with DIM, but the improvement in protein yield in mature cows declined with DIM. These results indicate that the BVB provided economically important advantages throughout lactation, but expected results would vary with cow age and stage of lactation.", "title": "" }, { "docid": "66c218bddb0bce210f8e0efa7bb457a7", "text": "The aim of Active and Assisted Living is to develop tools to promote the ageing in place of elderly people, and human activity recognition algorithms can help to monitor aged people in home environments. Different types of sensors can be used to address this task and the RGBD sensors, especially the ones used for gaming, are cost-effective and provide much information about the environment. This work aims to propose an activity recognition algorithm exploiting skeleton data extracted by RGBD sensors. The system is based on the extraction of key poses to compose a feature vector, and a multiclass Support Vector Machine to perform classification. Computation and association of key poses are carried out using a clustering algorithm, without the need of a learning algorithm. The proposed approach is evaluated on five publicly available datasets for activity recognition, showing promising results especially when applied for the recognition of AAL related actions. Finally, the current applicability of this solution in AAL scenarios and the future improvements needed are discussed.", "title": "" }, { "docid": "7a055093ac92c7d2fa7aa8dcbe47a8b8", "text": "In this paper, we present the design process of a smart bracelet that aims at enhancing the life of elderly people. The bracelet acts as a personal assistant during the user's everyday life, monitoring the health status and alerting him or her about abnormal conditions, reminding medications and facilitating the everyday life in many outdoor and indoor activities.", "title": "" }, { "docid": "c7a32821699ebafadb4c59e99fb3aa9e", "text": "According to the trend towards high-resolution CMOS image sensors, pixel sizes are continuously shrinking, towards and below 1.0μm, and sizes are now reaching a technological limit to meet required SNR performance [1-2]. 
SNR at low-light conditions, which is a key performance metric, is determined by the sensitivity and crosstalk in pixels. To improve sensitivity, pixel technology has migrated from frontside illumination (FSI) to backside illumination (BSI) as pixel size shrinks down. In BSI technology, it is very difficult to further increase the sensitivity in a pixel of near-1.0μm size because there are no structural obstacles for incident light from micro-lens to photodiode. Therefore the only way to improve low-light SNR is to reduce crosstalk, which makes the non-diagonal elements of the color-correction matrix (CCM) close to zero and thus reduces color noise [3]. The best way to reduce crosstalk is to introduce a complete physical isolation between neighboring pixels, e.g., using deep-trench isolation (DTI). So far, a few attempts using DTI have been made to suppress silicon crosstalk. A backside DTI in a pixel as small as 1.12μm, which is formed in the BSI process, is reported in [4], but it is just an intermediate step in the DTI-related technology because it cannot completely prevent silicon crosstalk, especially for long wavelengths of light. On the other hand, front-side DTIs for FSI pixels [5] and BSI pixels [6] are reported. In [5], however, DTI is present not only along the periphery of each pixel, but also invades into the pixel so that it is inefficient in terms of gathering incident light and providing a sufficient amount of photodiode area. In [6], the pixel size is as large as 2.0μm and it is hard to scale down with this technology for near 1.0μm pitch because the DTI width imposes a critical limit on the sufficient amount of photodiode area for full-well capacity. Thus, a new technological advance is necessary to realize the ideal front DTI in a small-size pixel near 1.0μm.", "title": "" }, { "docid": "60094e041c1be864ba8a636308b7ee12", "text": "This paper presents two chatbot systems, ALICE and Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the Dialogue Diversity Corpus to retrain a chatbot system with human dialogue examples. A Java program to convert from dialog transcript to AIML format provides a basic implementation of corpus-based chatbot training. We conclude that dialogue researchers should adopt clearer standards for transcription and markup format in dialogue corpora to be used in training a chatbot system more effectively.", "title": "" }, { "docid": "5591d4842507a097e353c67c7d56262d", "text": "Reasoning about entities and their relationships from multimodal data is a key goal of Artificial General Intelligence. The visual question answering (VQA) problem is an excellent way to test such reasoning capabilities of an AI model and its multimodal representation learning. However, the current VQA models are oversimplified deep neural networks, comprised of a long short-term memory (LSTM) unit for question comprehension and a convolutional neural network (CNN) for learning single image representation. We argue that the single visual representation contains limited and general information about the image contents and thus limits the model reasoning capabilities. In this work we introduce a modular neural network model that learns a multimodal and multifaceted representation of the image and the question.
The proposed model learns to use the multimodal representation to reason about the image entities and achieves a new state-of-the-art performance on both VQA benchmark datasets, VQA v1.0 and v2.0, by a wide margin.", "title": "" }, { "docid": "ce5fc5fbb3cb0fb6e65ca530bfc097b1", "text": "The Bulgarian electricity market rules require the transmission system operator to procure electricity to cover transmission grid losses on an hourly basis before day-ahead gate closure. This paper presents a software solution for day-ahead forecasting of hourly transmission losses that is based on a statistical approach to the correlations of the impacting factors and uses numerical weather predictions as inputs.", "title": "" }, { "docid": "8e2006ca72dbc6be6592e21418b7f3ba", "text": "In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics in which 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering with traditional 3D graphics can be united in a joint image and geometry space.", "title": "" }, { "docid": "0bc0e621c58a79a7455f0849ccf41a02", "text": "With the adoption of power electronic converters in shipboard power systems and associated novel fault management concepts, the ability to isolate electric faults quickly from the power system is becoming more important than breaking high magnitude fault currents and the corresponding arcing between opening contacts within a switch. This allows for the design of substantially faster, as well as potentially lighter and more compact, mechanical disconnect switches. Herein, we are proposing a new class of mechanical disconnect switches that utilize piezoelectric actuators to isolate within less than one millisecond. This technology may become a key enabler for future all-electric ships.", "title": "" }, { "docid": "14fb71b01f86008f0772eabd52ea747a", "text": "This paper introduces a positioning system for walking persons, called \"Personal Dead-reckoning\" (PDR) system. The PDR system does not require GPS, beacons, or landmarks. The system is therefore useful in GPS-denied environments, such as inside buildings, tunnels, or dense forests. Potential users of the system are military and security personnel as well as emergency responders. The PDR system uses a 6-DOF inertial measurement unit (IMU) attached to the user's boot. The IMU provides rate-of-rotation and acceleration measurements that are used in real-time to estimate the location of the user relative to a known starting point. In order to reduce the most significant errors of this IMU-based system, caused by the bias drift of the accelerometers, we implemented a technique known as \"Zero Velocity Update\" (ZUPT). With the ZUPT technique and related signal processing algorithms, typical errors of our system are about 2% of distance traveled for short walks. This typical PDR system error is largely independent of the gait or speed of the user.
When walking continuously for several minutes, the error increases gradually beyond 2%. The PDR system works in both 2-dimensional (2-D) and 3-D environments, although errors in Z-direction are usually larger than 2% of distance traveled. Earlier versions of our system used an unpractically large IMU. In the most recent version we implemented a much smaller IMU. This paper discussed specific problems of this small IMU, our measures for eliminating these problems, and our first experimental results with the small IMU under different conditions.", "title": "" }, { "docid": "d041a5fc5f788b1abd8abf35a26cb5d2", "text": "In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks. Although most of these models have claimed state-of-the-art performance, the original papers often reported on only one or two selected datasets. We provide a systematic study and show that (i) encoding contextual information by LSTM and inter-sentence interactions are critical, (ii) Tree-LSTM does not help as much as previously claimed but surprisingly improves performance on Twitter datasets, (iii) the Enhanced Sequential Inference Model (Chen et al., 2017) is the best so far for larger datasets, while the Pairwise Word Interaction Model (He and Lin, 2016) achieves the best performance when less data is available. We release our implementations as an open-source toolkit.", "title": "" }, { "docid": "602077b20a691854102946757da4b287", "text": "For three-dimensional (3D) ultrasound imaging, connecting elements of a two-dimensional (2D) transducer array to the imaging system's front-end electronics is a challenge because of the large number of array elements and the small element size. To compactly connect the transducer array with electronics, we flip-chip bond a 2D 16 times 16-element capacitive micromachined ultrasonic transducer (CMUT) array to a custom-designed integrated circuit (IC). Through-wafer interconnects are used to connect the CMUT elements on the top side of the array with flip-chip bond pads on the back side. The IC provides a 25-V pulser and a transimpedance preamplifier to each element of the array. For each of three characterized devices, the element yield is excellent (99 to 100% of the elements are functional). Center frequencies range from 2.6 MHz to 5.1 MHz. For pulse-echo operation, the average -6-dB fractional bandwidth is as high as 125%. Transmit pressures normalized to the face of the transducer are as high as 339 kPa and input-referred receiver noise is typically 1.2 to 2.1 rnPa/ radicHz. The flip-chip bonded devices were used to acquire 3D synthetic aperture images of a wire-target phantom. Combining the transducer array and IC, as shown in this paper, allows for better utilization of large arrays, improves receive sensitivity, and may lead to new imaging techniques that depend on transducer arrays that are closely coupled to IC electronics.", "title": "" }, { "docid": "427c5f5825ca06350986a311957c6322", "text": "Machine learning based system are increasingly being used for sensitive tasks such as security surveillance, guiding autonomous vehicle, taking investment decisions, detecting and blocking network intrusion and malware etc. 
However, recent research has shown that machine learning models are vulnerable to attacks by adversaries at all phases of machine learning (e.g., training data collection, training, operation). All model classes of machine learning systems can be misled by carefully crafted inputs that make them classify inputs wrongly. Maliciously created input samples can affect the learning process of an ML system by slowing the learning process, degrading the performance of the learned model, or causing the system to make errors only in the attacker’s planned scenario. Because of these developments, understanding the security of machine learning algorithms and systems is emerging as an important research area among computer security and machine learning researchers and practitioners. We present a survey of this emerging area.", "title": "" }, { "docid": "b7ca3a123963bb2f0bfbe586b3bc63d0", "text": "Objective In symptom-dependent diseases such as functional dyspepsia (FD), matching the pattern of epigastric symptoms, including severity, kind, and perception site, between patients and physicians is critical. Additionally, a comprehensive examination of the stomach, duodenum, and pancreas is important for evaluating the origin of such symptoms. Methods FD-specific symptoms (epigastric pain, epigastric burning, early satiety, and postprandial fullness) and other symptoms (regurgitation, nausea, belching, and abdominal bloating) as well as the perception site of the above symptoms were investigated in healthy subjects using a new questionnaire with an illustration of the human body. A total of 114 patients with treatment-resistant dyspeptic symptoms were evaluated for their pancreatic exocrine function using N-benzoyl-L-tyrosyl-p-aminobenzoic acid. Results A total of 323 subjects (men:women, 216:107; mean age, 52.1 years old) were initially enrolled. Most of the subjects felt the FD-specific symptoms at the epigastrium, while about 20% felt them at other abdominal sites. About 30% of the symptoms expressed as epigastric symptoms were FD-nonspecific symptoms. At the epigastrium, epigastric pain and epigastric burning were mainly felt at the upper part, and postprandial fullness and early satiety were felt at the lower part. The prevalence of patients with pancreatic exocrine dysfunction was 71% in the postprandial fullness group, 68% in the epigastric pain group, and 82% in the diarrhea group. Conclusion We observed a mismatch in the perception site and expression between the epigastric symptoms of healthy subjects and FD-specific symptoms. Postprandial symptoms were often felt at the lower part of the epigastrium, and pancreatic exocrine dysfunction may be involved in the FD symptoms, especially for treatment-resistant dyspepsia patients.", "title": "" }, { "docid": "6ab5678d7f4bcb0d686ca3f384381134", "text": "We present a TTS neural network that is able to produce speech in multiple languages. The proposed network is able to transfer a voice, which was presented as a sample in a source language, into one of several target languages. Training is done without using matching or parallel data, i.e., without samples of the same speaker in multiple languages, making the method much more applicable. The conversion is based on learning a polyglot network that has multiple per-language sub-networks and adding loss terms that preserve the speaker’s identity in multiple languages.
We evaluate the proposed polyglot neural network for three languages with a total of more than 400 speakers and demonstrate convincing conversion capabilities.", "title": "" }, { "docid": "e2f2961ab8c527914c3d23f8aa03e4bf", "text": "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into a multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40% miss rate). Using new and accurate annotations, an MCF achieves 7.98% miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster than negligible performance loss.", "title": "" }, { "docid": "796625110c6e97f4ff834cfe04c784fe", "text": "This paper addresses the large-scale visual font recognition (VFR) problem, which aims at automatic identification of the typeface, weight, and slope of the text in an image or photo without any knowledge of content. Although visual font recognition has many practical applications, it has largely been neglected by the vision community. To address the VFR problem, we construct a large-scale dataset containing 2,420 font classes, which easily exceeds the scale of most image categorization datasets in computer vision. As font recognition is inherently dynamic and open-ended, i.e., new classes and data for existing categories are constantly added to the database over time, we propose a scalable solution based on the nearest class mean classifier (NCM). The core algorithm is built on local feature embedding, local feature metric learning and max-margin template selection, which is naturally amenable to NCM and thus to such open-ended classification problems. The new algorithm can generalize to new classes and new data at little added cost. Extensive experiments demonstrate that our approach is very effective on our synthetic test images, and achieves promising results on real world test images.", "title": "" } ]
scidocsrr
51f3961336efb81b85462a9fd239944b
A model for improved association of radar and camera objects in an indoor environment
[ { "docid": "8e18fa3850177d016a85249555621723", "text": "Obstacle fusion algorithms usually perform obstacle association and gating in order to improve the obstacle position if it was detected by multiple sensors. However, this strategy is not common in multi sensor occupancy grid fusion. Thus, the quality of the fused grid, in terms of obstacle position accuracy, largely depends on the sensor with the lowest accuracy. In this paper an efficient method to associate obstacles across sensor grids is proposed. Imprecise sensors are discounted locally in cells where a more accurate sensor, that detected the same obstacle, derived free space. Furthermore, fixed discount factors to optimize false negative and false positive rates are used. Because of its generic formulation with the covariance of each sensor grid, the method is scalable to any sensor setup. The quantitative evaluation with a highly precise navigation map shows an increased obstacle position accuracy compared to standard evidential occupancy grid fusion.", "title": "" } ]
[ { "docid": "00eeceba7118e7a8a2f68deadc612f14", "text": "I n the growing fields of wearable robotics, rehabilitation robotics, prosthetics, and walking robots, variable stiffness actuators (VSAs) or adjustable compliant actuators are being designed and implemented because of their ability to minimize large forces due to shocks, to safely interact with the user, and their ability to store and release energy in passive elastic elements. This review article describes the state of the art in the design of actuators with adaptable passive compliance. This new type of actuator is not preferred for classical position-controlled applications such as pick and place operations but is preferred in novel robots where safe human– robot interaction is required or in applications where energy efficiency must be increased by adapting the actuator’s resonance frequency. The working principles of the different existing designs are explained and compared. The designs are divided into four groups: equilibrium-controlled stiffness, antagonistic-controlled stiffness, structure-controlled stiffness (SCS), and mechanically controlled stiffness. In classical robotic applications, actuators are preferred to be as stiff as possible to make precise position movements or trajectory tracking control easier (faster systems with high bandwidth). The biological counterpart is the muscle that has superior functional performance and a neuromechanical control system that is much more advanced at adapting and tuning its parameters. The superior power-to-weight ratio, force-toweight ratio, compliance, and control of muscle, when compared with traditional robotic actuators, are the main barriers for the development of machines that can match the motion, safety, and energy efficiency of human or other animals. One of the key differences of these systems is the compliance or springlike behavior found in biological systems [1]. Although such compliant", "title": "" }, { "docid": "b910de28ecbfa82713b30f5918eaae80", "text": "Raman microscopy is a non-destructive technique requiring minimal sample preparation that can be used to measure the chemical properties of the mineral and collagen parts of bone simultaneously. Modern Raman instruments contain the necessary components and software to acquire the standard information required in most bone studies. The spatial resolution of the technique is about a micron. As it is non-destructive and small samples can be used, it forms a useful part of a bone characterisation toolbox.", "title": "" }, { "docid": "a84ee8a0f06e07abd53605bf5b542519", "text": "Abeta peptide accumulation is thought to be the primary event in the pathogenesis of Alzheimer's disease (AD), with downstream neurotoxic effects including the hyperphosphorylation of tau protein. Glycogen synthase kinase-3 (GSK-3) is increasingly implicated as playing a pivotal role in this amyloid cascade. We have developed an adult-onset Drosophila model of AD, using an inducible gene expression system to express Arctic mutant Abeta42 specifically in adult neurons, to avoid developmental effects. Abeta42 accumulated with age in these flies and they displayed increased mortality together with progressive neuronal dysfunction, but in the apparent absence of neuronal loss. This fly model can thus be used to examine the role of events during adulthood and early AD aetiology. 
Expression of Abeta42 in adult neurons increased GSK-3 activity, and inhibition of GSK-3 (either genetically or pharmacologically by lithium treatment) rescued Abeta42 toxicity. Abeta42 pathogenesis was also reduced by removal of endogenous fly tau; but, within the limits of detection of available methods, tau phosphorylation did not appear to be altered in flies expressing Abeta42. The GSK-3-mediated effects on Abeta42 toxicity appear to be at least in part mediated by tau-independent mechanisms, because the protective effect of lithium alone was greater than that of the removal of tau alone. Finally, Abeta42 levels were reduced upon GSK-3 inhibition, pointing to a direct role of GSK-3 in the regulation of Abeta42 peptide level, in the absence of APP processing. Our study points to the need both to identify the mechanisms by which GSK-3 modulates Abeta42 levels in the fly and to determine if similar mechanisms are present in mammals, and it supports the potential therapeutic use of GSK-3 inhibitors in AD.", "title": "" }, { "docid": "ceb270c07d26caec5bc20e7117690f9f", "text": "Pesticides including insecticides and miticides are primarily used to regulate arthropod (insect and mite) pest populations in agricultural and horticultural crop production systems. However, continual reliance on pesticides may eventually result in a number of potential ecological problems including resistance, secondary pest outbreaks, and/or target pest resurgence [1,2]. Therefore, implementation of alternative management strategies is justified in order to preserve existing pesticides and produce crops with minimal damage from arthropod pests. One option that has gained interest by producers is integrating pesticides with biological control agents or natural enemies including parasitoids and predators [3]. This is often referred to as ‘compatibility,’ which is the ability to integrate or combine natural enemies with pesticides so as to regulate arthropod pest populations without directly or indirectly affecting the life history parameters or population dynamics of natural enemies [2,4]. This may also refer to pesticides being effective against targeted arthropod pests but relatively non-harmful to natural enemies [5,6].", "title": "" }, { "docid": "16f75bcd060ae7a7b6f7c9c8412ca479", "text": "Deep neural networks (DNNs) are powerful machine learning models and have succeeded in various artificial intelligence tasks. Although various architectures and modules for the DNNs have been proposed, selecting and designing the appropriate network structure for a target problem is a challenging task. In this paper, we propose a method to simultaneously optimize the network structure and weight parameters during neural network training. We consider a probability distribution that generates network structures, and optimize the parameters of the distribution instead of directly optimizing the network structure. The proposed method can apply to the various network structure optimization problems under the same framework. We apply the proposed method to several structure optimization problems such as selection of layers, selection of unit types, and selection of connections using the MNIST, CIFAR-10, and CIFAR-100 datasets. The experimental results show that the proposed method can find the appropriate and competitive network structures.", "title": "" }, { "docid": "ac9f71a97f6af0718587ffd0ea92d31d", "text": "Modern cyber-physical systems are complex networked computing systems that electronically control physical systems. 
Autonomous road vehicles are an important and increasingly ubiquitous instance. Unfortunately, their increasing complexity often leads to security vulnerabilities. Network connectivity exposes these vulnerable systems to remote software attacks that can result in real-world physical damage, including vehicle crashes and loss of control authority. We introduce an integrated architecture to provide provable security and safety assurance for cyber-physical systems by ensuring that safety-critical operations and control cannot be unintentionally affected by potentially malicious parts of the system. Fine-grained information flow control is used to design both hardware and software, determining how low-integrity information can affect high-integrity control decisions. This security assurance is used to improve end-to-end security across the entire cyber-physical system. We demonstrate this integrated approach by developing a mobile robotic testbed modeling a self-driving system and testing it with a malicious attack.", "title": "" }, { "docid": "0afd0f70859772054e589a2256efeba4", "text": "Hair is typically modeled and rendered using either explicitly defined hair strand geometry or a volume texture of hair densities. Taken each on their own, these two hair representations have difficulties in the case of animal fur as it consists of very dense and thin undercoat hairs in combination with coarse guard hairs. Explicit hair strand geometry is not well-suited for the undercoat hairs, while volume textures are not well-suited for the guard hairs. To efficiently model and render both guard hairs and undercoat hairs, we present a hybrid technique that combines rasterization of explicitly defined guard hairs with ray marching of a prismatic shell volume with dynamic resolution. The latter is the key to practical combination of the two techniques, and it also enables a high degree of detail in the undercoat. We demonstrate that our hybrid technique creates a more detailed and soft fur appearance as compared with renderings that only use explicitly defined hair strands. Finally, our rasterization approach is based on order-independent transparency and renders high-quality fur images in seconds.", "title": "" }, { "docid": "ab70c8814c0e15695c8142ce8aad69bc", "text": "Domain-oriented dialogue systems are often faced with users that try to cross the limits of their knowledge, out of unawareness of the system's domain limitations or simply to test its capacity. These interactions are considered to be Out-Of-Domain, and several strategies can be found in the literature to deal with some specific situations. Since an input that appears once has a non-zero probability of being entered again later, the idea of taking advantage of real human interactions to feed these dialogue systems emerges naturally.
In this paper, we introduce the SubTle Corpus, a corpus of Interaction-Response pairs extracted from subtitles files, created to help dialogue systems to deal with Out-of-Domain interactions.", "title": "" }, { "docid": "d75ebc4041927b525d8f4937c760518e", "text": "Most current term frequency normalization approaches for information retrieval involve the use of parameters. The tuning of these parameters has an important impact on the overall performance of the information retrieval system. Indeed, a small variation in the involved parameter(s) could lead to an important variation in the precision/recall values. Most current tuning approaches are dependent on the document collections. As a consequence, the effective parameter value cannot be obtained for a given new collection without extensive training data. In this paper, we propose a novel and robust method for the tuning of term frequency normalization parameter(s), by measuring the normalization effect on the within document frequency of the query terms. As an illustration, we apply our method on Amati \\& Van Rijsbergen's so-called normalization 2. The experiments for the ad-hoc TREC-6,7,8 tasks and TREC-8,9,10 Web tracks show that the new method is independent of the collections and able to provide reliable and good performance.", "title": "" }, { "docid": "ee82b52d5a0bc28a0a8e78e09da09340", "text": "AIMS\nExcessive internet use is becoming a concern, and some have proposed that it may involve addiction. We evaluated the dimensions assessed by, and psychometric properties of, a range of questionnaires purporting to assess internet addiction.\n\n\nMETHODS\nFourteen questionnaires were identified purporting to assess internet addiction among adolescents and adults published between January 1993 and October 2011. Their reported dimensional structure, construct, discriminant and convergent validity and reliability were assessed, as well as the methods used to derive these.\n\n\nRESULTS\nMethods used to evaluate internet addiction questionnaires varied considerably. Three dimensions of addiction predominated: compulsive use (79%), negative outcomes (86%) and salience (71%). Less common were escapism (21%), withdrawal symptoms (36%) and other dimensions. Measures of validity and reliability were found to be within normally acceptable limits.\n\n\nCONCLUSIONS\nThere is a broad convergence of questionnaires purporting to assess internet addiction suggesting that compulsive use, negative outcome and salience should be covered and the questionnaires show adequate psychometric properties. However, the methods used to evaluate the questionnaires vary widely and possible factors contributing to excessive use such as social motivation do not appear to be covered.", "title": "" }, { "docid": "ad8a727d0e3bd11cd972373451b90fe7", "text": "The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in the time required to train a single model. 
We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet.", "title": "" }, { "docid": "b160d69d87ad113286ee432239b090d7", "text": "Isogeometric analysis has been proposed as a methodology for bridging the gap between computer aided design (CAD) and finite element analysis (FEA). Although both the traditional and isogeometric pipelines rely upon the same conceptualization to solid model steps, they drastically differ in how they bring the solid model both to and through the analysis process. The isogeometric analysis process circumvents many of the meshing pitfalls experienced by the traditional pipeline by working directly within the approximation spaces used by the model representation. In this paper, we demonstrate that in a similar way as how mesh quality is used in traditional FEA to help characterize the impact of the mesh on analysis, an analogous concept of model quality exists within isogeometric analysis. The consequence of these observations is the need for a new area within modeling – analysis-aware modeling – in which model properties and parameters are selected to facilitate isogeometric analysis. ! 2009 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "dfbf5c12d8e5a8e5e81de5d51f382185", "text": "Demand response (DR) is very important in the future smart grid, aiming to encourage consumers to reduce their demand during peak load hours. However, if binary decision variables are needed to specify start-up time of a particular appliance, the resulting mixed integer combinatorial problem is in general difficult to solve. In this paper, we study a versatile convex programming (CP) DR optimization framework for the automatic load management of various household appliances in a smart home. In particular, an L1 regularization technique is proposed to deal with schedule-based appliances (SAs), for which their on/off statuses are governed by binary decision variables. By relaxing these variables from integer to continuous values, the problem is reformulated as a new CP problem with an additional L1 regularization term in the objective. This allows us to transform the original mixed integer problem into a standard CP problem. Its major advantage is that the overall DR optimization problem remains to be convex and therefore the solution can be found efficiently. Moreover, a wide variety of appliances with different characteristics can be flexibly incorporated. Simulation result shows that the energy scheduling of SAs and other appliances can be determined simultaneously using the proposed CP formulation.", "title": "" }, { "docid": "750c67fe63611248e8d8798a42ac282c", "text": "Chaos and its drive-response synchronization for a fractional-order cellular neural networks (CNN) are studied. It is found that chaos exists in the fractional-order system with six-cell. The phase synchronisation of drive and response chaotic trajectories is investigated after that. 
These works based on Lyapunov exponents (LE), Lyapunov stability theory and numerical solving fractional-order system in Matlab environment.", "title": "" }, { "docid": "cfaf2c04cd06103489ac60d00a70cd2c", "text": "BACKGROUND\nΔ(9)-Tetrahydrocannabinol (THC), 11-nor-9-carboxy-THC (THCCOOH), and cannabinol (CBN) were measured in breath following controlled cannabis smoking to characterize the time course and windows of detection of breath cannabinoids.\n\n\nMETHODS\nExhaled breath was collected from chronic (≥4 times per week) and occasional (<twice per week) smokers before and after smoking a 6.8% THC cigarette. Sample analysis included methanol extraction from breath pads, solid-phase extraction, and liquid chromatography-tandem mass spectrometry quantification.\n\n\nRESULTS\nTHC was the major cannabinoid in breath; no sample contained THCCOOH and only 1 contained CBN. Among chronic smokers (n = 13), all breath samples were positive for THC at 0.89 h, 76.9% at 1.38 h, and 53.8% at 2.38 h, and only 1 sample was positive at 4.2 h after smoking. Among occasional smokers (n = 11), 90.9% of breath samples were THC-positive at 0.95 h and 63.6% at 1.49 h. One occasional smoker had no detectable THC. Analyte recovery from breath pads by methanolic extraction was 84.2%-97.4%. Limits of quantification were 50 pg/pad for THC and CBN and 100 pg/pad for THCCOOH. Solid-phase extraction efficiency was 46.6%-52.1% (THC) and 76.3%-83.8% (THCCOOH, CBN). Matrix effects were -34.6% to 12.3%. Cannabinoids fortified onto breath pads were stable (≤18.2% concentration change) for 8 h at room temperature and -20°C storage for 6 months.\n\n\nCONCLUSIONS\nBreath may offer an alternative matrix for identifying recent driving under the influence of cannabis, but currently sensitivity is limited to a short detection window (0.5-2 h).", "title": "" }, { "docid": "599c2f4205f3a0978d0567658daf8be6", "text": "With increasing audio/video service consumption through unmanaged IP networks, HTTP adaptive streaming techniques have emerged to handle bandwidth limitations and variations. But while it is becoming common to serve multiple clients in one home network, these solutions do not adequately address fine tuned quality arbitration between the multiple streams. While clients compete for bandwidth, the video suffers unstable conditions and/or inappropriate bit-rate levels.\n We hereby experiment a mechanism based on traffic chapping that allow bandwidth arbitration to be implemented in the home gateway, first determining desirable target bit-rates to be reached by each stream and then constraining the clients to stay within their limits. This enables the delivery of optimal quality of experience to the maximum number of users. This approach is validated through experimentation, and results are shown through a set of objective measurement criteria.", "title": "" }, { "docid": "7f73952f3dfb445fd700d951a013595e", "text": "Although parallel and convergent evolution are discussed extensively in technical articles and textbooks, their meaning can be overlapping, imprecise, and contradictory. The meaning of parallel evolution in much of the evolutionary literature grapples with two separate hypotheses in relation to phenotype and genotype, but often these two hypotheses have been inferred from only one hypothesis, and a number of subsidiary but problematic criteria, in relation to the phenotype. 
However, examples of parallel evolution of genetic traits that underpin or are at least associated with convergent phenotypes are now emerging. Four criteria for distinguishing parallelism from convergence are reviewed. All are found to be incompatible with any single proposition of homoplasy. Therefore, all homoplasy is equivalent to a broad view of convergence. Based on this concept, all phenotypic homoplasy can be described as convergence and all genotypic homoplasy as parallelism, which can be viewed as the equivalent concept of convergence for molecular data. Parallel changes of molecular traits may or may not be associated with convergent phenotypes but if so describe homoplasy at two biological levels-genotype and phenotype. Parallelism is not an alternative to convergence, but rather it entails homoplastic genetics that can be associated with and potentially explain, at the molecular level, how convergent phenotypes evolve.", "title": "" }, { "docid": "d59d1ac7b3833ee1e60f7179a4a9af99", "text": "Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. 
We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges.", "title": "" }, { "docid": "b3d1780cb8187e5993c5adbb7959b7a6", "text": "We present impacto, a device designed to render the haptic sensation of hitting or being hit in virtual reality. The key idea that allows the small and light impacto device to simulate a strong hit is that it decomposes the stimulus: it renders the tactile aspect of being hit by tapping the skin using a solenoid; it adds impact to the hit by thrusting the user's arm backwards using electrical muscle stimulation. The device is self-contained, wireless, and small enough for wearable use, thus leaves the user unencumbered and able to walk around freely in a virtual environment. The device is of generic shape, allowing it to also be worn on legs, so as to enhance the experience of kicking, or merged into props, such as a baseball bat. We demonstrate how to assemble multiple impacto units into a simple haptic suit. Participants of our study rated impact simulated using impacto's combination of solenoid hit and electrical muscle stimulation as more realistic than either technique in isolation.", "title": "" }, { "docid": "c7b7ca49ea887c25b05485e346b5b537", "text": "I n our last article 1 we described the external features which characterize the cranial and facial structures of the cranial strains known as hyperflexion and hyperextension. To understand how these strains develop we have to examine the anatomical relations underlying all cranial patterns. Each strain represent a variation on a theme. By studying the features in common, it is possible to account for the facial and dental consequences of these variations. The key is the spheno-basilar symphysis and the displacements which can take place between the occiput and the sphenoid at that suture. In hyperflexion there is shortening of the cranium in an antero-posterior direction with a subsequent upward buckling of the spheno-basilar symphysis (Figure 1). In children, where the cartilage of the joint has not ossified, a v-shaped wedge can be seen occasionally on the lateral skull radiograph (Figure 2). Figure (3a) is of the cranial base seen from a vertex viewpoint. By leaving out the temporal bones the connection between the centrally placed spheno-basilar symphysis and the peripheral structures of the cranium can be seen more easily. Sutherland realized that the cranium could be divided into quadrants (Figure 3b) centered on the spheno-basilar symphysis and that what happens in each quadrant is directly influenced by the spheno-basilar symphysis. He noted that accompanying the vertical changes at the symphysis there are various lateral displacements. As the peripheral structures move laterally, this is known as external rotation. If they move closer to the midline, this is called internal rotation. It is not unusual to have one side of the face externally rotated and the other side internally rotated (Figure 4a). This can have a significant effect in the mouth, giving rise to asymmetries (Figure 4b). This shows a palatal view of the maxilla with the left posterior dentition externally rotated and the right buccal posterior segment internally rotated, reflecting the internal rotation of the whole right side of the face. This can be seen in hyperflexion but also other strains. With this background, it is now appropriate to examine in detail the cranial strain known as hyperflexion. 
As its name implies, it is brought about by an exaggeration of the flexion/extension movement of the cranium into flexion. Rhythmic movement of the cranium continues despite the displacement into flexion, but it does so more readily into flexion than extension. As the skull is shortened in an antero-posterior plane, it is widened laterally. [Figure captions: Figure 1 — Movement of occiput and sphenoid in hyperflexion; reprinted from Orthopedic Gnathology, Hockel, J., Ed., 1983, with permission from Quintessence Publishing Co. Figure 2 — Lateral skull radiograph of a hyperflexion patient; note the V-shaped wedge at the superior border of the spheno-basilar symphysis. Figures 3a and 3b — Cranial base from a vertex view (temporal bones left out); Sutherland’s quadrants imposed on the cranial base.]", "title": "" } ]
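One passage in the record above introduces Fast Geometric Ensembling (FGE), which trains a high-performing ensemble in roughly the time of a single model. The abstract itself does not spell out the training loop, so the following is only a hedged sketch of the commonly described recipe (cycle the learning rate, snapshot the weights at each cycle's low point, then average the snapshots' predictions); the toy model, the saw-tooth schedule, and the random batches are placeholders rather than the authors' implementation.

```python
import torch
import torch.nn as nn

# Stand-in model and optimizer; a real use would start from an already-trained network.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

def cyclic_lr(step, cycle_len, lr_max=0.1, lr_min=0.005):
    # Simple saw-tooth: the learning rate falls from lr_max to lr_min within each cycle.
    t = (step % cycle_len) / cycle_len
    return lr_max * (1.0 - t) + lr_min * t

snapshots = []
cycle_len = 100
for step in range(500):                       # pretend fine-tuning steps
    x = torch.randn(16, 32)                   # random stand-in batch
    y = torch.randint(0, 10, (16,))
    for g in opt.param_groups:
        g["lr"] = cyclic_lr(step, cycle_len)
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    if (step + 1) % cycle_len == 0:           # learning rate at its minimum: take a snapshot
        snapshots.append({k: v.clone() for k, v in model.state_dict().items()})

def ensemble_predict(x):
    # Average the softmax outputs of the collected snapshots.
    probs = []
    with torch.no_grad():
        for state in snapshots:
            model.load_state_dict(state)
            probs.append(torch.softmax(model(x), dim=-1))
    return torch.stack(probs).mean(dim=0)

print(ensemble_predict(torch.randn(4, 32)).shape)  # torch.Size([4, 10])
```

The cycle length and schedule endpoints are the main knobs here; the paper's point is that the snapshots lie in one connected low-loss region, so even short cycles yield diverse yet accurate ensemble members.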
scidocsrr
0d6b58df08c2956b073151fe580781ed
Low-Rank Modeling and Its Applications in Image Analysis
[ { "docid": "783d7251658f9077e05a7b1b9bd60835", "text": "A method is presented for the representation of (pictures of) faces. Within a specified framework the representation is ideal. This results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector. The method is illustrated in detail by the use of an ensemble of pictures taken for this purpose.", "title": "" } ]
[ { "docid": "ee5729a9ec24fbb951076a43d4945e8e", "text": "Enhancing the performance of emotional speaker recognition process has witnessed an increasing interest in the last years. This paper highlights a methodology for speaker recognition under different emotional states based on the multiclass Support Vector Machine (SVM) classifier. We compare two feature extraction methods which are used to represent emotional speech utterances in order to obtain best accuracies. The first method known as traditional Mel-Frequency Cepstral Coefficients (MFCC) and the second one is MFCC combined with Shifted-Delta-Cepstra (MFCC-SDC). Experimentations are conducted on IEMOCAP database using two multiclass SVM approaches: One-Against-One (OAO) and One Against-All (OAA). Obtained results show that MFCC-SDC features outperform the conventional MFCC. Keywords—Emotion; Speaker recognition; Mel Frequency Cepstral Coefficients (MFCC); Shifted-Delta-Cepstral (SDC); SVM", "title": "" }, { "docid": "f5a8d2d7ea71fa5444cc1594dc0cf5ab", "text": "Radar sensors operating in the 76–81 GHz range are considered key for Advanced Driver Assistance Systems (ADAS) like adaptive cruise control (ACC), collision mitigation and avoidance systems (CMS) or lane change assist (LCA). These applications are the next wave in automotive safety systems and have thus generated increased interest in lower-cost solutions especially for the mm-wave front-end (FE) section. Today, most of the radar sensors in this frequency range use GaAs based FEs. These multi-chip GaAs FEs are a main cost driver in current radar sensors due to their low integration level. The step towards monolithic microwave integrated circuits (MMIC) based on a 200 GHz ft silicon-germanium (SiGe) technology integrating all needed RF building blocks (mixers, VCOs, dividers, buffers, PAs) on an single die does not only lead to cost reductions but also benefits the testability of these MMICs. This is especially important in the light of upcoming functional safety standards like ASIL-D and ISO26262.", "title": "" }, { "docid": "fd1e327327068a1373e35270ef257c59", "text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.", "title": "" }, { "docid": "38a74fff83d3784c892230255943ee23", "text": "Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. 
Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor’s companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.", "title": "" }, { "docid": "f366e1378b86e7fbed2252754502cf59", "text": "Multilabel learning deals with data associated with multiple labels simultaneously. Like other data mining and machine learning tasks, multilabel learning also suffers from the curse of dimensionality. Dimensionality reduction has been studied for many years, however, multilabel dimensionality reduction remains almost untouched. In this article, we propose a multilabel dimensionality reduction method, MDDM, with two kinds of projection strategies, attempting to project the original data into a lower-dimensional feature space maximizing the dependence between the original feature description and the associated class labels. Based on the Hilbert-Schmidt Independence Criterion, we derive a eigen-decomposition problem which enables the dimensionality reduction process to be efficient. Experiments validate the performance of MDDM.", "title": "" }, { "docid": "37936de50a1d3fa8612a465b6644c282", "text": "Nature uses a limited, conservative set of amino acids to synthesize proteins. The ability to genetically encode an expanded set of building blocks with new chemical and physical properties is transforming the study, manipulation and evolution of proteins, and is enabling diverse applications, including approaches to probe, image and control protein function, and to precisely engineer therapeutics. Underpinning this transformation are strategies to engineer and rewire translation. Emerging strategies aim to reprogram the genetic code so that noncanonical biopolymers can be synthesized and evolved, and to test the limits of our ability to engineer the translational machinery and systematically recode genomes.", "title": "" }, { "docid": "714b5db0d1f146c5dde6e4c01de59be9", "text": "Coilgun electromagnetic launchers have capability for low and high speed applications. Through the development of four guns having projectiles ranging from 10 g to 5 kg and speeds up to 1 km/s, Sandia National Laboratories has succeeded in coilgun design and operations, validating the computational codes and basis for gun system control. 
Coilguns developed at Sandia consist of many coils stacked end-to-end forming a barrel, with each coil energized in sequence to create a traveling magnetic wave that accelerates a projectile. Active tracking of the projectile location during launch provides precise feedback to control when the coils arc triggered to create this wave. However, optimum performance depends also on selection of coil parameters. This paper discusses issues related to coilgun design and control such as tradeoffs in geometry and circuit parameters to achieve the necessary current risetime to establish the energy in the coils. The impact of switch jitter on gun performance is also assessed for high-speed applications.", "title": "" }, { "docid": "81672984e2d94d7a06ffe930136647a3", "text": "Social network sites provide the opportunity for bu ilding and maintaining online social network groups around a specific interest. Despite the increasing use of social networks in higher education, little previous research has studied their impacts on stud en ’s engagement and on their perceived educational outcomes. This research investigates the impact of instructors’ self-disclosure and use of humor via course-based social networks as well as their credi bility, and the moderating impact of time spent in hese course-based social networks, on the students’ enga g ment in course-based social networks. The researc h provides a theoretical viewpoint, supported by empi rical evidence, on the impact of students’ engageme nt in course-based social networks on their perceived educational outcomes. The findings suggest that instructors who create course-based online social n etworks to communicate with their students can increase their engagement, motivation, and satisfac on. We conclude the paper by suggesting the theoretical implications for the study and by provi ding strategies for instructors to adjust their act ivities in order to succeed in improving their students’ engag ement and educational outcomes.", "title": "" }, { "docid": "9889cb9ae08cd177e6fa55c3ae7b8831", "text": "Design and developmental procedure of strip-line based 1.5 MW, 30-96 MHz, ultra-wideband high power 3 dB hybrid coupler has been presented and its applicability in ion cyclotron resonance heating (ICRH) in tokamak is discussed. For the high power handling capability, spacing between conductors and ground need to very high. Hence other structural parameters like strip-width, strip thickness coupling gap, and junction also become large which can be gone upto optimum limit where various constrains like fabrication tolerance, discontinuities, and excitation of higher TE and TM modes become prominent and significantly deteriorates the desired parameters of the coupled lines system. In designed hybrid coupler, two 8.34 dB coupled lines are connected in tandem to get desired coupling of 3 dB and air is used as dielectric. The spacing between ground and conductors are taken as 0.164 m for 1.5 MW power handling capability. 
To have the desired spacing, each of 8.34 dB segments are designed with inner dimension of 3.6 × 1.0 × 40 cm where constraints have been significantly realized, compensated, and applied in designing of 1.5 MW hybrid coupler and presented in paper.", "title": "" }, { "docid": "ce0ba4696c26732ac72b346f72af7456", "text": "OBJECTIVE\nThe purpose of this study was to examine the relationship between two forms of helping behavior among older adults--informal caregiving and formal volunteer activity.\n\n\nMETHODS\nTo evaluate our hypotheses, we employed Tobit regression models to analyze panel data from the first two waves of the Americans' Changing Lives survey.\n\n\nRESULTS\nWe found that older adult caregivers were more likely to be volunteers than noncaregivers. Caregivers who provided a relatively high number of caregiving hours annually reported a greater number of volunteer hours than did noncaregivers. Caregivers who provided care to nonrelatives were more likely than noncaregivers to be a volunteer and to volunteer more hours. Finally, caregivers were more likely than noncaregivers to be asked to volunteer.\n\n\nDISCUSSION\nOur results provide support for the hypothesis that caregivers are embedded in networks that provide them with more opportunities for volunteering. Additional research on the motivations for volunteering and greater attention to the context and hierarchy of caregiving and volunteering are needed.", "title": "" }, { "docid": "e3c0073428eb554c1341b5ba3af3918a", "text": "Technological Pedagogical Content Knowledge (TPACK) has been introduced as a conceptual framework for the knowledge base teachers need to effectively teach with technology. The framework stems from the notion that technology integration in a specific educational context benefits from a careful alignment of content, pedagogy and the potential of technology, and that teachers who want to integrate technology in their teaching practice therefore need to be competent in all three domains. This study is a systematic literature review about TPACK of 55 peer-reviewed journal articles (and one book chapter), published between 2005 and 2011. The purpose of the review was to investigate the theoretical basis and the practical use of TPACK. Findings showed different understandings of TPACK and of technological knowledge. Implications of these different views impacted the way TPACK was measured. Notions about TPACK in subject domains were hardly found in the studies selected for this review. Teacher knowledge (TPACK) and beliefs about pedagogy and technology are intertwined. Both determine whether a teacher decides to teach with technology. Active involvement in (re)design and enactment of technology-enhanced lessons was found as a promising strategy for the development of TPACK in (student-)teachers. Future directions for research are discussed.", "title": "" }, { "docid": "aca8b1efb729bdc45f5363cb663dba74", "text": "Along with the burst of open source projects, software theft (or plagiarism) has become a very serious threat to the healthiness of software industry. Software birthmark, which represents the unique characteristics of a program, can be used for software theft detection. We propose a system call dependence graph based software birthmark called SCDG birthmark, and examine how well it reflects unique behavioral characteristics of a program. 
To our knowledge, our detection system based on SCDG birthmark is the first one that is capable of detecting software component theft where only partial code is stolen. We demonstrate the strength of our birthmark against various evasion techniques, including those based on different compilers and different compiler optimization levels as well as two state-of-the-art obfuscation tools. Unlike the existing work that were evaluated through small or toy software, we also evaluate our birthmark on a set of large software. Our results show that SCDG birthmark is very practical and effective in detecting software theft that even adopts advanced evasion techniques.", "title": "" }, { "docid": "c9c4ed4a7e8e6ef8ca2bcf146001d2e5", "text": "Microblogging services such as Twitter are said to have the potential for increasing political participation. Given the feature of 'retweeting' as a simple yet powerful mechanism for information diffusion, Twitter is an ideal platform for users to spread not only information in general but also political opinions through their networks as Twitter may also be used to publicly agree with, as well as to reinforce, someone's political opinions or thoughts. Besides their content and intended use, Twitter messages ('tweets') also often convey pertinent information about their author's sentiment. In this paper, we seek to examine whether sentiment occurring in politically relevant tweets has an effect on their retweetability (i.e., how often these tweets will be retweeted). Based on a data set of 64,431 political tweets, we find a positive relationship between the quantity of words indicating affective dimensions, including positive and negative emotions associated with certain political parties or politicians, in a tweet and its retweet rate. Furthermore, we investigate how political discussions take place in the Twitter network during periods of political elections with a focus on the most active and most influential users. Finally, we conclude by discussing the implications of our results.", "title": "" }, { "docid": "5df3346cb96403ee932428d159ad342e", "text": "Nearly 40% of mortality in the United States is linked to social and behavioral factors such as smoking, diet and sedentary lifestyle. Autonomous self-regulation of health-related behaviors is thus an important aspect of human behavior to assess. In 1997, the Behavior Change Consortium (BCC) was formed. Within the BCC, seven health behaviors, 18 theoretical models, five intervention settings and 26 mediating variables were studied across diverse populations. One of the measures included across settings and health behaviors was the Treatment Self-Regulation Questionnaire (TSRQ). The purpose of the present study was to examine the validity of the TSRQ across settings and health behaviors (tobacco, diet and exercise). The TSRQ is composed of subscales assessing different forms of motivation: amotivation, external, introjection, identification and integration. Data were obtained from four different sites and a total of 2731 participants completed the TSRQ. Invariance analyses support the validity of the TSRQ across all four sites and all three health behaviors. Overall, the internal consistency of each subscale was acceptable (most alpha values >0.73). 
The present study provides further evidence of the validity of the TSRQ and its usefulness as an assessment tool across various settings and for different health behaviors.", "title": "" }, { "docid": "26d7cf1e760e9e443f33ebd3554315b6", "text": "The arrival of a multinational corporation often looks like a death sentence to local companies in an emerging market. After all, how can they compete in the face of the vast financial and technological resources, the seasoned management, and the powerful brands of, say, a Compaq or a Johnson & Johnson? But local companies often have more options than they might think, say the authors. Those options vary, depending on the strength of globalization pressures in an industry and the nature of a company's competitive assets. In the worst case, when globalization pressures are strong and a company has no competitive assets that it can transfer to other countries, it needs to retreat to a locally oriented link within the value chain. But if globalization pressures are weak, the company may be able to defend its market share by leveraging the advantages it enjoys in its home market. Many companies in emerging markets have assets that can work well in other countries. Those that operate in industries where the pressures to globalize are weak may be able to extend their success to a limited number of other markets that are similar to their home base. And those operating in global markets may be able to contend head-on with multinational rivals. By better understanding the relationship between their company's assets and the industry they operate in, executives from emerging markets can gain a clearer picture of the options they really have when multinationals come to stay.", "title": "" }, { "docid": "adaab9f6e0355af12f4058a350076f87", "text": "Recently, the fusion of hyperspectral and light detection and ranging (LiDAR) data has obtained a great attention in the remote sensing community. In this paper, we propose a new feature fusion framework using deep neural network (DNN). The proposed framework employs a novel 3D convolutional neural network (CNN) to extract the spectral-spatial features of hyperspectral data, a deep 2D CNN to extract the elevation features of LiDAR data, and then a fully connected deep neural network to fuse the extracted features in the previous CNNs. Through the aforementioned three deep networks, one can extract the discriminant and invariant features of hyperspectral and LiDAR data. At last, logistic regression is used to produce the final classification results. The experimental results reveal that the proposed deep fusion model provides competitive results. Furthermore, the proposed deep fusion idea opens a new window for future research.", "title": "" }, { "docid": "b83eb2f78c4b48cf9b1ca07872d6ea1a", "text": "Network Function Virtualization (NFV) is emerging as one of the most innovative concepts in the networking landscape. By migrating network functions from dedicated mid-dleboxes to general purpose computing platforms, NFV can effectively reduce the cost to deploy and to operate large networks. However, in order to achieve its full potential, NFV needs to encompass also the radio access network allowing Mobile Virtual Network Operators to deploy custom resource allocation solutions within their virtual radio nodes. Such requirement raises several challenges in terms of performance isolation and resource provisioning. 
In this work we formalize the Virtual Network Function (VNF) placement problem for radio access networks as an integer linear programming problem and we propose a VNF placement heuristic. Moreover, we also present a proof-of-concept implementation of an NFV management and orchestration framework for Enterprise WLANs. The proposed architecture builds upon a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing nodes leveraging on general computing platforms.", "title": "" }, { "docid": "078ba976d84d15da757f3f5e165927d9", "text": "Evolutionary algorithms often have to solve optimization problems in the presence of a wide range of uncertainties. Generally, uncertainties in evolutionary computation can be divided into the following four categories. First, the fitness function is noisy. Second, the design variables and/or the environmental parameters may change after optimization, and the quality of the obtained optimal solution should be robust against environmental changes or deviations from the optimal point. Third, the fitness function is approximated, which means that the fitness function suffers from approximation errors. Fourth, the optimum of the problem to be solved changes over time and, thus, the optimizer should be able to track the optimum continuously. In all these cases, additional measures must be taken so that evolutionary algorithms are still able to work satisfactorily. This paper attempts to provide a comprehensive overview of the related work within a unified framework, which has been scattered in a variety of research areas. Existing approaches to addressing different uncertainties are presented and discussed, and the relationship between the different categories of uncertainties are investigated. Finally, topics for future research are suggested.", "title": "" }, { "docid": "c4094c8b273d6332f36b6f452886de6a", "text": "This paper presents original research on prevalence, user characteristics and effect profile of N,N-dimethyltryptamine (DMT), a potent hallucinogenic which acts primarily through the serotonergic system. Data were obtained from the Global Drug Survey (an anonymous online survey of people, many of whom have used drugs) conducted between November and December 2012 with 22,289 responses. Lifetime prevalence of DMT use was 8.9% (n=1980) and past year prevalence use was 5.0% (n=1123). We explored the effect profile of DMT in 472 participants who identified DMT as the last new drug they had tried for the first time and compared it with ratings provided by other respondents on psilocybin (magic mushrooms), LSD and ketamine. DMT was most often smoked and offered a strong, intense, short-lived psychedelic high with relatively few negative effects or \"come down\". It had a larger proportion of new users compared with the other substances (24%), suggesting its popularity may increase. Overall, DMT seems to have a very desirable effect profile indicating a high abuse liability that maybe offset by a low urge to use more.", "title": "" }, { "docid": "4b5b09ee38c87fdf7031f90530460d81", "text": "With the increasing adoption of Web Services and service-oriented computing paradigm, matchmaking of web services with the request has become a significant task. This warrants the need to establish an effective and reliable Web Service discovery. Here reducing the service discovery time and increasing the quality of discovery are key issues. 
This paper proposes a new semantic Web Service discovery scheme where the similarity between the query and service is decided using the WSDL specification and ontology, and the improved Hungarian algorithm is applied to quickly find the maximum match. The proposed approach utilizes the structure of datatype and operation, and natural language description used for information retrieval. Computer simulation reveals that the proposed scheme substantially increases the quality of service discovery compared to the existing schemes in terms of precision, recall rate, and F-measure. Moreover, the proposed scheme allows consistently smaller discovery time, while the improvement gets more significant as the number of compared parameters increases.", "title": "" } ]
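The last passage in this record states that an improved Hungarian algorithm is used to quickly find the maximum match between query and service descriptions. As a hedged illustration of just that matching step (not of the paper's WSDL- and ontology-based similarity computation, which is not shown), the sketch below feeds a made-up parameter-similarity matrix to SciPy's assignment solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical similarity scores between query parameters (rows) and the
# parameters of one candidate service (columns).
similarity = np.array([
    [0.9, 0.1, 0.3],
    [0.2, 0.8, 0.4],
    [0.3, 0.5, 0.7],
])

# linear_sum_assignment solves the assignment problem as a minimization
# (Hungarian-style), so negate the similarities to get a maximum-weight matching.
rows, cols = linear_sum_assignment(-similarity)
score = similarity[rows, cols].sum()

print("matched parameter pairs:", list(zip(rows.tolist(), cols.tolist())))
print("aggregate similarity used to rank this service:", round(float(score), 3))
```

Ranking candidate services by this aggregate score, and pruning low-similarity pairs before matching, are natural extensions, but how the paper handles either is not specified in the abstract.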
scidocsrr
2ea6466de9702c55fb87df541947b9d0
Searching by Talking: Analysis of Voice Queries on Mobile Web Search
[ { "docid": "ef08ef786fd759b33a7d323c69be19db", "text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.", "title": "" } ]
[ { "docid": "f4abfe0bb969e2a6832fa6317742f202", "text": "We built a highly compliant, underactuated, robust and at the same time dexterous anthropomorphic hand. We evaluate its dexterous grasping capabilities by implementing the comprehensive Feix taxonomy of human grasps and by assessing the dexterity of its opposable thumb using the Kapandji test. We also illustrate the hand’s payload limits and demonstrate its grasping capabilities in real-world grasping experiments. To support our claim that compliant structures are beneficial for dexterous grasping, we compare the dimensionality of control necessary to implement the diverse grasp postures with the dimensionality of the grasp postures themselves. We find that actuation space is smaller than posture space and explain the difference with the mechanic interaction between hand and grasped object. Additional desirable properties are derived from using soft robotics technology: the hand is robust to impact and blunt collisions, inherently safe, and not affected by dirt, dust, or liquids. Furthermore, the hand is simple and inexpensive to manufacture.", "title": "" }, { "docid": "b0c60343724a49266fac2d2f4c2d37d3", "text": "In the Western world, aging is a growing problem of the society and computer assisted treatments can facilitate the telemedicine for old people or it can help in rehabilitations of patients after sport accidents in far locations. Physical exercises play an important role in physiotherapy and RGB-D devices can be utilized to recognize them in order to make interactive computer healthcare applications in the future. A practical model definition is introduced in this paper to recognize different exercises with Asus Xtion camera. One of the contributions is the extendable recognition models to detect other human activities with noisy sensors, but avoiding heavy data collection. The experiments show satisfactory detection performance without any false positives which is unique in the field to the best of the author knowledge. The computational costs are negligible thus the developed models can be suitable for embedded systems.", "title": "" }, { "docid": "d7bb22eefbff0a472d3e394c61788be2", "text": "Crowd evacuation of a building has been studied over the last decades. In this paper, seven methodological approaches for crowd evacuation have been identified. These approaches include cellular automata models, lattice gas models, social force models, fluid-dynamic models, agent-based models, game theoretic models, and approaches based on experiments with animals. According to available literatures, we discuss the advantages and disadvantages of these approaches, and conclude that a variety of different kinds of approaches should be combined to study crowd evacuation. Psychological and physiological elements affecting individual and collective behaviors should be also incorporated into the evacuation models. & 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "ca9c4512d2258a44590a298879219970", "text": "I propose a common framework that combines three different paradigms in machine learning: generative, discriminative and imitative learning. A generative probabilistic distribution is a principled way to model many machine learning and machine perception problems. Therein, one provides domain specific knowledge in terms of structure and parameter priors over the joint space of variables. 
Bayesian networks and Bayesian statistics provide a rich and flexible language for specifying this knowledge and subsequently refining it with data and observations. The final result is a distribution that is a good generator of novel exemplars. Conversely, discriminative algorithms adjust a possibly non-distributional model to data optimizing for a specific task, such as classification or prediction. This typically leads to superior performance yet compromises the flexibility of generative modeling. I present Maximum Entropy Discrimination (MED) as a framework to combine both discriminative estimation and generative probability densities. Calculations involve distributions over parameters, margins, and priors and are provably and uniquely solvable for the exponential family. Extensions include regression, feature selection, and transduction. SVMs are also naturally subsumed and can be augmented with, for example, feature selection, to obtain substantial improvements. To extend to mixtures of exponential families, I derive a discriminative variant of the ExpectationMaximization (EM) algorithm for latent discriminative learning (or latent MED). While EM and Jensen lower bound log-likelihood, a dual upper bound is made possible via a novel reverse-Jensen inequality. The variational upper bound on latent log-likelihood has the same form as EM bounds, is computable efficiently and is globally guaranteed. It permits powerful discriminative learning with the wide range of contemporary probabilistic mixture models (mixtures of Gaussians, mixtures of multinomials and hidden Markov models). We provide empirical results on standardized data sets that demonstrate the viability of the hybrid discriminative-generative approaches of MED and reverse-Jensen bounds over state of the art discriminative techniques or generative approaches. Subsequently, imitative learning is presented as another variation on generative modeling which also learns from exemplars from an observed data source. However, the distinction is that the generative model is an agent that is interacting in a much more complex surrounding external world. It is not efficient to model the aggregate space in a generative setting. I demonstrate that imitative learning (under appropriate conditions) can be adequately addressed as a discriminative prediction task which outperforms the usual generative approach. This discriminative-imitative learning approach is applied with a generative perceptual system to synthesize a real-time agent that learns to engage in social interactive behavior. Thesis Supervisor: Alex Pentland Title: Toshiba Professor of Media Arts and Sciences, MIT Media Lab Discriminative, Generative and Imitative Learning", "title": "" }, { "docid": "9584909fc62cca8dc5c9d02db7fa7e5d", "text": "As the nature of many materials handling tasks have begun to change from lifting to pushing and pulling, it is important that one understands the biomechanical nature of the risk to which the lumbar spine is exposed. Most previous assessments of push-pull tasks have employed models that may not be sensitive enough to consider the effects of the antagonistic cocontraction occurring during complex pushing and pulling motions in understanding the risk to the spine and the few that have considered the impact of cocontraction only consider spine load at one lumbar level. 
This study used an electromyography-assisted biomechanical model sensitive to complex motions to assess spine loadings throughout the lumbar spine as 10 males and 10 females pushed and pulled loads at three different handle heights and of three different load magnitudes. Pulling induced greater spine compressive loads than pushing, whereas the reverse was true for shear loads at the different lumbar levels. The results indicate that, under these conditions, anterior-posterior (A/P) shear loads were of sufficient magnitude to be of concern especially at the upper lumbar levels. Pushing and pulling loads equivalent to 20% of body weight appeared to be the limit of acceptable exertions, while pulling at low and medium handle heights (50% and 65% of stature) minimised A/P shear. These findings provide insight to the nature of spine loads and their potential risk to the low back during modern exertions.", "title": "" }, { "docid": "4cc4c8fd07f30b5546be2376c1767c19", "text": "We apply new bilevel and trilevel optimization models to make critical infrastructure more resilient against terrorist attacks. Each model features an intelligent attacker (terrorists) and a defender (us), information transparency, and sequential actions by attacker and defender. We illustrate with examples of the US Strategic Petroleum Reserve, the US Border Patrol at Yuma, Arizona, and an electrical transmission system. We conclude by reporting insights gained from the modeling experience and many “red-team” exercises. Each exercise gathers open-source data on a real-world infrastructure system, develops an appropriate bilevel or trilevel model, and uses these to identify vulnerabilities in the system or to plan an optimal defense.", "title": "" }, { "docid": "8c174dbb8468b1ce6f4be3676d314719", "text": "An estimated 24 million people worldwide have dementia, the majority of whom are thought to have Alzheimer's disease. Thus, Alzheimer's disease represents a major public health concern and has been identified as a research priority. Although there are licensed treatments that can alleviate symptoms of Alzheimer's disease, there is a pressing need to improve our understanding of pathogenesis to enable development of disease-modifying treatments. Methods for improving diagnosis are also moving forward, but a better consensus is needed for development of a panel of biological and neuroimaging biomarkers that support clinical diagnosis. There is now strong evidence of potential risk and protective factors for Alzheimer's disease, dementia, and cognitive decline, but further work is needed to understand these better and to establish whether interventions can substantially lower these risks. In this Seminar, we provide an overview of recent evidence regarding the epidemiology, pathogenesis, diagnosis, and treatment of Alzheimer's disease, and discuss potential ways to reduce the risk of developing the disease.", "title": "" }, { "docid": "8af2e53cb3f77a2590945f135a94279b", "text": "Time series data are an ubiquitous and important data source in many domains. Most companies and organizations rely on this data for critical tasks like decision-making, planning, and analytics in general. Usually, all these tasks focus on actual data representing organization and business processes. In order to assess the robustness of current systems and methods, it is also desirable to focus on time-series scenarios which represent specific time-series features. 
This work presents a generally applicable and easy-to-use method for the feature-driven generation of time series data. Our approach extracts descriptive features of a data set and allows the construction of a specific version by means of the modification of these features.", "title": "" }, { "docid": "6b8329ef59c6811705688e48bf6c0c08", "text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.", "title": "" }, { "docid": "1785d1d7da87d1b6e5c41ea89e447bf9", "text": "Web usage mining is the application of data mining techniques to discover usage patterns from Web data, in order to understand and better serve the needs of Web-based applications. Web usage mining consists of three phases, namely preprocessing, pattern discovery, and pattern analysis. This paper describes each of these phases in detail. Given its application potential, Web usage mining has seen a rapid increase in interest, from both the research and practice communities. This paper provides a detailed taxonomy of the work in this area, including research efforts as well as commercial offerings. An up-to-date survey of the existing work is also provided. Finally, a brief overview of the WebSIFT system as an example of a prototypical Web usage mining system is given.", "title": "" }, { "docid": "924768b271caa9d1ba0cb32ab512f92e", "text": "Traditional keyboard and mouse based presentation prevents lecturers from interacting with the audiences freely and closely. In this paper, we propose a gesture-aware presentation tool named SlideShow to liberate lecturers from physical space constraints and make human-computer interaction more natural and convenient. In our system, gesture data is obtained by a handle controller with 3-axis accelerometer and gyro and transmitted to host-side through bluetooth, then we use Bayesian change point detection to segment continuous gesture series and HMM to recognize the gesture. In consequence Slideshow could carry out the corresponding operations on PowerPoint(PPT) to make a presentation, and operation states can be switched automatically and intelligently during the presentation. 
Both the experimental and testing results show our approach is practical, useful and convenient.", "title": "" }, { "docid": "d2f64c21d0a3a54b4a2b75b7dd7df029", "text": "Library of Congress Cataloging in Publication Data EB. Boston studies in the philosophy of science.The concept of autopoiesis is due to Maturana and Varela 8, 9. The aim of this article is to revisit the concepts of autopoiesis and cognition in the hope of.Amazon.com: Autopoiesis and Cognition: The Realization of the Living Boston Studies in the Philosophy of Science, Vol. 42 9789027710161: H.R. Maturana.Autopoiesis, The Santiago School of Cognition, and. In their early work together Maturana and Varela developed the idea of autopoiesis.Autopoiesis and Cognition: The Realization of the Living Dordecht.", "title": "" }, { "docid": "566c6e3f9267fc8ccfcf337dc7aa7892", "text": "Research into the values motivating unsustainable behavior has generated unique insight into how NGOs and environmental campaigns contribute toward successfully fostering significant and long-term behavior change, yet thus far this research has not been applied to the domain of sustainable HCI. We explore the implications of this research as it relates to the potential limitations of current approaches to persuasive technology, and what it means for designing higher impact interventions. As a means of communicating these implications to be readily understandable and implementable, we develop a set of antipatterns to describe persuasive technology approaches that values research suggests are unlikely to yield significant sustainability wins, and a complementary set of patterns to describe new guidelines for what may become persuasive technology best practice.", "title": "" }, { "docid": "f48d02ff3661d3b91c68d6fcf750f83e", "text": "There have been a number of techniques developed in recent years for the efficient analysis of probabilistic inference problems, represented as Bayes' networks or influence diagrams (Lauritzen and Spiegelhalter [9], Pearl [12], Shachter [14]). To varying degrees these methods exploit the conditional independence assumed and revealed in the problem structure to analyze problems in polynomial time, essentially polynomial in the number of variables and the size of the largest state space encountered during the evaluation. Unfortunately, there are many problems of interest for which the variables of interest are continuous rather than discrete, so the relevant state spaces become infinite and the polynomial complexity is of little help.", "title": "" }, { "docid": "c3558d8f79cd8a7f53d8b6073c9a7db3", "text": "De novo assembly of RNA-seq data enables researchers to study transcriptomes without the need for a genome sequence; this approach can be usefully applied, for instance, in research on 'non-model organisms' of ecological and evolutionary importance, cancer samples or the microbiome. In this protocol we describe the use of the Trinity platform for de novo transcriptome assembly from RNA-seq data in non-model organisms. We also present Trinity-supported companion utilities for downstream applications, including RSEM for transcript abundance estimation, R/Bioconductor packages for identifying differentially expressed transcripts across samples and approaches to identify protein-coding genes. In the procedure, we provide a workflow for genome-independent transcriptome analysis leveraging the Trinity platform. The software, documentation and demonstrations are freely available from http://trinityrnaseq.sourceforge.net. 
The run time of this protocol is highly dependent on the size and complexity of data to be analyzed. The example data set analyzed in the procedure detailed herein can be processed in less than 5 h.", "title": "" }, { "docid": "745cdbb442c73316f691dc20cc696f31", "text": "Computer-generated texts, whether from Natural Language Generation (NLG) or Machine Translation (MT) systems, are often post-edited by humans before being released to users. The frequency and type of post-edits is a measure of how well the system works, and can be used for evaluation. We describe how we have used post-edit data to evaluate SUMTIME-MOUSAM, an NLG system that produces weather forecasts.", "title": "" }, { "docid": "f90784e4bdaad1f8ecb5941867a467cf", "text": "Social Networks (SN) Sites are becoming very popular and the number of users is increasing rapidly. However, with that increase there is also an increase in the security threats which affect the users’ privacy, identity and confidentiality. Different research groups highlighted the security threats in SN and attempted to offer some solutions to these issues. In this paper we survey several examples of this research and highlight the approaches. All the models we surveyed were focusing on protecting users’ information yet they failed to cover other important issues. For example, none of the mechanisms provided the users with control over what others can reveal about them; and encryption of images is still not achieved properly. Generally having higher security measures will affect the system’s performance in terms of speed and response time. However, this trade-off was not discussed or addressed in any of the models we surveyed.", "title": "" }, { "docid": "a38986fcee27fb733ec51cf83771a85f", "text": "A tunable broadband inverted microstrip line phase shifter filled with Liquid Crystals (LCs) is investigated between 1.125 GHz and 35 GHz at room temperature. The effective dielectric anisotropy is tuned by a DC-voltage of up to 30 V. In addition to standard LCs like K15 (5CB), a novel highly anisotropic LC mixture is characterized by a resonator method at 8.5 GHz, showing a very high dielectric anisotropy /spl Delta/n of 0.32 for the novel mixture compared to 0.13 for K15. These LCs are filled into two inverted microstrip line phase shifter devices with different polyimide films and heights. With a physical length of 50 mm, the insertion losses are about 4 dB for the novel mixture compared to 6 dB for K15 at 24 GHz. A differential phase shift of 360/spl deg/ can be achieved at 30 GHz with the novel mixture. The figure-of-merit of the phase shifter exceeds 110/spl deg//dB for the novel mixture compared to 21/spl deg//dB for K15 at 24 GHz. To our knowledge, this is the best value above 20 GHz at room temperature demonstrated for a tunable phase shifter based on nonlinear dielectrics up to now. This substantial progress opens up totally new low-cost LC applications beyond optics.", "title": "" }, { "docid": "ab0c80a10d26607134828c6b350089aa", "text": "Parkinson's disease (PD) is a neurodegenerative disorder with symptoms that progressively worsen with age. Pathologically, PD is characterized by the aggregation of α-synuclein in cells of the substantia nigra in the brain and loss of dopaminergic neurons. This pathology is associated with impaired movement and reduced cognitive function. The etiology of PD can be attributed to a combination of environmental and genetic factors. 
A popular animal model, the nematode roundworm Caenorhabditis elegans, has been frequently used to study the role of genetic and environmental factors in the molecular pathology and behavioral phenotypes associated with PD. The current review summarizes cellular markers and behavioral phenotypes in transgenic and toxin-induced PD models of C. elegans.", "title": "" } ]
scidocsrr
c56221a0fc7102278dab8e346a909d3f
Personal freedom Gambling is a leisure activity enjoyed by many millions of people. Governments should not tell people what they can do with their own money. Those who don’t like gambling should be free to buy adverts warning people against it, but they should not be able to use the law to impose their own beliefs. Online gambling has got rid of the rules that in the past made it hard to gamble for pleasure and allowed many more ordinary people to enjoy a bet from time to time. It provides the freedom to gamble, whenever and wherever and with whatever method the individual prefers.
[ { "docid": "e51474dedeecb206ba3e9c94942ea744", "text": "economic policy law crime policing digital freedoms freedom expression People are not free to do whatever they want whenever they want. When their activities harm society it is the government’s role to step in to prevent that harm. Online gambling simply provides the freedom for more people to get into debt, not a freedom that should be encouraged.\n", "title": "" } ]
[ { "docid": "4c6d1733c619690dbf76333b473b9f45", "text": "economic policy law crime policing digital freedoms freedom expression Gambling is quite different from buying stocks and shares. With the stock market investors are buying a stake in an actual company. This share may rise or fall in value, but so can a house or artwork. In each case there is a real asset that is likely to hold its value in the long term, which isn’t the case with gambling. Company shares and bonds can even produce a regular income through dividend and interest payments. It is true that some forms of financial speculation are more like gambling – for example the derivatives market or short-selling, where the investor does not actually own the asset being traded. But these are not types of investment that ordinary people have much to do with. They are also the kinds of financial activity most to blame for the financial crisis, which suggests we need more government control, not less.\n", "title": "" }, { "docid": "72f61b1a779f57be5a7ea0e8aa7707e5", "text": "economic policy law crime policing digital freedoms freedom expression It is only in the interests of big gambling sites that aim to create a long term business to go along with tough regulation. Online gambling sites can get around government regulations that limit the dangers of betting. Because they can be legally sited anywhere in the world, they can pick countries with no rules to protect customers. In the real world governments can ban bets being taken from children and drunks. They can make sure that the odds are not changed to suit the House. And they can check that people running betting operations don’t have criminal records. In online gambling on the other hand 50% of players believe that internet casino’s cheat [14].\n", "title": "" }, { "docid": "81f981d884a7ebc9c66aa0dd772a5c05", "text": "economic policy law crime policing digital freedoms freedom expression Governments have the power to ban online gambling in their own country. Even if citizens could use foreign websites, most will not choose to break the law. When the United States introduced its Unlawful Internet Gambling Enforcement Act in 2006 gambling among those of college-age fell from 5.8% to 1.5% [12]. Blocking the leading websites will also be effective, as it makes it very hard for them to build a trusted brand. And governments can stop their banks handling payments to foreign gambling companies, cutting off their business.\n", "title": "" }, { "docid": "46ba8fc99d8acbdf158083b449f6ec85", "text": "economic policy law crime policing digital freedoms freedom expression Because people will gamble anyway, the best that governments can do is make sure that their people gamble in safe circumstances. This means real world that casinos and other betting places that can easily be monitored.\n\nThe examples of government using gambling for their own purposes are really the government turning gambling into a benefit for the country. Physical casinos benefit the economy and encourage investment, and lotteries can be used to raise money for good causes. Online gambling undermines all this, as it can be sited anywhere in the world but can still compete with, and undercut organised national betting operations.\n", "title": "" }, { "docid": "de909a7b7e21de332a4bbce9a6430cfa", "text": "economic policy law crime policing digital freedoms freedom expression There is no evidence that gambling prevents people from caring for their family. The vast majority who gamble do so responsibly. 
It isn’t right to ban something that millions of people enjoy just because a few cause problems. And banning gambling, whether online or in the real world will not stop these problems. Sadly, even if it is illegal, people with problems will still find a way to hurt those around them – just look at drugs.\n", "title": "" }, { "docid": "2e08f5bb359b2c9caf5ce492a01912f0", "text": "economic policy law crime policing digital freedoms freedom expression Criminals will always try to exploit any system, but if governments allow legal online gambling they can regulate it. It is in the interest of gambling companies to build trustworthy brands and cooperate with the authorities on stopping any crime. Cheats in several sports have been caught because legal websites reported strange betting patterns. Betfair for example provides the authorities with an early warning system ‘BetMon’ to watch betting patterns.\n", "title": "" }, { "docid": "154ad68e18b3c20384a606614b4ee484", "text": "economic policy law crime policing digital freedoms freedom expression Unlike drugs, gambling is not physically or metabolically addictive. Most gamblers are not addicts, simply ordinary people who enjoy the excitement of a bet on a sporting event or card game. The large majority of people who gamble online keep to clear limits and stop when they reach them. The few people with a problem with being addicted will still find ways to gamble if gambling is illegal either through a casino, or else still online but in a black market that offers no help and that may use criminal violence to enforce payment.\n", "title": "" }, { "docid": "5ce1d9b6ed0d3b41e470e2807c037972", "text": "economic policy law crime policing digital freedoms freedom expression Every leisure industry attracts a few troubled individuals who take the activity to harmful extremes. For every thousand drinkers there are a few alcoholics. Similarly some sports fans are hooligans. Those who gamble enough to harm themselves would be those who would gamble in casinos if the internet option was not available.\n", "title": "" }, { "docid": "f1a2f9aaec6eb4fa051fe97e1a9952e2", "text": "economic policy law crime policing digital freedoms freedom expression Government only objects to online gambling because they dont benefit\n\nGovernments are hypocritical about gambling. They say they don’t like it but they often use it for their own purposes. Sometimes they only allow gambling in certain places in order to boost a local economy. Sometimes they profit themselves by running the only legal gambling business, such as a National Lottery [15] or public racecourse betting. This is bad for the public who want to gamble. Online gambling firms can break through government control by offering better odds and attractive new games.\n", "title": "" }, { "docid": "bcf30ccecd8726747480c24d543ef251", "text": "economic policy law crime policing digital freedoms freedom expression Cant enforce an online gambling ban\n\nGovernments can’t actually do anything to enforce a ban on the world wide web. Domestic laws can only stop internet companies using servers and offices in their own country. They cannot stop their citizens going online to gamble using sites based elsewhere. Governments can try to block sites they disapprove of, but new ones will keep springing up and their citizens will find ways around the ban. So practically there is little the government can do to stop people gambling online. 
Despite it being illegal the American Gambling Association has found that 4% of Americans already engage in online gambling [11].\n", "title": "" }, { "docid": "4da6f98c448e1b1d7fc1482abcb0da32", "text": "economic policy law crime policing digital freedoms freedom expression Other forms of online gambling\n\nWhat is the difference between gambling and playing the stock market? In each case people are putting money at risk in the hope of a particular outcome. Gambling on horse-racing or games involves knowledge and expertise that can improve your chances of success. In the same way, trading in bonds, shares, currency or derivatives is a bet that your understanding of the economy is better than that of other investors. Why should one kind of online risk-taking be legal and the other not?\n", "title": "" }, { "docid": "f026abdff01f2a90b1308cbeeb08af16", "text": "economic policy law crime policing digital freedoms freedom expression Only regulation can mitigate harms\n\nIt is where the sites operate, not where they are set up that matters for regulation. It is in gambling sites interest to run a trustworthy, responsible business. Whatever they are looking for online, internet users choose trusted brands that have been around for a while. If a gambling site acts badly, for example by changing its odds unfairly, word will soon get around and no one will want to use it. Regulation will mean that sites will have to verify the age of their users and prevent problem gamblers from accessing their site. When there is regulation consumers will go to the sites that are verified by their government and are providing a legal, safe service [13].\n", "title": "" }, { "docid": "d75df3012dd41644ccfcc97c5b9b7a79", "text": "economic policy law crime policing digital freedoms freedom expression Online gambling affects families\n\nA parent who gambles can quickly lose the money their family depends on for food and rent. It is a common cause of family break-up and homelessness, so governments should get involved to protect innocent children from getting hurt [5]. Each problem gambler harmfully impacts 10-15 other people [6]. The internet makes it easy for gamblers to bet secretly, without even leaving the house, so people become addicted to gambling without their families realising what is going on until too late.\n", "title": "" }, { "docid": "eaab866fdf1b9283debf296a7cdf07be", "text": "economic policy law crime policing digital freedoms freedom expression Gambling is addictive.\n\nHumans get a buzz from taking a risk and the hope that this time their luck will be in, this is similar to drug addicts [7]. The more people bet, the more they want to bet, so they become hooked on gambling which can wreck their lives. Internet gambling is worse because it is not a social activity. Unlike a casino or race track, you don’t have to go anywhere to do it, which can put a brake on the activity. The websites never shut. There won’t be people around you to talk you out of risky bets. There is nothing to stop you gambling your savings away while drunk.\n", "title": "" }, { "docid": "73f1009aced88d08400ec728176354d6", "text": "economic policy law crime policing digital freedoms freedom expression Gambling is bad for you.\n\nGamblers may win money from time to time, but in the long run, the House always wins. Why should governments allow an activity that helps their citizens lose the money they have worked so hard to earn? 
The harm is not just the loss of money and possible bankruptcy; it causes depression, insomnia, and other stress related disorders [4]. The internet has made gambling so much easier to do and encouraged lots of new people to place bets so dramatically multiplying the harm.\n", "title": "" }, { "docid": "14be38e43a7e16f44a2871a450dccbe5", "text": "economic policy law crime policing digital freedoms freedom expression Online gambling encourages crime\n\nHuman trafficking, forced prostitution and drugs provide $2.1 billion a year for the Mafia but they need some way through which to put this money into circulation. Online gambling is that way in. They put dirty money in and win clean money back [8]. Because it is so international and outside normal laws, it makes criminal cash hard to track. There is a whole array of other crime associated with online gambling; hacking, phishing, extortion, and identity fraud, all of which can occur on a large scale unconstrained by physical proximity [9]. Online gambling also encourages corruption in sport. By allowing huge sums of money to be bet internationally on the outcome of a game or race, it draws in criminals who can try to bribe or threaten sportsmen.\n", "title": "" } ]
arguana
461eeab1e14bf2bb3cd7386712861f12
The prevention of atrocities during war and unrest. In the past, horrific crimes could be committed in war zones without anyone ever knowing about it, or with news of it reaching the international community with a significant time lag, when it was too late to intervene. But with the presence of internet connected mobile devices everywhere, capable of uploading live footage within seconds of an event occurring, the entire world can monitor and find out what is happening on the scene, in real time. It lets repressive regimes know the entire world is watching them, that they cannot simply massacre their people with impunity, and it creates evidence for potential prosecutions if they do. It, therefore, puts pressure on them to respect the rights of their citizens during such precarious times. To prevent governments from violently stamping out public political dissent without evidence, internet access must be preserved, especially in times of war or political unrest. [1] [1] Bildt, Carl, 2012. “A Victory for The Internet”. New York Times. 5 July 2012.
[ { "docid": "55f34c7e064bd48c7274695b7a81afb4", "text": "government terrorism digital freedoms access information should Being able to witness atrocities from the field in real time does not change the international community’s capacity or political willingness to intervene in such situations. If anything, it has had the unfortunate side effect of desensitizing international public opinion to the horrors of war and conflicts, like the one in Syria where there have been thousands of videos showing the actions of the Syrian government but this has not resulted in action from the international community. [1] The onslaught of gruesome, graphic imagery has made people more used to witnessing such scenes from afar and less likely to be outraged and to ask their governments to intervene.\n\n[1] Harding, Luke, 2012. “Syria’s video activists give revolution the upper hand in media war”. Guardian.co.uk, 1 August 2012.\n", "title": "" } ]
[ { "docid": "ea123c1aaad9989c7b7cfaf3f5f308b7", "text": "government terrorism digital freedoms access information should Freedom of expression, assembly, and information are important rights, but restrictions can be placed on all of them if a greater good, like public safety, is at stake. For example, one cannot use her freedom of expression to incite violence towards others and many countries regard hate speech as a crime. [1] Therefore, if the internet is being used for such abuses of ones rights, the disruption of service, even to a large number of people, can be entirely warranted.\n\n[1] Waldron, Jeremy, The Harm in Hate Speech, Harvard University Press, 8 June 2012, p.8.\n", "title": "" }, { "docid": "f4dd344282c44b8d35ea262291f484c4", "text": "government terrorism digital freedoms access information should Democratic change can come about in a variety of ways. Violent public protests are only one such way, and probably the least desirable one. And now, with access to social media nearly universally available, such protests can be organized faster, on a larger, more dangerous scale than ever before. It encourages opposition movements and leaders in such countries to turn away from incremental, but peaceful changes through political negotiations, and to appeal to mass protests instead, thus endangering the life or their supporters and that of the general public. Governments that respond to violence by cutting off access are not responding with repression but simply trying to reduce the violence. Cutting internet access is a peaceful means of preventing organized violence that potentially saves lives by preventing confrontation between violent groups and riot police.\n", "title": "" }, { "docid": "ceef1f7e5b30d1ba0b21509db0e696da", "text": "government terrorism digital freedoms access information should Historical precedent does not apply to the internet. It is very different to media reporting during times of unrest; the internet is not just a means of disseminating information but also for many people their main form of communication; the U.S. government has never tried to ban people from using telephones. There are severe downsides to the censorship of information during times of war or civil unrest, the most notable one being that it is used to hide the real cost and consequences of war from the population which is expected to support it. Conversely, in a world where every mobile phone is now connected to a global network, people all around the world can have access to an unparalleled amount of information from the field. Curtailing such internet access is to their detriment.\n", "title": "" }, { "docid": "5f1aef8d29eafd3f70f7c92067f6339b", "text": "government terrorism digital freedoms access information should Other means can be employed to ensure the safety of the population without disrupting access to the internet, like deploying security forces to make sure protests don’t get out of hand or turn violent. In fact, being able to monitor online activity through social media like Facebook and Twitter might actually aid, rather than hinder law enforcement in ensuring the safety of the public. London’s police force, the Metropolitan Police, in the wake of the riots has are using software to monitor social media to predict where social disorder may take place. [1]\n\n[1] Adams, Lucy, 2012. “Police develop technology to monitor social neworks”. 
Heraldscotland, 6 August 2012.\n", "title": "" }, { "docid": "432d37713306c981c63f858686094fc4", "text": "government terrorism digital freedoms access information should In July 2012, The United Nations Human Rights Council endorsed a resolution upholding the principle of freedom of expression and information on the internet. In a special report, it also “called upon all states to ensure that Internet access is maintained at all times, including during times of political unrest” [1] . While access to the internet has not yet had time to establish itself legally as a human right, there are compelling reasons to change its legal status, and the UN is leading the charge. Even before internet access is recognized as a human right the idea that national security should take precedence over ‘lesser rights’ is wrong; states should not survive at the expense of the rights of their citizens. States exist to protect their citizens not harm them.\n\n[1] Kravets, David, 2011. “UN Report Declares Internet Access a Human Right”. Wired.com, 6 November 2011.\n", "title": "" }, { "docid": "cf47f900746702d040833d9df8416bee", "text": "government terrorism digital freedoms access information should Disrupting internet service is a form of repression.\n\nThe organization of public protests is an invaluable right for citizens living under the rule of oppressive regimes. Like in the case of the Arab Spring, internet access gives them the tools to mobilize, make their message heard, and demand greater freedoms. In such cases, under the guise of concern for public safety, these governments disrupt internet service in an attempt to stamp out legitimate democratic protests and stamp out the dissatisfied voices of their citizens [1] They are concerned not for the safety of the public, but to preserve their own grasp on power. A good example of this are the actions of the government of Myanmar when in 2007 in response to large scale protests the government cut internet access to the whole country in order to prevent reports of the government’s crackdown getting out. [2] Establishing internet access as a fundamental right at international level would make it clear to such governments that they cannot simply cut access as a tactic to prevent legitimate protests against them.\n\n[1] The Telegraph. “Egypt. Internet Service Disrupted Before Large Rally”. 28 January 2011.\n\n[2] Tran, Mark, 2007. “Internet access cut off in Burma”. Guardian.co.uk, 28 September 2007.\n", "title": "" }, { "docid": "2c322b6919bed304eaa50dba196afc8f", "text": "government terrorism digital freedoms access information should The right to internet access as a fundamental right.\n\nInternet access is a “facilitative right”, in that it facilitates access to the exercise of many other rights: like freedom of expression, information, and assembly. It is a “gateway right”. Possessing a right is only as valuable as your capacity to exercise it. A government cannot claim to protect freedom of speech or expression, and freedom of information, if it is taking away from its citizens the tools to access them. And that is exactly what the disruption of internet service does. Internet access needs to be a protected right so that all other rights which flow from it. [1]\n\nThe Internet is a tool of communication so it is important not just to individuals but also to communities. 
The internet becomes an outlet that can help to preserve groups’ culture or language [2] and so as an enabler of this groups’ culture access to the internet may also be seen as a group right – one which would be being infringed when the state cuts off access to large numbers of individuals.\n\n[1] BBC, 2010. “Internet Access is ‘a Fundamental Right’\".\n\n[2] Jones, Peter, 2008. \"Group Rights\", The Stanford Encyclopedia of Philosophy (Winter 2008 Edition), Edward N. Zalta (ed.).\n", "title": "" }, { "docid": "b174a22c6e88b863f97d61570a80dd8c", "text": "government terrorism digital freedoms access information should Historical precedent.\n\nHistorically, governments have always controlled the access to information and placed restriction on media during times of war. This is an entirely reasonable policy and is done for a number of reasons: to sustain morale and prevent predominantly negative stories from the battlefield reaching the general public, and to intercept propaganda from the enemy, which might endanger the war effort [1] . For example, both Bush administrations imposed media blackouts during wartime over the return of the bodies of dead American soldiers at Dover airport [2] . The internet is simply a new medium of transmitting information, and the same principles can be applied to its regulation, especially when the threat to national security is imminent, like in the case of disseminating information for the organization of a violent protest.\n\n[1] Payne, Kenneth. 2005. “The Media as an Instrument of War”. Parameters, Spring 2005, pp. 81-93.\n\n[2] BBC, 2009. “US War Dead Media Blackout Lifted”.\n", "title": "" }, { "docid": "8a89fc13e9fd39fe304ec49b0a276003", "text": "government terrorism digital freedoms access information should The internet as a threat to public safety.\n\nThe internet can be used as a tool to create an imminent threat to the public. If public officials had information that a massive protest is being organized, which could spiral into violence and endanger the safety of the public, it would be irresponsible for the government not to try to prevent such a protest. Governments are entrusted with protecting public safety and security, and not preventing such a treat would constitute a failure in the performance of their duties [1] . An example of this happening was the use first of Facebook and twitter and then of Blackberry messenger to organise and share information on the riots in London in the summer of 2011. [2]\n\n[1] Wyatt, Edward, 2012. “FCC Asks for Guidance on Whether, and When to Cut Off Cellphone Service.” New York Times, 2 March 2012.\n\n[2] Halliday, Josh, 2011. “London riots: how BlackBerry Messenger played a key role”. Guardian.co.uk, 8 August 2011.\n", "title": "" }, { "docid": "d94f0651ec750205a84309e1ff377d1b", "text": "government terrorism digital freedoms access information should National security takes precedence.\n\nInternet access is not a fundamental right as recognized by any major human rights convention, if it can be called a right at all. [1] Even if we accept that people should have a right to internet access, in times of war or civil unrest the government should be able to abridge lesser rights for the sake of something that is critical to the survival of the state, like national security. After all, in a war zone few rights survive or can be upheld at all. Preventing such an outcome at the expense of the temporary curtailment of some lesser rights is entirely justified. 
Under current law, in most states, only the most fundamental of rights, like the right to life, prohibition against torture, slavery, and the right to a fair trial are regarded as inalienable [2] .\n\n[1] For more see the debatabase debate on internet access as a human right.\n\n[2] Article 15 of the European Convention on Human rights: “In time of war or other public emergency threatening the life of the nation any High Contracting Party may take measures derogating from its obligations under this Convention to the extent strictly required by the exigencies of the situation, provided that such measures are not inconsistent with its other obligations under international law.” http://www.hri.org/docs/ECHR50.html\n", "title": "" } ]
arguana
5b776aab45357da037378f8abdd186aa
Even the most liberal FoI regime tends to pander to certain groups in society full disclosure levels that playing field People have many different interests in the accountability of governments; different areas of concern, differing levels of skill in pursuing those interests and so on. They deserve, however, an equal degree of transparency from governments in relation to those decisions that affect them. Relying on a right to access is almost certainly most likely to favour those who already have the greatest access either through their profession, their skills or their social capital. The use of freedom of information requests in those countries where they are available shows this to be the case, as they have overwhelmingly been used by journalists, with a smattering of representation from researchers, other politicians and lawyers and so on. In the UK between 2005 and 2010 the total number registered by all ‘ordinary’ members of the public is just ahead of journalists, the next largest group. The public are overwhelmingly outnumbered by the listed professional groups [i] . Required publication, by contrast, presents an even playing field to all parties. Rather than allowing legislators to determine how and to whom – and for what – they should be accountable, a presumption in favour of publication makes them accountable to all. As a result, it is the only truly effective way of ensuring one of the key aims set out in favour of any freedom of information process. [i] Who Makes FOI Requests? BBC Open Secrets Website. 14 January 2011.
[ { "docid": "7b3bcfa525c738e042848d9dcc690876", "text": "governmental transparency house believes there should be presumption The idea that, presented with a vast mass of frequently complex data, everyone would be able to access, process and act on it in the same way is fantasy. Equally the issue of ‘who guards the guards’ that Proposition raises is a misnomer; exactly the groups mentioned are already those with the primary role of scrutinizing government actions because they have the time, interest and skills to do so. Giving a right to access would give them greater opportunities to continue with that in a way that deluging them with information would not.\n", "title": "" } ]
[ { "docid": "d4f713d94dccc069709e797e465a937a", "text": "governmental transparency house believes there should be presumption Governments have, prima facie, a different relationship with their own citizens than they have with those of other countries. In addition, as with the previous argument, extending the right of access does not, per se, require total access. The approach is also simply impractical as it would require every nation on the planet to take the same approach and to have comparable standards in terms of record keeping and data management. At present most states publish some data but the upper and lower thresholds of what is made public vary between them. To abolish the upper limit (ministerial briefing, security briefings, military contractors, etc.) would require everyone to do it, otherwise it would be deeply unsafe for any one state to act alone. The likelihood of persuading some of the world’s more unsavory or corrupt regimes to play ball seems pretty unlikely. The first of those is improbable, the latter is impossible.\n", "title": "" }, { "docid": "4fea4045c8b6854771a433c1d46fd29a", "text": "governmental transparency house believes there should be presumption It seems unlikely that total publication would save much in the way of time or money. If the data was not indexed in some way it would be absurdly difficult to navigate - and that takes time and money.\n\nThere are advantages to building a delay into systems such as this, if a piece of information genuinely justifies a news story, then it will do so at any time. If it’s only of interest in the middle of a media feeding frenzy, then it seems unlikely that it was all that important.\n", "title": "" }, { "docid": "11d2f7bac64bf74b4df42e19dfe53fa5", "text": "governmental transparency house believes there should be presumption Relying on a right of access would also have addressed the concerns set out by Proposition but would do so in a way that would not endanger actual concerns of national security by allowing citizens the right to challenge such decisions. An independent review could determine where the motivation is genuinely one of national security and those where it is really political expediency. The right to information for citizens is important but should not jeopardize the right to life of combat troops.\n", "title": "" }, { "docid": "232325d4d20cc6e83e9a56d494081b9c", "text": "governmental transparency house believes there should be presumption Although it would be time-consuming to approach so much information, it is not impossible to manage it effectively. As Wikileaks has demonstrated, given access to large quantities of information, it is a relatively straightforward process to start with records that are likely to prove interesting and then follow particular routes from there. In addition, governments, like all organisations, have information management systems, there would be no reason not to use the same model.\n\nAdditionally, the very skill of journalism is going beyond the executive summary to find the embarrassing fact buried away in appendix nineteen. That would still be the case under this model, it would just be easier.\n", "title": "" }, { "docid": "a193d58b0d74ee2c66795b06f88ee150", "text": "governmental transparency house believes there should be presumption There are, of course some costs to having a truly open and accountable government, but an effective right of access would allow much of that information to be made available. 
After all what the public sector bodies are paying in commercial transactions is of great interest to the public. If public bodies are getting a particularly good rate from suppliers, it might well raise the question of “Why?” For example, are they failing to enforce regulations on a particular supplier in return for a good price. In that instance, their other customers and their competitors would seem to have every right to know.\n", "title": "" }, { "docid": "db65e38d3bc772a6d4d1e7dd8071fe5e", "text": "governmental transparency house believes there should be presumption It is frequently useful to see the general approach of a public organisation as reflected in routine discussions. Opposition is wrong to suggest that such information would only cast a light on ideas that were never pursued anyway so they don’t matter. It would also highlight ideas that agencies wanted to pursue but felt they couldn’t because of the likely impact of public opinion, knowing such information gives useful insight into the intentions of the public agency in question.\n", "title": "" }, { "docid": "dee8cac711700d293b9218914332fecb", "text": "governmental transparency house believes there should be presumption Compelling public bodies to publish information ensures that non-citizens, minors, foreign nationals and others have access to information that affects them.\n\nGenuine transparency and accountability of government action is not only in the interests of those who also have the right to vote for that government or who support it through the payment of taxes. The functioning of immigration services would seem to be a prime example. Maximising access to information relating to government decisions by dint of its automatic publication of information relating to those decisions ensures that all those affected will have recourse to the facts behind any decision.\n\nIf, for example, a nation’s aid budget is cut or redirected, why should the citizens of the affected nation not have a right to know why [i] ? If, as is frequently the case, it has happened because of an action or inaction by their own government, then it is important that they know. Equally if such a decision were taken for electoral gain, they at least have the right to know that there is nothing they or their government could do about it.\n\n[i] Publish What You Fund: The Global Campaign For Aid Transparency. Website Introduction.\n", "title": "" }, { "docid": "5374802042af0cfbda4884a42493e865", "text": "governmental transparency house believes there should be presumption If public bodies do not have an obligation to publish information, there will always be a temptation to find any available excuses to avoid transparency.\n\nThe primary advantage of putting the duty on government to publish, rather than on citizens to enquire is that it does not require the citizen to know what they need to know before they know it. Publication en masse allows researchers to investigate areas they think are likely to produce results, specialists to follow decisions relevant to their field and, also, raises the possibility of discovering things by chance. The experience of Wikipedia suggests that even very large quantities of data are relatively easy to mine as long as all the related documentation is available to the researcher – the frustration, by contrast, comes when one has only a single datum with no way of contextualising it. 
Any other situation, at the very least, panders to the interests of government to find any available excuse for not publishing anything that it is likely to find embarrassing and, virtually by definition, would be of most interest to the active citizen.\n\nKnowing that accounts of discussions, records of payments, agreements with commercial bodies or other areas that might be of interest to citizens will be published with no recourse to ‘national security’ or ‘commercial sensitivity’ is likely to prevent abuses before they happen but will certainly ensure that they are discovered after the event [i] .\n\nThe publication of documents, in both Washington and London, relating to the build-up to war in Iraq is a prime example of where both governments used every available excuse to cover up the fact that that the advice they had been given showed that either they were misguided or had been deliberately lying [ii] . A presumption of publication would have prevented either of those from determining a matter of vital interest to the peoples of the UK, the US and, of course, Iraq. All three of those groups would have had access to the information were there a presumption of publication.\n\n[i] The Public’s Right To Know. Article 19 Global Campaign for Freedom of Expression.\n\n[ii] Whatreallyhappened.com has an overview of this an example of how politicians were misguided – wilfully or otherwise can be found in: Defector admits to lies that triggered the Iraq War. Martin Chulov and Helen Pidd. The Guardian. 15 February 2011.\n", "title": "" }, { "docid": "8c4c0fdbffcf784e055898595f30aa52", "text": "governmental transparency house believes there should be presumption A faster, cheaper and simpler process\n\nThere are cost concerned with processing FoI requests both in terms of time and cash terms. [i] To take one example Britain’s largest local authority, Birmingham, spends £800,000 a year dealing with FoI requests. [ii] There is also a delay from the point of view of the applicant. Such a delay is more than an irritant in the case of, for example, immigration appeals or journalistic investigations. Governments know that journalists usually have to operate within a window of time while a story is still ‘hot’. As a result all they have to do is wait it out until the attention of the media turns elsewhere to ensure that if evidence of misconduct or culpability were found, it would probably be buried as a minor story if not lost altogether. As journalism remains the primary method most societies have of holding government to account, it doesn’t seem unreasonable that the methodology for releasing data should, at least in part, reflect the reality of how journalism works as an industry.\n\n[i] Independent Review of the Impact of the Freedom of Information Act. Frontier Economics. October 2006.\n\n[ii] Dunton, Jim, ‘Cost of FoI requests rises to £34m’, Local Government Chronicle, 16 September 2010, http://www.lgcplus.com/briefings/corporate-core/legal/cost-of-foi-requests-rises-to-34m/5019109.article\n", "title": "" }, { "docid": "9d7a80e90b11471fe5dc3a768893fe57", "text": "governmental transparency house believes there should be presumption Public bodies require the ability to discuss proposals freely away from public scrutiny\n\nKnowing that everything is likely to be recorded and then published is likely to be counter-productive. 
It seems probable that anything sensitive – such as advice given to ministers by senior officials – would either not be recorded or it would be done in a way so opaque as to make it effectively meaningless [i] .\n\nBy contrast knowing that such conversations, to focus on one particularly example, are recorded and can be subjected to public scrutiny when there is a proven need to do so ensures that genuine accountability – rather than prurience or curiosity, is likely to be both the goal and the outcome.\n\nNone of us would like the process of how we reached decisions made public as it often involves getting things wrong a few times first. However, there are some instances where it is important to know how a particular decision was reached and whether those responsible for that decision were aware of certain facts at the time – notably when public figures are claiming that they were not aware of something and others are insisting that they were. In such an instance the right to access is useful and relevant; having records of every brainstorming session in every public body is not. As the Leveson inquiry is discovering, an extraordinary amount of decisions in government seem to be made informally, by text message or chats at parties. Presumably that would become evermore the case if every formal discussion were to be published [ii] .\n\n[i] The Pitfalls of Britain’s Confidential Civil Service. Samuel Brittan. Financial Time 5 March 2010.\n\n[ii] This is nothing very new, see: Downing Street: Informal Style. BBC website. 14 July 2004.\n", "title": "" }, { "docid": "80e542c82e023c64f73b6a865739240e", "text": "governmental transparency house believes there should be presumption Considering the amount of data governments produce, compelling them to publish all of it would be counterproductive as citizens would be swamped.\n\nIt is a misnomer in many things that more is necessarily better but that is, perhaps, more true of information than of most things. Public bodies produce vast quantities of data and are often have a greater tendency to maintain copious records than their private sector equivalents. US government agencies will create data that would require “20 million four-drawer filing cabinets filled with text,” over the next two years. [i] Simply dumping this en masse would be a fairly effective way of masking any information that a public body wanted kept hidden. Deliberately poor referencing would achieve the same result. This ‘burying’ of bad news at a time when everyone is looking somewhere else is one of the oldest tricks in press management. For example Jo Moore, an aide to then Transport Secretary Stephen Byers suggested that September 11 2001 was “a very good day to get out anything we want to bury.” Suggesting burying a u turn on councillors’ expenses. [ii]\n\nFor it to genuinely help with the transparency and accountability of public agencies it would require inordinately detailed and precise cataloguing and indexing – a process that would be likely to be both time consuming and expensive. The choice would, therefore, be between a mostly useless set of data that would require complex mining by those citizens who were keen to use it or the great expense of effectively cataloguing it in advance. 
Even this latter option would defeat the objective of greater accountability because whoever had responsibility for the cataloguing would have far greater control of what would be likely to come to light.\n\nInstead ensuring a right of access for citizens ensures that they can have a reasonable access to exactly the piece of information they are seeking [iii] .\n\n[i] Eddy, Nathan, ‘Big Data Still a Big Challenge for Government IT’, eweek, 8th May 2012, http://www.eweek.com/c/a/Government-IT/Big-Data-Still-a-Big-Challenge-fo...\n\n[ii] Sparrow, Andrew, ‘September 11: ‘a good day to bury bad news’’, The Telegraph, 10 October 2001, http://www.telegraph.co.uk/news/uknews/1358985/Sept-11-a-good-day-to-bury-bad-news.html\n\n[iii] Freedom of Information as an Internationally Protected Human Right. Toby Mendel, Head of Law at Article 19.\n", "title": "" }, { "docid": "36e797eb873255c50c67625bc900fb12", "text": "governmental transparency house believes there should be presumption It is reasonable that people have access to information that effects them personally but not information that relates to their neighbours’, employers’, former-partners’ or other citizens who maythose who work for public bodies.\n\nThe right to access allows people to see information that affects them personally or where there is reasonable suspicion of harm or nefarious practices. It doesn’t allow them to invade the privacy of other citizens who just happen to work for public bodies or have some other association [i] .\n\nUnless there is reason to suspect corruption, why should law-abiding citizens who sell goods and services to public bodies have the full details of their negotiations made public for their other buyers, who may have got a worse deal, to see? Why should the memo sent by an otherwise competent official on a bad day be made available for her neighbours to read over? A presumption in favour of publication would ensure that all of these things, and others, would be made a reality with the force of law behind them.\n\nThis would place additional burdens on government in terms of recruitment and negotiations with private firms – not to mention negotiations with other governments with less transparent systems. Let’s assume for the moment that the British government introduced a system, it is quite easy imagine a sense of “For God’s sake don’t tell the British” spreading around the capitals of the world fairly quickly.\n\n[i] Section 40 0(A) od the FOIA. See also Freedom of Information Act Environmental Information Regulations. When Should Salaries be Disclosed? Information Commissioner’s Office.\n", "title": "" } ]
arguana
3c9285980203cb19289cae2bc9166838
It is reasonable that people have access to information that affects them personally but not information that relates to their neighbours’, employers’, former-partners’ or other citizens who work for public bodies. The right to access allows people to see information that affects them personally or where there is reasonable suspicion of harm or nefarious practices. It doesn’t allow them to invade the privacy of other citizens who just happen to work for public bodies or have some other association [i] . Unless there is reason to suspect corruption, why should law-abiding citizens who sell goods and services to public bodies have the full details of their negotiations made public for their other buyers, who may have got a worse deal, to see? Why should the memo sent by an otherwise competent official on a bad day be made available for her neighbours to read over? A presumption in favour of publication would ensure that all of these things, and others, would be made a reality with the force of law behind them. This would place additional burdens on government in terms of recruitment and negotiations with private firms – not to mention negotiations with other governments with less transparent systems. Let’s assume for the moment that the British government introduced such a system; it is quite easy to imagine a sense of “For God’s sake don’t tell the British” spreading around the capitals of the world fairly quickly. [i] Section 40 0(A) of the FOIA. See also Freedom of Information Act Environmental Information Regulations. When Should Salaries be Disclosed? Information Commissioner’s Office.
[ { "docid": "a193d58b0d74ee2c66795b06f88ee150", "text": "governmental transparency house believes there should be presumption There are, of course some costs to having a truly open and accountable government, but an effective right of access would allow much of that information to be made available. After all what the public sector bodies are paying in commercial transactions is of great interest to the public. If public bodies are getting a particularly good rate from suppliers, it might well raise the question of “Why?” For example, are they failing to enforce regulations on a particular supplier in return for a good price. In that instance, their other customers and their competitors would seem to have every right to know.\n", "title": "" } ]
[ { "docid": "232325d4d20cc6e83e9a56d494081b9c", "text": "governmental transparency house believes there should be presumption Although it would be time-consuming to approach so much information, it is not impossible to manage it effectively. As Wikileaks has demonstrated, given access to large quantities of information, it is a relatively straightforward process to start with records that are likely to prove interesting and then follow particular routes from there. In addition, governments, like all organisations, have information management systems, there would be no reason not to use the same model.\n\nAdditionally, the very skill of journalism is going beyond the executive summary to find the embarrassing fact buried away in appendix nineteen. That would still be the case under this model, it would just be easier.\n", "title": "" }, { "docid": "db65e38d3bc772a6d4d1e7dd8071fe5e", "text": "governmental transparency house believes there should be presumption It is frequently useful to see the general approach of a public organisation as reflected in routine discussions. Opposition is wrong to suggest that such information would only cast a light on ideas that were never pursued anyway so they don’t matter. It would also highlight ideas that agencies wanted to pursue but felt they couldn’t because of the likely impact of public opinion, knowing such information gives useful insight into the intentions of the public agency in question.\n", "title": "" }, { "docid": "d4f713d94dccc069709e797e465a937a", "text": "governmental transparency house believes there should be presumption Governments have, prima facie, a different relationship with their own citizens than they have with those of other countries. In addition, as with the previous argument, extending the right of access does not, per se, require total access. The approach is also simply impractical as it would require every nation on the planet to take the same approach and to have comparable standards in terms of record keeping and data management. At present most states publish some data but the upper and lower thresholds of what is made public vary between them. To abolish the upper limit (ministerial briefing, security briefings, military contractors, etc.) would require everyone to do it, otherwise it would be deeply unsafe for any one state to act alone. The likelihood of persuading some of the world’s more unsavory or corrupt regimes to play ball seems pretty unlikely. The first of those is improbable, the latter is impossible.\n", "title": "" }, { "docid": "4fea4045c8b6854771a433c1d46fd29a", "text": "governmental transparency house believes there should be presumption It seems unlikely that total publication would save much in the way of time or money. If the data was not indexed in some way it would be absurdly difficult to navigate - and that takes time and money.\n\nThere are advantages to building a delay into systems such as this, if a piece of information genuinely justifies a news story, then it will do so at any time. If it’s only of interest in the middle of a media feeding frenzy, then it seems unlikely that it was all that important.\n", "title": "" }, { "docid": "7b3bcfa525c738e042848d9dcc690876", "text": "governmental transparency house believes there should be presumption The idea that, presented with a vast mass of frequently complex data, everyone would be able to access, process and act on it in the same way is fantasy. 
Equally the issue of ‘who guards the guards’ that Proposition raises is a misnomer; exactly the groups mentioned are already those with the primary role of scrutinizing government actions because they have the time, interest and skills to do so. Giving a right to access would give them greater opportunities to continue with that in a way that deluging them with information would not.\n", "title": "" }, { "docid": "11d2f7bac64bf74b4df42e19dfe53fa5", "text": "governmental transparency house believes there should be presumption Relying on a right of access would also have addressed the concerns set out by Proposition but would do so in a way that would not endanger actual concerns of national security by allowing citizens the right to challenge such decisions. An independent review could determine where the motivation is genuinely one of national security and those where it is really political expediency. The right to information for citizens is important but should not jeopardize the right to life of combat troops.\n", "title": "" }, { "docid": "9d7a80e90b11471fe5dc3a768893fe57", "text": "governmental transparency house believes there should be presumption Public bodies require the ability to discuss proposals freely away from public scrutiny\n\nKnowing that everything is likely to be recorded and then published is likely to be counter-productive. It seems probable that anything sensitive – such as advice given to ministers by senior officials – would either not be recorded or it would be done in a way so opaque as to make it effectively meaningless [i] .\n\nBy contrast knowing that such conversations, to focus on one particularly example, are recorded and can be subjected to public scrutiny when there is a proven need to do so ensures that genuine accountability – rather than prurience or curiosity, is likely to be both the goal and the outcome.\n\nNone of us would like the process of how we reached decisions made public as it often involves getting things wrong a few times first. However, there are some instances where it is important to know how a particular decision was reached and whether those responsible for that decision were aware of certain facts at the time – notably when public figures are claiming that they were not aware of something and others are insisting that they were. In such an instance the right to access is useful and relevant; having records of every brainstorming session in every public body is not. As the Leveson inquiry is discovering, an extraordinary amount of decisions in government seem to be made informally, by text message or chats at parties. Presumably that would become evermore the case if every formal discussion were to be published [ii] .\n\n[i] The Pitfalls of Britain’s Confidential Civil Service. Samuel Brittan. Financial Time 5 March 2010.\n\n[ii] This is nothing very new, see: Downing Street: Informal Style. BBC website. 14 July 2004.\n", "title": "" }, { "docid": "80e542c82e023c64f73b6a865739240e", "text": "governmental transparency house believes there should be presumption Considering the amount of data governments produce, compelling them to publish all of it would be counterproductive as citizens would be swamped.\n\nIt is a misnomer in many things that more is necessarily better but that is, perhaps, more true of information than of most things. Public bodies produce vast quantities of data and are often have a greater tendency to maintain copious records than their private sector equivalents. 
US government agencies will create data that would require “20 million four-drawer filing cabinets filled with text,” over the next two years. [i] Simply dumping this en masse would be a fairly effective way of masking any information that a public body wanted kept hidden. Deliberately poor referencing would achieve the same result. This ‘burying’ of bad news at a time when everyone is looking somewhere else is one of the oldest tricks in press management. For example Jo Moore, an aide to then Transport Secretary Stephen Byers suggested that September 11 2001 was “a very good day to get out anything we want to bury.” Suggesting burying a u turn on councillors’ expenses. [ii]\n\nFor it to genuinely help with the transparency and accountability of public agencies it would require inordinately detailed and precise cataloguing and indexing – a process that would be likely to be both time consuming and expensive. The choice would, therefore, be between a mostly useless set of data that would require complex mining by those citizens who were keen to use it or the great expense of effectively cataloguing it in advance. Even this latter option would defeat the objective of greater accountability because whoever had responsibility for the cataloguing would have far greater control of what would be likely to come to light.\n\nInstead ensuring a right of access for citizens ensures that they can have a reasonable access to exactly the piece of information they are seeking [iii] .\n\n[i] Eddy, Nathan, ‘Big Data Still a Big Challenge for Government IT’, eweek, 8th May 2012, http://www.eweek.com/c/a/Government-IT/Big-Data-Still-a-Big-Challenge-fo...\n\n[ii] Sparrow, Andrew, ‘September 11: ‘a good day to bury bad news’’, The Telegraph, 10 October 2001, http://www.telegraph.co.uk/news/uknews/1358985/Sept-11-a-good-day-to-bury-bad-news.html\n\n[iii] Freedom of Information as an Internationally Protected Human Right. Toby Mendel, Head of Law at Article 19.\n", "title": "" }, { "docid": "dee8cac711700d293b9218914332fecb", "text": "governmental transparency house believes there should be presumption Compelling public bodies to publish information ensures that non-citizens, minors, foreign nationals and others have access to information that affects them.\n\nGenuine transparency and accountability of government action is not only in the interests of those who also have the right to vote for that government or who support it through the payment of taxes. The functioning of immigration services would seem to be a prime example. Maximising access to information relating to government decisions by dint of its automatic publication of information relating to those decisions ensures that all those affected will have recourse to the facts behind any decision.\n\nIf, for example, a nation’s aid budget is cut or redirected, why should the citizens of the affected nation not have a right to know why [i] ? If, as is frequently the case, it has happened because of an action or inaction by their own government, then it is important that they know. Equally if such a decision were taken for electoral gain, they at least have the right to know that there is nothing they or their government could do about it.\n\n[i] Publish What You Fund: The Global Campaign For Aid Transparency. 
Website Introduction.\n", "title": "" }, { "docid": "49a5860842c98055000dd5751d43f596", "text": "governmental transparency house believes there should be presumption Even the most liberal FoI regime tends to pander to certain groups in society full disclosure levels that playing field\n\nPeople have many different interests in the accountability of governments; different areas of concern, differing levels of skill in pursuing those interests and so on. They deserve, however, an equal degree of transparency from governments in relation to those decisions that affect them. Relying on a right to access is almost certainly most likely to favour those who already have the greatest access either through their profession, their skills or their social capital. The use of freedom of information requests in those countries where they are available shows this to be the case, as they have overwhelmingly been used by journalists, with a smattering of representation from researchers, other politicians and lawyers and so on. In the UK between 2005 and 2010 the total number registered by all ‘ordinary’ members of the public is just ahead of journalists, the next largest group. The public are overwhelmingly outnumbered by the listed professional groups [i] .\n\nRequired publication, by contrast, presents an even playing field to all parties. Rather than allowing legislators to determine how and to whom – and for what – they should be accountable, a presumption in favour of publication makes them accountable to all. As a result, it is the only truly effective way of ensuring one of the key aims set out in favour of any freedom of information process.\n\n[i] Who Makes FOI Requests? BBC Open Secrets Website. 14 January 2011.\n", "title": "" }, { "docid": "5374802042af0cfbda4884a42493e865", "text": "governmental transparency house believes there should be presumption If public bodies do not have an obligation to publish information, there will always be a temptation to find any available excuses to avoid transparency.\n\nThe primary advantage of putting the duty on government to publish, rather than on citizens to enquire is that it does not require the citizen to know what they need to know before they know it. Publication en masse allows researchers to investigate areas they think are likely to produce results, specialists to follow decisions relevant to their field and, also, raises the possibility of discovering things by chance. The experience of Wikipedia suggests that even very large quantities of data are relatively easy to mine as long as all the related documentation is available to the researcher – the frustration, by contrast, comes when one has only a single datum with no way of contextualising it. 
Any other situation, at the very least, panders to the interests of government to find any available excuse for not publishing anything that it is likely to find embarrassing and, virtually by definition, would be of most interest to the active citizen.\n\nKnowing that accounts of discussions, records of payments, agreements with commercial bodies or other areas that might be of interest to citizens will be published with no recourse to ‘national security’ or ‘commercial sensitivity’ is likely to prevent abuses before they happen but will certainly ensure that they are discovered after the event [i] .\n\nThe publication of documents, in both Washington and London, relating to the build-up to war in Iraq is a prime example of where both governments used every available excuse to cover up the fact that that the advice they had been given showed that either they were misguided or had been deliberately lying [ii] . A presumption of publication would have prevented either of those from determining a matter of vital interest to the peoples of the UK, the US and, of course, Iraq. All three of those groups would have had access to the information were there a presumption of publication.\n\n[i] The Public’s Right To Know. Article 19 Global Campaign for Freedom of Expression.\n\n[ii] Whatreallyhappened.com has an overview of this an example of how politicians were misguided – wilfully or otherwise can be found in: Defector admits to lies that triggered the Iraq War. Martin Chulov and Helen Pidd. The Guardian. 15 February 2011.\n", "title": "" }, { "docid": "8c4c0fdbffcf784e055898595f30aa52", "text": "governmental transparency house believes there should be presumption A faster, cheaper and simpler process\n\nThere are cost concerned with processing FoI requests both in terms of time and cash terms. [i] To take one example Britain’s largest local authority, Birmingham, spends £800,000 a year dealing with FoI requests. [ii] There is also a delay from the point of view of the applicant. Such a delay is more than an irritant in the case of, for example, immigration appeals or journalistic investigations. Governments know that journalists usually have to operate within a window of time while a story is still ‘hot’. As a result all they have to do is wait it out until the attention of the media turns elsewhere to ensure that if evidence of misconduct or culpability were found, it would probably be buried as a minor story if not lost altogether. As journalism remains the primary method most societies have of holding government to account, it doesn’t seem unreasonable that the methodology for releasing data should, at least in part, reflect the reality of how journalism works as an industry.\n\n[i] Independent Review of the Impact of the Freedom of Information Act. Frontier Economics. October 2006.\n\n[ii] Dunton, Jim, ‘Cost of FoI requests rises to £34m’, Local Government Chronicle, 16 September 2010, http://www.lgcplus.com/briefings/corporate-core/legal/cost-of-foi-requests-rises-to-34m/5019109.article\n", "title": "" } ]
arguana
16fb056388e91a06f708ec10237dcf51
Translation gives students access to valuable information, allowing them to develop their human capital and become academically and economically competitive. The ability to access the wealth of knowledge being generated in the developed world would greatly improve the ability of students and budding academics in the developing world to develop their human capital and keep abreast of the most recent developments in the various fields of academic research. Lag is a serious problem in an academic world where the knowledge base is constantly developing and expanding. In many of the sciences, particularly those focused on high technology, information rapidly becomes obsolete as new developments supplant the old. The lag that occurs because developing countries' academics and professionals cannot readily access this new information results in their always being behind the curve. [1] Coupled with the fact that they possess fewer resources than their developed world counterparts, developing world institutions are locked in a constant game of catch-up they have found difficult, if not impossible, to break free of. By subsidizing this translation effort, students in these countries are able to learn with the most up-to-date information, academics are able to work with and build upon the most relevant areas of research, and professionals can keep up with the curve of knowledge to remain competitive in an ever more global marketplace. An example of what can happen to a country cut off from the global stream of knowledge can be found in the Soviet Union. For decades Soviet academics were cut off from the rest of the world, and the result was a significant stunting of their academic development. [2] This translation would be a major boon for all the academic and professional bodies in developing countries. [1] Hide, W., ‘I Can No Longer Work for a System that Puts Profit Over Access to Research’, The Guardian. 2012. http://www.guardian.co.uk/science/blog/2012/may/16/system-profit-access-research [2] Shuster, S. “Putin’s PhD: Can a Plagiarism Probe Upend Russian Politics?”. Time. 28 February 2013, http://world.time.com/2013/02/28/putins-phd-can-a-plagiarism-probe-upend-russian-politics/
[ { "docid": "4c846cdea0bf59e97a545be2d0ab3d84", "text": "ity digital freedoms access knowledge house would subsidise translation While the world is globalizing, it is still in the interest of states to retain their relative competitive advantages. After all, the first duty of a state is to its own citizens. By translating these works and offering them to academics, students, and professionals, the developed world serves to erode one of its only advantages over the cheaper labour and industrial production markets of the developing world. The developed world relies on its advantage in technology particularly to maintain its position in the world and to have a competitive edge. Giving that edge up, which giving access to their information more readily does, is to increase the pace at which the developed world will be outmatched.\n", "title": "" } ]
[ { "docid": "0c49889ce8ba7f23e6ff1dfea5ef7595", "text": "ity digital freedoms access knowledge house would subsidise translation This translation effort does not pave the future with gold. Intellectual property law still persists and these countries would still be forced to deal with the technologies' originators in the developed world. By instead striving to engage on an even footing without special provisions and charity of translation, developing countries' academics can more effectively win the respect and cooperation of their developed world counterparts. In so doing they gain greater access to, and participation in, the developments of the more technologically advanced countries. They should strive to do so as equals, not supplicants.\n", "title": "" }, { "docid": "1b1b7d55f5bc79eb8a1818c31b02aae1", "text": "ity digital freedoms access knowledge house would subsidise translation Translating academic work for the developed world will not succeed in creating a dialogue between developed and developing world because the effort is inherently unidirectional. The developing world academics will be able to use the translated work, but will lack the ability to respond in a way that could be readily understood or accepted by their developed world counterparts. The only way to become a truly respected academic community is to engage with the global academic world on an even footing, even if that means devoting more resources to learning the dominant global academic languages, particularly English. This is what is currently happening and is what should be the trend for the future. [1] So long as they rely on subsidized work, the academics of the developing world remain subject and subordinate to those of the developed world.\n\n[1] Meneghini, Rogerio, and Packer, Abel L., ‘Is there science beyond English? Initiatives to increase the quality and visibility of non-English publications might help to break down language barriers in scientific communication’, EMBO Report, February 2007, Vol.8 No.2, pp.112-116, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1796769/\n", "title": "" }, { "docid": "956c98f5c9ec565db9ece90cf7f6f3a8", "text": "ity digital freedoms access knowledge house would subsidise translation If it is true that people cannot easily get jobs in the developed world for lack of language skills then there will surely still be a pressure to learn the language or languages of international discourse. What this policy offers is access by a much wider audience to the various benefits that expanded academic knowledge can offer. It will expand the developing world's knowledge base and not in any way diminish the desire to learn English and other dominant languages. It should be remembered that it is not just academics that use academic papers; students do as well, as do professionals in everyday life. Clearly there cannot be an expectation that everyone learns English to be able to access research. While there may be fewer languages in academic use there is not such a narrowing of language for everyone else.\n", "title": "" }, { "docid": "16efcd00866bb8d1e7950c0d22bc72c9", "text": "ity digital freedoms access knowledge house would subsidise translation In the status quo there is already some translation, due largely to current demands and academic relationships. Even if translation of all academic work the world over could not be translated into every conceivable language, expanding the number of articles and number of languages is certainly a good thing. 
While cost will limit the extent of the policy, it is still worth pursuing to further open the world of academic discourse.\n", "title": "" }, { "docid": "26ac54ae6e3e0873301df13a93158c4d", "text": "ity digital freedoms access knowledge house would subsidise translation Wealthy states do feel an obligation to less fortunate countries, as is demonstrated through their frequent use of aid and loans to poorer governments. This is a way to help countries stop being dependent on aid and hand-outs and instead develop their own human capital and livelihood by being able to engage with the cutting edge of technology and research.\n", "title": "" }, { "docid": "4fbddd53be7724179ec85ac7b682cb04", "text": "ity digital freedoms access knowledge house would subsidise translation Translation expands the knowledge base of citizens to help solve local problems\n\nIt is often the case that science and technology produced in the developed world finds its greatest application in the developing world. Sometimes new developments are meant for such use, as was the case with Norman Borlaug's engineering of dwarf wheat in order to end the Indian food crisis. Other times it is serendipitous, as academic work not meant of practical use, or tools that could not be best applied in developed world economies find ready application elsewhere, as citizens of the developing world turn the technologies to their needs. [1] By translating academic journals into the languages of developing countries, academics and governments can open a gold mine of ideas and innovation. The developing world still mostly lacks the infrastructure for large scale research and relies heavily on research produced in the developed world for its sustenance. Having access to the body of academic literature makes these countries less dependent on the academic mainstream, or to the few who can translate the work themselves. Having access to this research allows developing countries to study work done in the developed world and look at how the advances may be applicable to them. The more people are able to engage in this study the more likely it is that other uses for the research will be found.\n\n[1] Global Health Innovation Blog. ‘The East Meets West Foundation: Expanding Organizational Capacity”. Stanford Graduate School of Business. 18 October 2012, http://stanfordglobalhealth.com/2012/10/18/the-east-meets-west-foundation-expanding-organizational-capacity/\n", "title": "" }, { "docid": "1c87c3daff7292c3926b96a3a55d8958", "text": "ity digital freedoms access knowledge house would subsidise translation Translation allows greater participation by academics in global academia and global marketplace of ideas\n\nCommunication in academia is necessary to effectively engage with the work of their colleagues elsewhere in the world, and in sciences in particular there has become a lingua franca in English. [1] Any academic without the language is at a severe disadvantage. Institutions and governments of the Global North have the resources and wherewithal to translate any research that might strike their fancy. The same is not true for states and universities in the Global South which have far more limited financial and human capital resources. By subsidizing the translation of academic literature into the languages of developing countries the developed world can expand the reach and impact of its institutions' research. Enabling access to all the best academic research in multiple languages will mean greater cross-pollination of ideas and knowledge. 
Newton is supposed to have said we “stand upon the shoulders of giants” as all ideas are ultimately built upon a foundation of past work. [2] Language is often a barrier to understanding so translation helps to broaden the shoulders upon which academics stand.\n\nBy subsidizing the publication of their work into other significant languages, institutions can have a powerful impact on improving their own reputation and academic impact. Academic rankings such as the rankings by Shanghai Jiao Tong University, [3] and the Times Higher Education magazine [4] include research and paper citations as part of the criteria. Just as importantly it opens the door to an improved free flowing dialogue between academics around the world. This is particularly important today as the developing world becomes a centre of economic and scientific development. [5] This translation project will serve to aid in the development of relations between research institutes, such as in the case of American institutions developing partnerships with Chinese and Indian universities.\n\n[1] Meneghini, Rogerio, and Packer, Abel L., ‘Is there science beyond English? Initiatives to increase the quality and visibility of non-English publications might help to break down language barriers in scientific communication’, EMBO Report, February 2007, Vol.8 No.2, pp.112-116, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1796769/\n\n[2] Yong, Ed, ‘Why humans stand on giant shoulders, but chimps and monkeys don’t’, Discover, 1 March 2012, http://blogs.discovermagazine.com/notrocketscience/2012/03/01/why-humans-stand-on-giant-shoulders-but-chimps-and-monkeys-dont/#.UaYm_7XVB8E\n\n[3] ‘Ranking Methodology’, Academic Ranking of World Universities, 2012, http://www.shanghairanking.com/ARWU-Methodology-2012.html\n\n[4] Baty, Phil, ‘World University Rankings subject tables: Robust, transparent and sophisticated’, Times Higher Education, 16 September 2010, http://www.timeshighereducation.co.uk/world-university-rankings/2010-11/world-ranking/analysis/methodology\n\n[5] ‘Science and Engineering Indicators, 2012’. National Science Foundation. 2012, http://www.nsf.gov/statistics/seind12/c5/c5h.htm\n", "title": "" }, { "docid": "2ca0142ea8d81268f61339a60767249c", "text": "ity digital freedoms access knowledge house would subsidise translation The West has no particular obligation to undergo such a sweeping policy\n\nGovernments and academic institutions have no special duty to give full access to all information that they generate and publish in academic journals to anyone who might want it. If they want to make their research public that is their prerogative, but it does not follow that they should then be expected to translate that work into an endless stream of different languages. If there is a desire by governments and institutions to aid in the academic development of the developing world, there are other ways to go about it than indiscriminately publishing their results and research into developing world languages. Taking on promising students through scholarships, or developing strategic partnerships with institutions in the global south are more targeted, less piecemeal means of sharing the body of global knowledge for example the National Institute of Environmental Health Sciences funds junior scientists from the developing world working in their labs. 
[1] States owe their first duty to their own citizens, and when the research they produce is not only made available to citizens of other countries but translated at some expense, they are not serving that duty well. It will prove to be a fairly ineffective education policy.\n\n[1] ‘Building Research Capacity in Developing Nations’, Environmental Health Perspectives, Vol 114, No. 10, October 2006, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1626416/\n", "title": "" }, { "docid": "a1253927c30c0be8e596c6ee039274bc", "text": "ity digital freedoms access knowledge house would subsidise translation It is better to have fewer languages in common use in global academic and economic interrelations\n\nA proliferation of languages in academia will serve to fracture the interrelations of academics, not unify them. As more and more academics and innovators interested in new academic developments find it possible to obtain information wholly in their native languages, then the impetus toward unification in a primary language of academia and commerce will be slowed or entirely thwarted. Through history there have been movements toward this sort of linguistic unity, because it reduces the physical and temporal costs of information exchange; for example scholars throughout Early Modern Europe communicated in Latin. [1] This policy serves only to dampen this movement, which will, even if helpful to people in the short-run, serve to limit the capacity of developing world academics to engage with the developed world. Today English has become the definitive language of both international academic discourse and commerce. In France for example, a country known for its protective stance towards its language, journals have been changing to publishing in English rather than French; the journal Research in Virology changed in 1989 as almost 100% of their articles were submitted in English compared to only 15% in 1973. [2]\n\nThe trend towards one language is a positive one, because it has meant more movers and shakers in various countries have all been able to better and more quickly understand one another's desires and actions leading to more profitable and peaceful outcomes generally. [3] Also important is the fact that while academics and other interested parties in the developing world may be able to grapple with academic work more effectively once translated for them, they now have a greater disadvantage due to the enervating effects this translation produces. Without the positive impetus to learn the major language or languages of international discourse, developing world academics will never be able to get posts and lectureships at institutions in the developed world, or to take part in joint research in real time. The convergence of language ultimately serves to promote common understanding, which means people from the developing world can more effectively move between their home country and others. It also helps build a common lexicon of terms that will be more robust for international use, as opposed to translations, which are often imperfect due to divergences of linguistic concepts and thus susceptible to mistake.\n\n[1] Koenigsberger, H. G., Mosse, George L., and Bowler, G. 
Q., Europe in the Sixteenth Century, London, 2nd Edn, 1989, p.377\n\n[2] Garfield, Eugene, ‘The English Language: The Lingua Franca Of International Science’, The Ceisntist, 15 May 1989, http://www.the-scientist.com/?articles.view/articleNo/10374/title/The-English-Language--The-Lingua-Franca-Of-International-Science/\n\n[3] Bakopoulos, D. ‘English as Universal Academic Language: Good or Bad?’. The University Record, 1997, Available: http://www.ur.umich.edu/9697/Jan28_97/artcl18.htm\n", "title": "" }, { "docid": "d764921b59e71f2071d62c9b43679996", "text": "ity digital freedoms access knowledge house would subsidise translation It is prohibitively expensive to translate everything and difficult to prioritize what to translate\n\nUltimately any policy of translation of academic work must rely on a degree of prioritization on the part of the translators since there is no way that all academic work of any kind could be translated into other major languages, let alone into all the multitude of languages extant in the world today. In 2009, for example, the number of published research papers on science and technology exceeded 700,000. [1] That is a gigantic amount of research. Translating all of these articles seems to be an obvious waste of time and resources for any government or institution to pursue and increasingly so when one considers the more than 30,000 languages in current use today. Translations today currently exist for articles and research that is considered useful. Any blanket policy is infeasible. The end result will be only a small number of articles translated into a finite number of languages. This is the status quo. Expanding it only serves to further confuse the academic community and to divert useful energies away from positive research to the quixotic task of translation.\n\n[1] ‘Science and Engineering Indicators, 2012’. National Science Foundation. 2012, http://www.nsf.gov/statistics/seind12/c5/c5h.htm\n", "title": "" } ]
arguana
a152ce39189a3f2792d234edfa9d6c13
Overlong copyright protection stifles the creativity and saps the time of artists. In some instances, when artists achieve success, they face the enervating impulse that their achievement brings. They become satisfied and complacent with what they have, robbing them of their demiurgic drive. Worse, and more frequently, successful artists become embroiled in defending their work from pirates, downloaders, and other denizens of the internet. The result is artists wasting time in court, fighting lawsuits that sap them of time to actually focus on creating new works. Artists should be incentivized to look forward, not spend their time clinging to what they have already made. Obviously, they have a right to profit from their work to an extent, which is why a certain, reduced length of copyright is still important. But clearly the current length is far too great, as artists retain their copyright until their death and for many years after. Moreover, once the artist has died, it is difficult to see how copyright can be considered to be enhancing or even rewarding creativity; it simply becomes a negative weight on others' creativity.
[ { "docid": "8a72389f64db15b09f7b1968cadd3e9e", "text": "intellectual property house would cut length copyright protection The artistic drive to create is rarely stifled by having been successful. Individuals deserve to profit from their success and to retain control of what they create in their lifetime, as much as the founder of a company deserves to own what he or she creates until actively deciding to part with it. However, even patents, novel creations in themselves, have far less protection than copyright. While most patents offer protection for a total of twenty years, copyright extends far beyond the life of its creator, a gross overstretch of the right of use. [1]\n\n[1] Posner, Richard A., “Patent Trolls Be Gone”, Slate, 15 October 2012, http://www.slate.com/articles/news_and_politics/view_from_chicago/2012/10/patent_protection_how_to_fix_it.html\n", "title": "" } ]
[ { "docid": "6db314884c8666b59bc590264fbfb18b", "text": "intellectual property house would cut length copyright protection Inefficient or not, artists should have the right to retain control of their creations. Even if they are not making any money out of it, they still have the right, and often the desire, to maintain control of the way their art is used. If artists do not desire such control, they can opt to release their works into the public domain, while allowing those who do not wish to do so to protect their work.\n", "title": "" }, { "docid": "87322516994169134959d016dfe6f5c9", "text": "intellectual property house would cut length copyright protection While there is value in other artists exploring their own creativity by means of others’ work, it does not give them an overriding right. Rather, artists should have a meaningful control over how their art is disseminated and viewed in the world, as it is ultimately their creation. Furthermore, the protections copyright affords means that the responses that do arise must be more creative and novel in and of themselves, and not simply hackneyed riffing on existing work. This helps to benefit the arts by ensuring that there is regular innovation and change.\n", "title": "" }, { "docid": "4d28053593a68c3296a19a677beb45b6", "text": "intellectual property house would cut length copyright protection The problems associated with “orphan works” can be sorted out separate from limiting copyright length. It simply demands a closer attention from executors and legal professionals to sort these issues out. In terms of availability, it must be up to the artist to release the work as he or she sees fit. Encouraging artists and their successors to release their works into the public domain could go a long way to solving this problem without recourse to adulterating existing protections.\n", "title": "" }, { "docid": "1f6743d02deb8cff96c272b91d9a22d6", "text": "intellectual property house would cut length copyright protection Copyright would still exist, and the artist is able to profit from it, even if the length of copyright is reduced. People deserve recompense, but the stifling force of current laws make for negative outcomes. It would be better to strike a more appropriate balance, allowing artists to profit while they can, which in practice is only during the first few years after their work’s release, and at the same time allowing the art to reach the public sphere and to interact with it in fuller fashion.\n", "title": "" }, { "docid": "0916667f4f8ab4757577524ad7c71161", "text": "intellectual property house would cut length copyright protection Artists generally desire to create, and will do so whether there is financial incentive or not. Besides, many artists live and die in relative poverty, [1] yet their experience seems to not have put off people from pursuing art as a profession and passion. 
The loss of a few marginal cases must be weighed against the massive losses to art in general, such as the huge curtailment of exploration of and response to existing works, which are often artistically meritorious in their own right, and also the rendering unavailable of much of the artistic output of the world.\n\n[1] The Economist, “Art for money’s sake”, 27 May 2004, http://www.economist.com/node/2714114\n", "title": "" }, { "docid": "723feb4a1aeb234fd77f331664536f03", "text": "intellectual property house would cut length copyright protection The vast majority of artistic output results in having little lifelong, let alone postmortem economic value. Most artists glean all they are going to get out of their art within a couple years of its production, and the idea that it will sustain their families is silly. In the small number of cases of phenomenally successful artists, they usually make enough to sustain themselves and family, but even still, the benefits accrued to outliers should not be sufficient reason to significantly slow the pace of artistic progress and cross-pollination of ideas. Besides, in any other situation in which wealth is bequeathed, that money must have been earned already. Copyright is a bizarre construct that allows for the passing on of the right to accrue future wealth.\n", "title": "" }, { "docid": "9bf7b47c88c5d61ae6ccb831df70b137", "text": "intellectual property house would cut length copyright protection Once a piece of art enters the public sphere, it takes on a character of its own as it is consumed, absorbed, and assimilated by other artists. It is important that art as a whole be able to thrive in society, but this is only possible when artists are able to make use of, and actively reinterpret and utilize existing works. This can only be furthered by a significant reduction in length of copyright protections. It is also disingenuous to suggest that the artist’s work is not itself the product of exposure to other artists’ work. All art is a response, even if only laterally, to the previous traditions. While those who gain a copyright get it because of a ‘novel concept’ it is open to question just how novel this has to be. A painter who paints a new painting in a style never seen before may well still be using oil and canvas just as thousands of artists have in the past.\n", "title": "" }, { "docid": "56279f6d062cb5b1bd8bf71857445ea8", "text": "intellectual property house would cut length copyright protection Lengthy copyright protection is extremely inefficient for the dissemination of works\n\nOnly a tiny fraction of copyrighted works ever become massive successes, breeding the riches of a JK Rowling or the like. Far more often, artists only make modest profits from their artistic works. In fact, almost all income from copyright comes immediately after publication of a work. [1] Ultimately, copyright serves to protect a work from being used, while at the same time that work does little to benefit the original artist. Freeing up availability of artistic works much faster would serve to benefit consumers in the extreme, who could now enjoy the works for free and engage in the dissemination and reexamination of the works. If artists care about having their work seen and appreciated, they should realize that they are best served by reduced copyright. Ultimately, long copyrights tend only to benefit corporations that buy up large quantities of work, and exploit it after artists’ deaths. 
Notably when the United States has a system that required a renewal of copyright after 28 years only 15% of copyrights were actually renewed. [2] It would be far better for everyone that copyright be shortened and to increase appreciation of works.\n\n[1] Gapper, J. “Shorten Copyright and Make it Stick”. Financial Times. 1 July 2010 http://www.ft.com/cms/s/0/c446aa38-84a7-11df-9cbb-00144feabdc0.html#axzz2JyVnvY00\n\n[2] Center for the Study of the Public Domain, “What Could Have Entered the Public Domain on January 1, 2012?”, Duke University, 2012, http://web.law.duke.edu/cspd/publicdomainday/2012/pre-1976\n", "title": "" }, { "docid": "6cd68abd7a74b07aa5b6eb30a04a5afa", "text": "intellectual property house would cut length copyright protection Long copyrights serve to severely limit access by the public to creative works\n\nBecause copyrights are so long, they often result in severely limiting access to some works by anyone. Many “orphan works”, whose copyright holders are unknown, cannot be made available online or in other free format due to copyright protection. This is a major problem, considering that 40% of all books fall into this category. [1] A mix of confusion over copyright ownership and unwillingness of owners to release their works, often because it would not be commercially viable to do so, means that only 2% of all works currently protected by copyright are commercially available. [2] The public is robbed of a vast quantity of artistic work, often simply because no one can or is willing to publish it even in a commercial context. Reducing copyright length would go a long way to freeing this work for public consumption.\n\n[1] Keegan, V. “Shorter Copyright Would Free Creativity”. The Guardian. 7 October 2009, http://www.guardian.co.uk/technology/2009/oct/07/shorter-copyright-term\n\n[2] ibid\n", "title": "" }, { "docid": "e57f6fafab19f06a6078bc47322f28a8", "text": "intellectual property house would cut length copyright protection Long copyright stifles creative responses to and re-workings of the original work\n\nArtistic creations, be they books, films, paintings, etc. serve as a spark for others to explore their own creativity. Much of the great works of art of the 20th century, like Disney films reworking ancient fairy tales, were reexaminations of existing works. [1] That is the nature of artistic endeavor, and cutting it off by putting a fence around works of art serves to cut off many avenues of response and expression. When copyright is too long, the work passes beyond the present into a new status quo other than that in which it was made. This means contemporary responses and riffs on works are very difficult, or even impossible. In the United States tough copyright law has prevented the creation of a DJ/remix industry because the costs of such remixing is prohibitive. [2] While a certain length of copyright is important, it is also critical for the expression of art to develop that it occur within a not overlong time. Furthermore, it is valuable for artists to experience the responses to their own work, and to thus be able to become a part of the discourse that develops, rather than simply be dead, and thus voiceless.\n\n[1] Keegan, V. “Shorter Copyright Would Free Creativity”. The Guardian. 
7 October 2009, http://www.guardian.co.uk/technology/2009/oct/07/shorter-copyright-term\n\n[2] Jordan, Jim, and Teller, Paul, “RSC Policy Brief: Three Myths about Copyright Law and Where to Start to Fix it” The Republican Study Committee, 16 November 2012, http://infojustice.org/wp-content/uploads/2012/11/three-myths.pdf\n", "title": "" }, { "docid": "9ee2da429bcf382887c62ed53dc608c8", "text": "intellectual property house would cut length copyright protection Control of an artistic work and its interaction in the public sphere is the just province of the creator and his or her designated successors\n\nThe creator of a piece of copyrighted material has brought forth a novel concept and product of the human mind. That artist thus should have a power over that work’s use. Art is the expression of its creator’s sense of understanding of the world, and thus that expression will always have special meaning to him or her. How that work is then used thus remains an active issue for the artist, who should, as a matter of justice be able to retain a control over its dissemination. That control can extend, as with the bequeathing of tangible assets, to designated successors, be the trusts, family, or firms. In carrying out the wishes of the artist, these successors can safeguard that legacy in their honor. Many artists care about their legacies and the future of their artistic works, and should thus have this protection furnished by the state through the protection of lengthy copyrights.\n", "title": "" }, { "docid": "36c368aa5ba5533d21a256e1e636ab8c", "text": "intellectual property house would cut length copyright protection The promise of copyright protection galvanizes people to develop creative endeavors\n\nThe incentive to profit drives a great deal of people’s intellectual endeavours. Without the guarantee of ownership over one’s artistic work, the incentive to invest in its creation is significantly diminished. Within a robust copyright system, individuals feel free to invest time in their pursuits because they have full knowledge that the fruits of their efforts will be theirs to reap. [1] With these protections the marginal cases, like people afraid to put time into actually writing a novel rather than doing more hours at their job, will take the opportunity. Even if the number of true successes is very small in the whole of artistic output, the chance of riches and fame can be enough for people to make the gamble. If their work were to quickly leave their control, they would be less inclined to do so. Furthermore, the inability of others to simply duplicate existing works as their own means they too will be galvanized to break ground on new ideas, rather than simply re-tread over current ideas.\n\n[1] Greenberg, M. “Reason or Madness: A Defense of Copyright’s Growing Pains”. John Marshall Review of Intellectual Property Law. 2007, http://www.jmripl.com/Publications/Vol7/Issue1/Greenberg.pdf\n", "title": "" }, { "docid": "c75d0bca48281d2eee2e8df140071779", "text": "intellectual property house would cut length copyright protection Artists deserve to profit from their work and copyright provides just recompense\n\nArtists generating ideas and using their effort to produce an intangible good, be it a new song, painting, film, etc. have a property right over those ideas and the products that arise from them. 
It is the effort to produce a real good, albeit an intangible one, that marks the difference between an idea in someone’s head that he or she does not act upon, and an artistic creation brought forth into the world. Developing new inventions, songs, and brands are all very intensive endeavours, taking time, energy, and often a considerable amount of financial investment, if only from earnings forgone in the time necessary to produce the work. Artists deserve as a matter of principle to benefit from the products of the effort of creation. [1] For this reason, robbing individuals of lifelong and transferable copyright is tantamount to stealing an actual physical product. Each is a real thing, even if one can be touched while the other is intangible in a physical sense. Copyright is the only real scheme that can provide the necessary protection for artists to allow them to enjoy the fruits of their very real labours.\n\n[1] Greenberg, M. “Reason or Madness: A Defense of Copyright’s Growing Pains”. John Marshall Review of Intellectual Property Law. 2007, http://www.jmripl.com/Publications/Vol7/Issue1/Greenberg.pdf\n", "title": "" }, { "docid": "ac4e6835b72efd7acb91a54c2f30a1d5", "text": "intellectual property house would cut length copyright protection Artists often rely on copyright protection to support dependents and family after, including after they are dead\n\nArtists may rely on their creative output to support themselves. This is certainly no crime, and existing copyright laws recognize this fact. Artists rarely have pensions of the sort that people in other professions have as they are rarely employed by anyone for more than a short period. [1] As a result artists who depend on their creations for their wherewithal look to their art and copyright as a guaranteed pension, a financial protection they can rely on even if they are too old to continue artistic or other productive work for their upkeep. They also recognize the need of artists to be able to support their dependents, many of whom too rely on the artist’s output. In the same way financial assets like stocks can be bequeathed to people for them to profit, so too must copyright be. Copyright is a very real asset and financial protection that should be sustained for the sake of artists’ financial wellbeing and that of their loved ones.\n\n[1] The Economist, “Art for money’s sake”, 27 May 2004, http://www.economist.com/node/2714114\n", "title": "" } ]
arguana
c11aa608224b72df0f8281cb868bdd20
The promise of copyright protection galvanizes people to develop creative endeavors. The incentive to profit drives a great deal of people’s intellectual endeavours. Without the guarantee of ownership over one’s artistic work, the incentive to invest in its creation is significantly diminished. Within a robust copyright system, individuals feel free to invest time in their pursuits because they have full knowledge that the fruits of their efforts will be theirs to reap. [1] With these protections, the marginal cases, like people afraid to put time into actually writing a novel rather than doing more hours at their job, will take the opportunity. Even if the number of true successes is very small in the whole of artistic output, the chance of riches and fame can be enough for people to make the gamble. If their work were to quickly leave their control, they would be less inclined to do so. Furthermore, the inability of others to simply duplicate existing works as their own means they too will be galvanized to break ground on new ideas, rather than simply re-tread over current ideas. [1] Greenberg, M. “Reason or Madness: A Defense of Copyright’s Growing Pains”. John Marshall Review of Intellectual Property Law. 2007, http://www.jmripl.com/Publications/Vol7/Issue1/Greenberg.pdf
[ { "docid": "0916667f4f8ab4757577524ad7c71161", "text": "intellectual property house would cut length copyright protection Artists generally desire to create, and will do so whether there is financial incentive or not. Besides, many artists live and die in relative poverty, [1] yet their experience seems to not have put off people from pursuing art as a profession and passion. The loss of a few marginal cases must be weighed against the massive losses to art in general, such as the huge curtailment of exploration of and response to existing works, which are often artistically meritorious in their own right, and also the rendering unavailable of much of the artistic output of the world.\n\n[1] The Economist, “Art for money’s sake”, 27 May 2004, http://www.economist.com/node/2714114\n", "title": "" } ]
[ { "docid": "1f6743d02deb8cff96c272b91d9a22d6", "text": "intellectual property house would cut length copyright protection Copyright would still exist, and the artist is able to profit from it, even if the length of copyright is reduced. People deserve recompense, but the stifling force of current laws make for negative outcomes. It would be better to strike a more appropriate balance, allowing artists to profit while they can, which in practice is only during the first few years after their work’s release, and at the same time allowing the art to reach the public sphere and to interact with it in fuller fashion.\n", "title": "" }, { "docid": "723feb4a1aeb234fd77f331664536f03", "text": "intellectual property house would cut length copyright protection The vast majority of artistic output results in having little lifelong, let alone postmortem economic value. Most artists glean all they are going to get out of their art within a couple years of its production, and the idea that it will sustain their families is silly. In the small number of cases of phenomenally successful artists, they usually make enough to sustain themselves and family, but even still, the benefits accrued to outliers should not be sufficient reason to significantly slow the pace of artistic progress and cross-pollination of ideas. Besides, in any other situation in which wealth is bequeathed, that money must have been earned already. Copyright is a bizarre construct that allows for the passing on of the right to accrue future wealth.\n", "title": "" }, { "docid": "9bf7b47c88c5d61ae6ccb831df70b137", "text": "intellectual property house would cut length copyright protection Once a piece of art enters the public sphere, it takes on a character of its own as it is consumed, absorbed, and assimilated by other artists. It is important that art as a whole be able to thrive in society, but this is only possible when artists are able to make use of, and actively reinterpret and utilize existing works. This can only be furthered by a significant reduction in length of copyright protections. It is also disingenuous to suggest that the artist’s work is not itself the product of exposure to other artists’ work. All art is a response, even if only laterally, to the previous traditions. While those who gain a copyright get it because of a ‘novel concept’ it is open to question just how novel this has to be. A painter who paints a new painting in a style never seen before may well still be using oil and canvas just as thousands of artists have in the past.\n", "title": "" }, { "docid": "6db314884c8666b59bc590264fbfb18b", "text": "intellectual property house would cut length copyright protection Inefficient or not, artists should have the right to retain control of their creations. Even if they are not making any money out of it, they still have the right, and often the desire, to maintain control of the way their art is used. If artists do not desire such control, they can opt to release their works into the public domain, while allowing those who do not wish to do so to protect their work.\n", "title": "" }, { "docid": "87322516994169134959d016dfe6f5c9", "text": "intellectual property house would cut length copyright protection While there is value in other artists exploring their own creativity by means of others’ work, it does not give them an overriding right. Rather, artists should have a meaningful control over how their art is disseminated and viewed in the world, as it is ultimately their creation. 
Furthermore, the protections copyright affords means that the responses that do arise must be more creative and novel in and of themselves, and not simply hackneyed riffing on existing work. This helps to benefit the arts by ensuring that there is regular innovation and change.\n", "title": "" }, { "docid": "4d28053593a68c3296a19a677beb45b6", "text": "intellectual property house would cut length copyright protection The problems associated with “orphan works” can be sorted out separate from limiting copyright length. It simply demands a closer attention from executors and legal professionals to sort these issues out. In terms of availability, it must be up to the artist to release the work as he or she sees fit. Encouraging artists and their successors to release their works into the public domain could go a long way to solving this problem without recourse to adulterating existing protections.\n", "title": "" }, { "docid": "8a72389f64db15b09f7b1968cadd3e9e", "text": "intellectual property house would cut length copyright protection The artistic drive to create is rarely stifled by having been successful. Individuals deserve to profit from their success and to retain control of what they create in their lifetime, as much as the founder of a company deserves to own what he or she creates until actively deciding to part with it. However, even patents, novel creations in themselves, have far less protection than copyright. While most patents offer protection for a total of twenty years, copyright extends far beyond the life of its creator, a gross overstretch of the right of use. [1]\n\n[1] Posner, Richard A., “Patent Trolls Be Gone”, Slate, 15 October 2012, http://www.slate.com/articles/news_and_politics/view_from_chicago/2012/10/patent_protection_how_to_fix_it.html\n", "title": "" }, { "docid": "9ee2da429bcf382887c62ed53dc608c8", "text": "intellectual property house would cut length copyright protection Control of an artistic work and its interaction in the public sphere is the just province of the creator and his or her designated successors\n\nThe creator of a piece of copyrighted material has brought forth a novel concept and product of the human mind. That artist thus should have a power over that work’s use. Art is the expression of its creator’s sense of understanding of the world, and thus that expression will always have special meaning to him or her. How that work is then used thus remains an active issue for the artist, who should, as a matter of justice be able to retain a control over its dissemination. That control can extend, as with the bequeathing of tangible assets, to designated successors, be the trusts, family, or firms. In carrying out the wishes of the artist, these successors can safeguard that legacy in their honor. Many artists care about their legacies and the future of their artistic works, and should thus have this protection furnished by the state through the protection of lengthy copyrights.\n", "title": "" }, { "docid": "c75d0bca48281d2eee2e8df140071779", "text": "intellectual property house would cut length copyright protection Artists deserve to profit from their work and copyright provides just recompense\n\nArtists generating ideas and using their effort to produce an intangible good, be it a new song, painting, film, etc. have a property right over those ideas and the products that arise from them. 
It is the effort to produce a real good, albeit an intangible one, that marks the difference between an idea in someone’s head that he or she does not act upon, and an artistic creation brought forth into the world. Developing new inventions, songs, and brands are all very intensive endeavours, taking time, energy, and often a considerable amount of financial investment, if only from earnings forgone in the time necessary to produce the work. Artists deserve as a matter of principle to benefit from the products of the effort of creation. [1] For this reason, robbing individuals of lifelong and transferable copyright is tantamount to stealing an actual physical product. Each is a real thing, even if one can be touched while the other is intangible in a physical sense. Copyright is the only real scheme that can provide the necessary protection for artists to allow them to enjoy the fruits of their very real labours.\n\n[1] Greenberg, M. “Reason or Madness: A Defense of Copyright’s Growing Pains”. John Marshall Review of Intellectual Property Law. 2007, http://www.jmripl.com/Publications/Vol7/Issue1/Greenberg.pdf\n", "title": "" }, { "docid": "ac4e6835b72efd7acb91a54c2f30a1d5", "text": "intellectual property house would cut length copyright protection Artists often rely on copyright protection to support dependents and family after, including after they are dead\n\nArtists may rely on their creative output to support themselves. This is certainly no crime, and existing copyright laws recognize this fact. Artists rarely have pensions of the sort that people in other professions have as they are rarely employed by anyone for more than a short period. [1] As a result artists who depend on their creations for their wherewithal look to their art and copyright as a guaranteed pension, a financial protection they can rely on even if they are too old to continue artistic or other productive work for their upkeep. They also recognize the need of artists to be able to support their dependents, many of whom too rely on the artist’s output. In the same way financial assets like stocks can be bequeathed to people for them to profit, so too must copyright be. Copyright is a very real asset and financial protection that should be sustained for the sake of artists’ financial wellbeing and that of their loved ones.\n\n[1] The Economist, “Art for money’s sake”, 27 May 2004, http://www.economist.com/node/2714114\n", "title": "" }, { "docid": "56279f6d062cb5b1bd8bf71857445ea8", "text": "intellectual property house would cut length copyright protection Lengthy copyright protection is extremely inefficient for the dissemination of works\n\nOnly a tiny fraction of copyrighted works ever become massive successes, breeding the riches of a JK Rowling or the like. Far more often, artists only make modest profits from their artistic works. In fact, almost all income from copyright comes immediately after publication of a work. [1] Ultimately, copyright serves to protect a work from being used, while at the same time that work does little to benefit the original artist. Freeing up availability of artistic works much faster would serve to benefit consumers in the extreme, who could now enjoy the works for free and engage in the dissemination and reexamination of the works. If artists care about having their work seen and appreciated, they should realize that they are best served by reduced copyright. 
Ultimately, long copyrights tend only to benefit corporations that buy up large quantities of work, and exploit it after artists’ deaths. Notably when the United States has a system that required a renewal of copyright after 28 years only 15% of copyrights were actually renewed. [2] It would be far better for everyone that copyright be shortened and to increase appreciation of works.\n\n[1] Gapper, J. “Shorten Copyright and Make it Stick”. Financial Times. 1 July 2010 http://www.ft.com/cms/s/0/c446aa38-84a7-11df-9cbb-00144feabdc0.html#axzz2JyVnvY00\n\n[2] Center for the Study of the Public Domain, “What Could Have Entered the Public Domain on January 1, 2012?”, Duke University, 2012, http://web.law.duke.edu/cspd/publicdomainday/2012/pre-1976\n", "title": "" }, { "docid": "6cd68abd7a74b07aa5b6eb30a04a5afa", "text": "intellectual property house would cut length copyright protection Long copyrights serve to severely limit access by the public to creative works\n\nBecause copyrights are so long, they often result in severely limiting access to some works by anyone. Many “orphan works”, whose copyright holders are unknown, cannot be made available online or in other free format due to copyright protection. This is a major problem, considering that 40% of all books fall into this category. [1] A mix of confusion over copyright ownership and unwillingness of owners to release their works, often because it would not be commercially viable to do so, means that only 2% of all works currently protected by copyright are commercially available. [2] The public is robbed of a vast quantity of artistic work, often simply because no one can or is willing to publish it even in a commercial context. Reducing copyright length would go a long way to freeing this work for public consumption.\n\n[1] Keegan, V. “Shorter Copyright Would Free Creativity”. The Guardian. 7 October 2009, http://www.guardian.co.uk/technology/2009/oct/07/shorter-copyright-term\n\n[2] ibid\n", "title": "" }, { "docid": "e7c451cef42dea040e6e286c1813b128", "text": "intellectual property house would cut length copyright protection Overlong copyright protection stifles the creativity and saps the time of artists\n\nIn some instances, when artists achieve success they face the enervating impulse that their achievement brings. They become satisfied and complacent with what they have, robbing them of their demiurgic drive. Worse, and more frequently, successful artists become embroiled in defending their work from pirates, downloaders, and other denizens of the internet. The result is artists wasting time in court, fighting lawsuits that sap them of time to actually focus on creating new works. Artists should be incentivized to look forward, not spend their time clinging to what they have already made. Obviously, they have a right to profit from their work to an extent, which is why a certain, reduced length of copyright is still important. But clearly the current length is far too great as artists retain their copyright until their death and many years after. Moreover once the artist has died it is difficult to see how copyright can be considered to be enhancing or even rewarding creativity; it simply becomes a negative weight on others creativity.\n", "title": "" }, { "docid": "e57f6fafab19f06a6078bc47322f28a8", "text": "intellectual property house would cut length copyright protection Long copyright stifles creative responses to and re-workings of the original work\n\nArtistic creations, be they books, films, paintings, etc. 
serve as a spark for others to explore their own creativity. Much of the great works of art of the 20th century, like Disney films reworking ancient fairy tales, were reexaminations of existing works. [1] That is the nature of artistic endeavor, and cutting it off by putting a fence around works of art serves to cut off many avenues of response and expression. When copyright is too long, the work passes beyond the present into a new status quo other than that in which it was made. This means contemporary responses and riffs on works are very difficult, or even impossible. In the United States tough copyright law has prevented the creation of a DJ/remix industry because the costs of such remixing is prohibitive. [2] While a certain length of copyright is important, it is also critical for the expression of art to develop that it occur within a not overlong time. Furthermore, it is valuable for artists to experience the responses to their own work, and to thus be able to become a part of the discourse that develops, rather than simply be dead, and thus voiceless.\n\n[1] Keegan, V. “Shorter Copyright Would Free Creativity”. The Guardian. 7 October 2009, http://www.guardian.co.uk/technology/2009/oct/07/shorter-copyright-term\n\n[2] Jordan, Jim, and Teller, Paul, “RSC Policy Brief: Three Myths about Copyright Law and Where to Start to Fix it” The Republican Study Committee, 16 November 2012, http://infojustice.org/wp-content/uploads/2012/11/three-myths.pdf\n", "title": "" } ]
arguana
41ee24fb8bd22da943e15362c2fa0918
Students would be able to benefit from being able to use resources at other universities Having paid for access to universities and the materials they provide for research, students have a right to expect that all the necessary materials will be available to them. Unfortunately this is not always the case. University libraries are unable to afford all the academic journals they wish to have access to or need for their courses. Therefore any student who wants to go into areas not anticipated by the course they are enrolled in will find that they do not have access to the materials they require. They then face the cost of getting individual access to an online journal article, which can be up to $42, despite there being almost zero marginal cost to the publisher. [1] This even affects the biggest and best-resourced university libraries. Robert Darnton, the director of Harvard University’s library, which pays $3.5 million per year for journal articles, says “The system is absurd” and “academically restrictive”; instead, “the answer will be open-access journal publishing”. [2] [1] Sciverse, “Pay-per-view”, Elsevier, http://www.info.sciverse.com/sciencedirect/buying/individual_article_purchase_options/ppv [2] Sample, Ian, “Harvard University says it can’t afford journal publishers’ prices”, The Guardian, 24 April 2012. http://www.guardian.co.uk/science/2012/apr/24/harvard-university-journal-publishers-prices
[ { "docid": "5e712f61850383eaacb8f05add810273", "text": "ity digital freedoms access knowledge universities should make all Most students most of the time stick to the core areas of their course and thus are not likely to encounter difficulties with finding the relevant information. For those who do require resources that the university library does not have access to they can use interlibrary loan for a small fee to cover the cost of sending the book or article between universities. [1] The universities in most countries can therefore effectively split the cost of access by specialising in certain subjects which limits the number of journals they need to buy while making the resources available to their students if they really need them.\n\n[1] Anon., “Inter-library loans” Birkbeck University of London. http://www.bbk.ac.uk/lib/about/how/ill/illguide Within the UK Cambridge charges £3 to £6, http://www.bbk.ac.uk/lib/about/how/ill/illguide in Europe the University of Vienna charges €2 http://bibliothek.univie.ac.at/english/interlibrary_loans.html while the United States is higher with Yale charging between $20-30 http://www.library.yale.edu/ill/\n", "title": "" } ]
[ { "docid": "32a528019381b7394f60293b6cba3efd", "text": "ity digital freedoms access knowledge universities should make all Public funding does not mean that everything should be free and open to use by the public. We do not expect to be allowed to use buildings that are built as government offices as if they were our own. The government builds large amounts of infrastructure such as airports and railways but we don’t expect to be able to use them for free.\n", "title": "" }, { "docid": "ecbaac5b29e7a9189b7c32a87ae49be7", "text": "ity digital freedoms access knowledge universities should make all Open access makes little difference to research. If an academic needs to use an article they don’t have access to they can pay for it and gain access quickly and efficiently.\n\nThe benefits to the economy may also be overstated; we don’t know how much benefit it will create. But we do know it would be badly damaging to the academic publishing industry. We also know there are risks with putting everything out in the open as economies that are currently research leaders will be handing out their advances for free. There is an immense amount of stealing of intellectual property, up to $400 billion a year, so research is obviously considered to be economically worth something. [1] With open access the proposal is instead to make everything available for free for others to take as and when they wish.\n\n[1] Permanent Select Committee on Intelligence, “Backgrounder on the Rogers-Ruppersberger Cybersecurity Bill”, U.S. House of Representatives, http://intelligence.house.gov/backgrounder-rogers-ruppersberger-cybersecurity-bill\n", "title": "" }, { "docid": "4e7a566f40f698d67b4fbc2030a0e074", "text": "ity digital freedoms access knowledge universities should make all Making these academic materials available to the general public does not mean they are useful to anyone. Many of the materials universities produce are not useful unless the reader has attended the relevant lectures. Rather than simply putting those lectures that are recorded and course handbooks online what is needed to open up education is systematically designed online courses that are available to all. Unfortunately what this provides will be a profusion of often overlapping and contradictory materials with little guidance for how to navigate through them for those who are not involved in the course in question.\n", "title": "" }, { "docid": "52bee454f3b72172298d221ad6905427", "text": "ity digital freedoms access knowledge universities should make all Academic work is not about profit. For most researchers the aim is to satisfy curiosity or to increase the sum of knowledge. Others are motivated by a desire to do good, or possibly for recognition. None of these things require there to be profit for the university.\n\nMoreover we should remember that the profit is not going to the individual who did the research, there is therefore no moral justification that the person has put effort in and so deserves to profit from it. The university does not even take the risk, which is born by the taxpayer who pays the majority of the research budget. Much of the profit from publishing this knowledge does not even go to the university. Instead academic publishers make huge profits through rentier capitalism. They have profit margins of 36% despite not doing the research, or taking any risk that goes into funding the research. 
[1]\n\n[1] Monbiot, George, “Academic publishers make Murdoch look like a socialist”, The Guardian, 29 August 2011, http://www.guardian.co.uk/commentisfree/2011/aug/29/academic-publishers-murdoch-socialist\n", "title": "" }, { "docid": "f09fafee9ff1ff245dc1d79d6d1c083e", "text": "ity digital freedoms access knowledge universities should make all This is trying to pull the wool over the eyes of those who fund the research in the first place; the taxpayer. The taxpayer (or in some cases private funder) pays for the research to be done and so is paying for the paper to be written. It then does not make sense that the taxpayer should pay again in order to access the research that they paid to have done in the first place. Yes there are small costs associated with checking and editing the articles but these could easily be added into research budgets especially as it would mean cutting out an extra cost that occurs due to the profit margins of the academic publishers. As Neelie Kroes, European Commission Vice-President for the Digital Agenda, says “Taxpayers should not have to pay twice for scientific research”. [1]\n\n[1] Kroes, Neelie, “Scientific data: open access to research results will boost Europe's innovation capacity”, Europa.eu, 17 July 2012. http://europa.eu/rapid/press-release_IP-12-790_en.htm?locale=en\n", "title": "" }, { "docid": "be781d6fad57e9e4baab0bc0d0238c22", "text": "ity digital freedoms access knowledge universities should make all The vast majority of people who go to University are not doing so simply because they are interested in a subject and want to find out more. Instead they are after the qualification and improved job prospects university provides. Even those few who are in large part studying out of curiosity and interest will likely be doing so at university because they like the student life and want the experience.\n\nHowever having courses and materials out in the open can even help universities with recruitment. Providing open access boosts a university’s reputation abroad which helps it in the international student market. Open access to academic work also helps give potential students a much better idea with what they will be studying which is very useful for students who are unsure where to choose. The benefits are obvious as shown by 35% of the Massachusetts Institute of Technology’s students choose the university after looking at its OpenCourseWare. [1]\n\n[1] Daniel, Sir John, and Killion, David, “Are open educational resources the key to global economic growth?”, Guardian Professional, 4 July 2012, http://www.guardian.co.uk/higher-education-network/blog/2012/jul/04/open-educational-resources-and-economic-growth\n", "title": "" }, { "docid": "493fba695eacceb2567565aee51d8cda", "text": "ity digital freedoms access knowledge universities should make all If business wants certain research to use for profit then it is free to do so. However it should entirely fund that research rather than relying on academic institutions to do the research and the government to come up with part of the funding. This would then allow the government to focus its funding on basic research, the kind of research that pushes forward the boundaries of knowledge which may have many applications but is not specifically designed with these in mind. This kind of curiosity driven research can be very important for example research into retroviruses gave the grounding that meant that antiretrovirals to control AIDS were available within a decade of the disease appearing. 
[1]\n\n[1] Chakradhar, Shraddha, “The Case for Curiosity”, Harvard Medical School, 10 August 2012, http://hms.harvard.edu/news/case-curiosity-8-10-12\n", "title": "" }, { "docid": "7dc97546372a1779c211de1379dae39f", "text": "ity digital freedoms access knowledge universities should make all Most universities are publically funded so should have to be open with their materials.\n\nThe United States University system is famously expensive and as a result it is probably the system in a developed country that has least public funding yet $346.8billion was spent, mostly by the states, on higher education in 2008-9. [1] In Europe almost 85% of universities funding came from government sources. [2] Considering the huge amounts of money spent on universities by taxpayers they should be able to demand access to the academic work those institutions produce.\n\nEven in countries where there are tuition fees that make up some of the funding for the university it is right that the public should have access to these materials as the tuition fees are being paid for the personal teaching time provided by the lecturers not for the academics’ publications. Moreover those who have paid for a university course would benefit by the materials still being available to access after they have finished university\n\n[1] Caplan, Bruan, “Correction: Total Government Spending on Higher Education”, Library of Economics and Liberty, 16 November 2012, http://econlog.econlib.org/archives/2012/11/correction_tota.html\n\n[2] Vught, F., et al., “Funding Higher Education: A View Across Europe”, Ben Jongbloed Center for Higher Education Policy Studies University of Twente, 2010. http://www.utwente.nl/mb/cheps/publications/Publications%202010/MODERN_Funding_Report.pdf\n", "title": "" }, { "docid": "1963eb08e8c9fa09f4b159239b0baed0", "text": "ity digital freedoms access knowledge universities should make all Openness benefits research and the economy\n\nOpen access can be immensely beneficial for research. It increases the speed of access to publications and opens research up to a wider audience. [1] Some of the most important research has been made much more accessible due to open access. The Human Genome Project would have been an immense success either way but it is doubtful that its economic impact of $796billion would have been realised without open access.\n\nThe rest of the economy benefits too. It has been estimated that switching to open access would generate £100million of economic activity in the United Kingdom as a result of reduced research costs for business and shorter development as a result of being able to access a much broader range of research. [2]\n\n[1] Anon., “Open access research advantages”, University of Leicester, http://www2.le.ac.uk/library/find/lra/openaccess/advantages\n\n[2] Carr, Dave, and Kiley, Robert, “Open access to science helps us all”, New Statesman, 13 April 2012. http://www.newstatesman.com/blogs/economics/2012/04/open-access-science-helps-us-all\n", "title": "" }, { "docid": "d3d5078ab584269e6432eb880de0647b", "text": "ity digital freedoms access knowledge universities should make all Opens up education\n\nHigher education, as with other levels of education, should be open to all. Universities are universally respected as the highest form of educational institution available and it is a matter of principle that everyone should have access to this higher level of education. 
Unfortunately not everyone in the world has this access usually because they cannot afford it, but it may also be because they are not academically inclined. This does not however mean that it is right to simply cut them off from higher educational opportunities. Should those who do not attend university not have access to the same resources as those who do?\n\nThis can have an even greater impact globally than within an individual country. 90% of the world’s population currently have no access to higher education. Providing access to all academic work gives them the opportunities that those in developed countries already have. [1]\n\n[1] Daniel, Sir John, and Killion, David, “Are open educational resources the key to global economic growth?”, Guardian Professional, 4 July 2012, http://www.guardian.co.uk/higher-education-network/blog/2012/jul/04/open-educational-resources-and-economic-growth\n", "title": "" }, { "docid": "2677118826a60d3d771794875a80e168", "text": "ity digital freedoms access knowledge universities should make all Making everything free to access will damage universities ability to tap private funding\n\nFor most universities even if the government is generous with funding it will still need for some projects require private funding. When providing money for research projects the government often requires cost sharing so the university needs to find other sources of funding. [1] Third parties however are unlikely to be willing to help provide funding for research if they know that all the results of that research will be made open to anyone and everyone. These businesses are funding specific research to solve a particular problem with the intention of profiting from the result. Even if universities themselves don’t want to profit from their research they cannot ignore the private funding as it is rapidly growing, up 250% in the U.S. from 1985-2005, while the government support is shrinking. [2]\n\n[1] Anon. (November 2010), “Research &amp; Sponsored Projects”, University of Michigan. http://orsp.umich.edu/funding/costsharing/cost_sharing_questions.html\n\n[2] Schindler, Adam, “Follow the Money Corporate funding of university research”, Berkley Science Review, Issue 13. http://sciencereview.berkeley.edu/articles/issue13/funding.pdf\n", "title": "" }, { "docid": "8bff7f2a3ca3b4359ec282f1f750d68b", "text": "ity digital freedoms access knowledge universities should make all Who will write and edit the work?\n\nYou can’t take the end result out of the system and assume all the rest of it will continue as usual. Journal articles don’t write themselves; there will still be costs for editors, typesetters, reviewing etc., as well as the time and cost of the writer. The average cost of publishing an article is about £4000. [1]\n\nThere have been two suggested forms of open access ‘Gold’ in which authors pay publishers article publication charges and ‘Green’ under which the author self-archives their papers in open access repositories. The gold option that the UK intends to implement could mean universities having to find an extra £60million a year. [2] In either case the cost is being put on the author.\n\nThis is exactly the same when asking academics to put their lectures, lecture notes, bibliographies etc online. 
They are being asked to put in more hours grappling with technology without being paid for it.\n\n[1] Moghaddam, Golnessa Galyani, “Why Are Scholarly Journals Costly even with Electronic Publishing?” http://eprints.rclis.org/14213/1/Why_are_scholarly_journals_costly_even_with_electronic_publishing_2009_ILDS_37__3_.pdf p.9\n\n[2] Ayris, Paul, “Why panning for gold may be detrimental to open access research”, Guardian Professional, 23 July 2012. http://www.guardian.co.uk/higher-education-network/blog/2012/jul/23/finch-report-open-access-research\n", "title": "" }, { "docid": "0fda9723f430cb7070996e69057e71fb", "text": "ity digital freedoms access knowledge universities should make all Universities deserve to profit from their work\n\nUniversities are providing a service just like almost any other business. They provide a service in terms of educating students who are enrolled with them and secondly they conduct research on a wide range of subjects. In both of these cases the university deserves to make a profit out of their work.\n\nWhen acting as an educator universities are in an educational free market, this is the case even when the cost is provided by the state. All universities are aiming to attract as many students as possible and earn as much as possible from fees. If the university is successful it will be able to charge more as it will attract students from further afield.\n\nWhile Universities may make a profit on research or even teaching this profit is for the benefit of society as a whole as the profits are usually simply reinvested in the University’s education and infrastructure. [1]\n\n[1] Anon. “What does the money get spent on?” The University of Sheffield, 2013. http://www.shef.ac.uk/finance/staff-information/howfinanceworks/higher_education/money_spent_on\n", "title": "" }, { "docid": "389db7ac8845897c9d6349c66dd482ec", "text": "ity digital freedoms access knowledge universities should make all Less incentive to study at university\n\nIf everything that University provides is open to all then there is less incentive to study at university. Anyone who is studying in order to learn about a subject rather than achieve a particular qualification will no longer need to attend the university in order to fulfil their aim. The actual benefit of university education is less in learning content per se than engaging with new ideas critically, something that is frequently more difficult in an online environment.\n\nMoreover if only some countries or institutions were to implement such open access then it makes more sense for any students who are intending to study internationally to go elsewhere as they will still be able to use the resources made available by that university. Open access if not implemented universally is therefore damaging to universities attempts to attract lucrative international students who often pay high tuition fees.\n", "title": "" } ]
arguana
a61235912427f0c141f149340e2b41ba
Creative commons prevents the incentive of profit The incentive of profit, rather than a creative productive drive, spurs the creation of new work. Without the guarantee of ownership over one’s work, the incentive to invest time and effort in its creation is significantly diminished. When the state is the only body willing to pay for the work and offers support only on these strict terms, there will be less interest in being involved with that work. Within a robust copyright system, individuals feel free to invest time in their pursuits because they have full knowledge that the fruits of their efforts will be theirs to reap. [1] If their work were to immediately leave their control, they would be less inclined to do so. The current copyright system that is built on profit encourages innovation and finding the best use for technology. Even when government has been the source of innovation, those innovations have only become widespread when someone is able to make a profit from them; the internet became big when profit-making companies began opening it up. If the government wants partnerships with businesses or universities that are not directly linked to government, then it has to accept that those partners can make a profit. Furthermore, the inability of others to simply duplicate existing works as their own means they too will be galvanized to break ground on new ideas, rather than simply re-treading current ideas and cannibalizing the fecund ground of creative commons works. [1] Greenberg, M. ‘Reason or Madness: A Defense of Copyright’s Growing Pains’. John Marshall Review of Intellectual Property Law. 2007. http://www.jmripl.com/Publications/Vol7/Issue1/Greenberg.pdf
[ { "docid": "1f285914411a2b4ae5d5b85e8082bb22", "text": "p ip digital freedoms access knowledge house would not fund any work not The government should not be interested in the profit motive but what is best for its citizens which will usually mean creative commons licenses rather than the state making a profit. This is even more likely when developments are a joint project with a for profit operation; taxpayers will rightly ask why they should be paying the research costs only for a private business to reap the profit from that investment. The government already provides a leg up to businesses in the form of providing infrastructure, a stable business environment, education etc., it should not be paying for their R&amp;D too.\n", "title": "" } ]
[ { "docid": "1063206cbb14200b5b8cb80beccf9c28", "text": "p ip digital freedoms access knowledge house would not fund any work not Government is quite simply not ‘like everyone else’. If government acted like a profit maximising business it would clearly have the ability to turn itself into a monopoly on almost everything. This is why the role of government is not to make a profit but to ensure the welfare and freedoms of its citizens.\n", "title": "" }, { "docid": "e68b2160b20745b0d1b5463d333bdb74", "text": "p ip digital freedoms access knowledge house would not fund any work not While there will be a few cases where it is undesirable that things that the government pays the funding for to be licensed through creative commons this should not stop creative commons from being the default choice. Creative commons is a good choice for the vast majority of what government does as weapons systems and other security related items are only a small part of government investment. Think of all the IT systems for government departments, it clearly makes sense that they should be creative commons so that they can be improved and adapted when it turns out they don’t work in quite the way they were designed. For example the UK government wasted £2,7billion on an IT project for the NHS, [1] in such a situation it would have made a lot of sense to have what was done open to others to pick up on and build upon if there was any of the software that could be of any use.\n\n[1] Wright, Oliver, ‘NHS pulls the plug on its £11bn IT system’, The Independent, 3 August 2011, http://www.independent.co.uk/life-style/health-and-families/health-news/nhs-pulls-the-plug-on-its-11bn-it-system-2330906.html\n", "title": "" }, { "docid": "d54a8453884d140759ad97a784801b33", "text": "p ip digital freedoms access knowledge house would not fund any work not Is it really in the public interest that there should be a norm that government information should be shared? There are clearly some areas where we do not want our government to share information; most clearly in the realm of security, [1] but also where the government and through them taxpayers can make a profit out of the product that the government has created. If the government creates a new radar system for the navy does it not make sense that they should be able to sell it at a profit for use by other country’s shipping? Also, the abundance of piracy online is not a reason to submit to the pirates and give them free access to information they should not receive.\n\n[1] See ‘ This House believes transparency is necessary for security ’\n", "title": "" }, { "docid": "603cc1a3ed7c24a35d4b7489c6196dfb", "text": "p ip digital freedoms access knowledge house would not fund any work not The choice to release work into the viral market is a business decision creators should have the power to choose, not a mandated requirement for funding. Some may decide that they will profit and gain more recognition through releasing their work into the creative commons, others may not. 
It should be remembered that Ordinance Survey was originally mapping for military purposes rather than for the general public so it might very well have decided that there is no reason to have its data open to the public and it would pose no benefit to enable to public to use that data for modification.\n", "title": "" }, { "docid": "6b9595cbbe1f721694a6bd9371e486ca", "text": "p ip digital freedoms access knowledge house would not fund any work not There is a difference between the general public and the government. It is the government that bought the rights to the work not the people even if the people are the ones that originally provided the money to develop the work by paying their taxes. It can be considered to be analogous to a business. Consumers pay for the products they buy and the profits from this enable the business to make the next generation of products. But that the consumers provided the profit that enabled that development does not enable the consumers to either get an upgrade or for the product to be released with a creative commons license\n", "title": "" }, { "docid": "5635854c5a5a5f098f1cce0ff601c7a7", "text": "p ip digital freedoms access knowledge house would not fund any work not Creative commons is not a good option for many government works\n\nIt is simply wrong to paint all government funding with one brush decreeing that it should only be spent if the results are going to be made available through creative commons. Governments fund a vast diversity of projects that could be subject to licensing and the pragmatic approach would be for the government to use whatever license is most suitable to the work at hand. For funding for art, or for public facing software creative commons licences may well be the best option. For software with strong commercial possibilities there may be good financial reasons to keep the work in copyright, there have been many successful commercial products that have started life being developed with government money, the internet being the most famous (though of course this is something for which the government never made much money and anyway the patent would run out before it became big). [1] With many military or intelligence related software, or studies, there may want to be a tough layer of secrecy preventing even selling the work in question, we clearly would not want to have creative commons licensing for the software for anything to do with nuclear weapons. [2]\n\n[1] Manjoo, Farhad, ‘Obama Was Right: The Government Invented the Internet’, Slate, 24 July 2012, http://www.slate.com/articles/technology/technology/2012/07/who_invented_the_internet_the_outrageous_conservative_claim_that_every_tech_innovation_came_from_private_enterprise_.html\n\n[2] It should however be noted that many governments do sell hardware and software that might be considered militarily sensitive. See ‘ This House would ban the sale of surveillance technology to non-democratic countries ’\n", "title": "" }, { "docid": "d13a36c3449843b2c0a08d3669a72821", "text": "p ip digital freedoms access knowledge house would not fund any work not Government, like everyone else, should be able to profit from its work, that profit benefits its citizens rather than harming them\n\nWe generally accept the principle that people who create something deserve to benefit from that act of creation as they should own that work. 
[1] This is a principle that can be applied as easily to government, whether through works they are funding or works they are directly engaged in, as to anyone else. The owners of the work deserve to have the choice to benefit from their own endeavours through having copyright over that work. Sometimes this will mean the copyright will remain with the person who was paid to do the work but most of the time this will mean government ownership. Public funding does not change this fundamental ownership and the quixotic bargain state funding in exchange for mandatory creative commons licensing is a perversion of that ownership.\n\nThe Texas Emerging Technology Fund is an example of the use of state funding in the private sector to produce socially useful technologies without thieving the ownership of new technologies from their creators. [2] Moreover states clearly benefit from being able to use any profit from their funding. It would clearly be in taxpayers interest if the state is able to make a profit out of the investments that taxpayers funding creates as this would mean taxes could be lower.\n\n[1] Greenberg, M. ‘Reason or Madness: A Defense of Copyright’s Growing Pains’. John Marshall Review of Intellectual Property Law. 2007. http://www.jmripl.com/Publications/Vol7/Issue1/Greenberg.pdf\n\n[2] Office of the Governor. ‘Texas Emerging Technology Fund’. 2012, http://governor.state.tx.us/ecodev/etf/\n", "title": "" }, { "docid": "3baa4a5430b4a3c7902102f5f21670b2", "text": "p ip digital freedoms access knowledge house would not fund any work not The default of copyright restricts the spreading of information\n\nCurrent copyright law assigns too many rights, automatically, to the creator. Law gives the generator a work full copyright protection that is extremely restrictive of that works reuse, except when strictly agreed in contracts and agreements. Making the Creative Commons license the standard for publicly-funded works generates a powerful normalizing force toward a general alteration of people’s defaults on what copyright and creator protections should actually be like. The creative commons license guarantees attribution to the creator and they retain the power to set up other for-profit deals with distributors, something that is particularly useful for building programs that need to be maintained. [1] At base the default setting of somehow having absolute control means creators of work often do not even consider the reuse by others in the commons. The result is creation and then stagnation, as others do not expend the time and energy to seek special permissions from the creator.\n\nBy normalizing the creative commons through the state funding system, more people will be willing to accept the creative commons as their private default. This means greater access to more works, for the enrichment of all. The result is that a norm is created whereby the assumption is that information should be open and shared rather than controlled and owned for profit by an individual or corporation. All governments recognise a right to freedom of information as part of freedom of expression making it the government’s responsibility to provide access to public information [2] and many are enabling this through creating freedom of information acts. 
[3] This is simply another part of that right.\n\n[1] ‘About The Licenses’, Creative Commons, 2010, http://creativecommons.org/licenses/\n\n[2] ‘Access to public information is government’s responsibility, concludes seminar in Montevideo’, United Nations Educational, Scientific and Cultural Organization, 8 October 2010, http://portal.unesco.org/ci/en/ev.php-URL_ID=30887&amp;URL_DO=DO_TOPIC&amp;URL_SECTION=201.html\n\n[3] See ‘ This House believes that there should be a presumption in favour of publication for information held by public bodies ’\n", "title": "" }, { "docid": "1f3472bc00c9f2c2a2e525cbda47fd6c", "text": "p ip digital freedoms access knowledge house would not fund any work not Creative commons allows existing work to be used as a building block by others\n\nThe nature of the internet and mass media is such that many creators can benefit from the freedom and flexibility that creative commons licenses furnish to them. Creative commons provides vast benefits in allowing a creation to have life after its funding has run out or beyond its original specifications. Creative commons means that the original work can be considered to be a building block that can simply be used as a foundation for more applications and modifications.\n\nFor example in many countries government has for decades produced official maps for the country but these can only be irregularly updated – often with a new release of a paper map. However the internet means that maps could easily be regularly updated online by enthusiastic users and volunteers as things change on the ground if those maps were available under creative commons. This is why applications like openstreetmap or google maps (which is not creative commons but can be easily built upon by creative commons projects) are now much more successful than traditional mapping and has often forced government map providers to follow suit such as the UK’s Ordnance Survey making many of its maps free and downloadable. [1] It is important to recollect that those operating under a creative commons license still maintain control of the marketable aspects of their work and can enter into deals for the commercial distribution of their works. [2]\n\n[1] Arthur, Charles, ‘Ordnance Survey launches free downloadable maps’, The Guardian, 1 April 2010, http://www.guardian.co.uk/technology/2010/apr/01/ordnance-survey-maps-download-free\n\n[2] ‘About The Licenses’, Creative Commons, 2010, http://creativecommons.org/licenses/\n", "title": "" }, { "docid": "aa1f222a6a52a38c4bb435f4cae6f5d1", "text": "p ip digital freedoms access knowledge house would not fund any work not If the public funds a product it belongs to them\n\nEveryone benefits and is enriched by open access to resources that the government can provide. A work is the province of its creator in most respects, since it is from the mind and hand of its creator that it is born. But when the state opts to fund a project, it too becomes a part-owner of the ideas and creation that springs forth. The state should thus seek to make public the work it spends taxpayer money to create. This is in exactly the same way that when an employee of a company creates something presuming there is the correct contract the rights to that work go to the company not the employee. [1]\n\nThe best means for doing this is through mandating that work created with state funding be released under creative commons licenses, which allow the work to be redistributed, re-explored, and to be used as springboards for new, derivative works. 
This is hampered by either the creator, or the government, retaining stricter forms of copyright, which effectively entitles the holder of the copyright to full control of the work that would not exist had it not been for the largesse of society. If state funded work is to have meaning it must be in the public sphere and reusable by the public in whatever form they wish. Simply put taxpayer bought so they own it.\n\n[1] Harper, Georgia K., ‘Who owns what?’, Copyright Crash Course, 2007, http://copyright.lib.utexas.edu/whoowns.html\n", "title": "" } ]
arguana
517aee8ef6cd8e1285bbf026d190b45d
Limiting ability of oppressed individuals to seek out help and community. Anonymous posting means that people who are made to feel ashamed of themselves or their identities within their local communities can seek out help and/or like-minded people. For example, a gay teenager in a fiercely homophobic community could find cyber communities that are considerably more tolerant, and that even face the same issues as they do. This can make an enormous difference to self-acceptance, as people are no longer subjected to a singular, negative view of themselves. [1] Banning anonymous posting removes this ability. [1] ‘In the Middle East, Marginalized LGBT Youth Find Supportive Communities Online’ Tech President. URL: http://techpresident.com/news/wegov/22823/middle-east-marginalized-lgbt-youth-find-supportive-communities-online ‘Online Identity: Is authenticity or anonymity more important?’ The Guardian. URL: http://www.guardian.co.uk/technology/2012/apr/19/online-identity-authenticity-anonymity
[ { "docid": "b7ee9fe89ed06256e4e48f2ca5c2d303", "text": "p ip internet digital freedoms privacy house would ban all anonymous Small reduction in ability to seek out help and community outweighed by a large reduction in hate speech. Anonymity is not essential to seeking out help and community. The internet is a large and expansive place, meaning that if an individual posts on an obscure site, people that they know in real life are very likely to see it. Even having your real name attached is unlikely to single you out unless you have a particularly distinctive name. Anonymity adds very little to their ability to seek out this help and community.\n\nAdditionally, anonymity is frequently used as a tool to spread hate speech, [1] which the people this point is concerned with are the primary victims of. Even if a lack of anonymity means a marginal reduction in their ability to seek out a supportive community, this is a worthwhile sacrifice for a significant reduction in the amount of hatred directed at them.\n\n[1] ‘Starting Points for Combating Hate Speech Online’. British Institute of Human Rights. URL: http://act4hre.coe.int/rus/content/download/28301/215409/file/Starting%20points.pdf\n", "title": "" } ]
[ { "docid": "c7091d47aeda262b8904eaeb2c93bf56", "text": "p ip internet digital freedoms privacy house would ban all anonymous Freedom from consequences is not a necessary component of freedom of speech. If someone is free from legal restraints surrounding their ability to speak, they are free to speak. Freedom of speech does not entitle an individual to absolute freedom of consequences of any kind, including social consequences to their speech. While someone should certainly be free to state their opinion, there is no reason why they should be entitled to not be challenged for holding that opinion.\n", "title": "" }, { "docid": "b1ba9e64bc3bad50395e126a6001a9b7", "text": "p ip internet digital freedoms privacy house would ban all anonymous Self-improvement through an alias or false identity is unlikely to lead to genuine self-improvement. When individuals have multiple identities, they may think of them as distinct from one another, and are thus unlikely to transfer self-improvement from one to another. For example, a recovering addict may only have a renewed attitude in their online identity, and not in real life where it is more important. This is unlikely to be beneficial, and may be actively harmful in terms of limiting the improvement of real life identities.\n", "title": "" }, { "docid": "e19d797b6d7dc8654ddc7779d3edb26e", "text": "p ip internet digital freedoms privacy house would ban all anonymous Protest of this kind is less meaningful. When an organisation such as this is criticised only by anonymous individuals, who are likely to be difficult to contact or learn more about, it is less likely to lead to any kind of long-term meaningful resistance. In the case of Anonymous and the Church of Scientology, there have been no notable acts of resistance to the Church of Scientology other than Anonymous.\n\nAnonymous resistance makes other kinds of resistance less likely to happen, and rarely leads to significant change or action.\n", "title": "" }, { "docid": "d85fa04c18b9a0cdd5d8e5dcf405846d", "text": "p ip internet digital freedoms privacy house would ban all anonymous Hate speech will happen regardless. A significant amount of online hate speech is made through accounts under the real life name of the speaker. It is notable that Facebook has required its users to use their real names since 2011, [1] but has still had significant issues with hate speech long after that. [2] The fact is that an enormous amount of hate speakers see what they are saying as entirely legitimate, and are therefore not afraid of having it connected to their real life identities. The fact is that 'hate speech' is localised and culture-dependent. Since the Internet brings many cultures together, hate speech will happen almost inadvertently.\n\nAdditionally, online hate speech is very difficult to prosecute even when connected to real life identities, [3] so this policy is unlikely to be effective at making those who now would be identified see any more consequences than before. In the Korean example the law was simply avoided by resorting to foreign sites. [4] The similar lack of consequences is likely to lead to a similar lack of disincentive to posting that kind of material.\n\n[1] ‘Twitter rife with hate speech, terror activity’. Jewish Journal. URL: http://www.jewishjournal.com/lifestyle/article/twitter_rife_with_hate_speech_terror_activity\n\n[2] ‘Facebook Admits It Failed On Hate Speech Following #FBrape Twitter Campaign And Advertiser Boycott’. International Business Times. 
URL: http://www.ibtimes.com/facebook-admits-it-failed-hate-speech-following-fbrape-twitter-campaign-advertiser-boycott-1282815\n\n[3] ‘Racists, Bigots and the Internet’. Anti-Defamation League. URL: http://archive.adl.org/internet/internet_law3.asp\n\n[4] ‘Law on real name use on Internet ruled illegal’, JoonAng Daily, http://koreajoongangdaily.joinsmsn.com/news/article/article.aspx?aid=295...\n", "title": "" }, { "docid": "30e7bf8f2af585091e30064c6aa96586", "text": "p ip internet digital freedoms privacy house would ban all anonymous Similar prevention can be achieved through raising internet awareness. In the case of children, parents taking a more pro-active role in monitoring and controlling their children’s online activities is likely to be more effective than the measures of this policy. Indeed, signalling that they do need to monitor their children can actually put their children in more danger, as there are considerable risks to children online even without anonymous posting.\n\nOther kinds of fraud can be similarly avoided by raising awareness: people should be made to realise that sending money or bank details to people you don’t know is a bad idea. In fact, the removal of internet aliases may even encourage people to trust people they don’t know, but do know the real names of, even though that is no more advisable.\n", "title": "" }, { "docid": "ac56b96b3c33f78eb9a21b8b4a53ecbc", "text": "p ip internet digital freedoms privacy house would ban all anonymous Moves illegal activity in harder to monitor areas. Those partaking in planning illegal activity will not continue to do so if hiding their identities is not possible. Instead, they will return to using more private means of communication, such as meeting in person, or using any online services that do guarantee anonymity such as TOR. While this may make planning illegal activity more difficult, it also makes it more difficult for law enforcement officials to monitor this behaviour, and come anywhere near stopping it: at least under the status quo they have some idea of where and how it is happening, and can use that as a starting point. Forcing criminals further underground may not be desirable. The authorities in cooperation with websites are usually able to find out who users are despite the veil of anonymity for example in the UK the police have arrested people for rape threats made against a campaigner for there to be a woman on UK banknotes.1\n\n1 Masters, Sam, 'Twitter threats: Man arrested over rape-threat tweets against campaigner Caroline Criado-Perez', The Independent, 28, July, 2013, http://www.independent.co.uk/news/uk/crime/twitter-threats-man-arrested-...\n", "title": "" }, { "docid": "79678eb50153611bf9bcf969be935e87", "text": "p ip internet digital freedoms privacy house would ban all anonymous Stopping anonymity does not meaningfully prevent bullying. Internet anonymity is not essentially to bullying: it can be done through a nearly infinite number of media. Importantly, it is not even essential to anonymous bullying. For example, it is quite simple to send anonymous text messages: all that is required is access to a phone that the victim does not have the number of. It is similarly easy to simply write notes or letters, and leave them in places where the victim will find them. Anonymous posting on the internet is far from the only place where these kinds of anonymous attacks are possible.\n\nAll this policy does is shifts the bullying into areas where they may be more difficult to monitor. 
Rather than sending messages online that can be, albeit with some difficulty, traced back to the perpetrator, or at least used as some kind of evidence, bullies are likely to return to covert classroom bullying that can be much more difficult to identify.\n", "title": "" }, { "docid": "568de474ce4eafb764eeaaaef2ad8001", "text": "p ip internet digital freedoms privacy house would ban all anonymous Limiting ability to experiment with identity.\n\nThe ability to post anonymously on the internet means that people can create a new identity for themselves where they will not be judged in terms of what they have done before. This can be particularly useful for people who are attempting to make significant positive reformations to their lives, such as recovering addicts, thereby facilitating self-improvement. Banning anonymous posting reduces individual’s abilities to better themselves in this way. [1]\n\n[1] ‘Online Identity: Is authenticity or anonymity more important?’ The Guardian. URL: http://www.guardian.co.uk/technology/2012/apr/19/online-identity-authenticity-anonymity\n", "title": "" }, { "docid": "b197e08d9fc0a3fbc8eeca3ef27b2f5e", "text": "p ip internet digital freedoms privacy house would ban all anonymous Damaging to freedom of speech.\n\nPeople are only truly free to say what they wish when they do not have to worry about being personally persecuted, either by peers, strangers, or their government, for what they are saying. [1] Removing the right to post anonymously increases the pressures people feel to post in a particular way, and thus limits the extent to which they can speak freely.\n\n[1] ‘Anonymity’. Electric Frontier Foundation. URL: https://www.eff.org/issues/anonymity\n", "title": "" }, { "docid": "cdcceffdcaae0e851f3909c1f66d1cda", "text": "p ip internet digital freedoms privacy house would ban all anonymous Reducing the extent to which large and powerful organisations can be criticised.\n\nOrganisations with lots of wealth and legal power can be difficult to criticise when one’s name and personal information is attached to all attempts at protest and/or criticism. Internet anonymity means that individuals can criticise these groups without fear of unfair reprisal, and their actions are, as a result, held up to higher levels of scrutiny. For example, internet anonymity were instrumental in the first meaningful and damaging protests against the Church of Scientology by internet group Anonymous. [1] Similarly anonymity has been essential in the model for WikiLeaks and other similar efforts like the New Yorker’s Strongbox. [2]\n\n[1] ‘John Sweeney: Why Church of Scientology’s greatest threat is ‘net’. The Register. URL: http://www.theregister.co.uk/2013/02/21/scientology_internet_threat/\n\n‘Anonymous vs. Scientology’. Ex-Scientology Kids. URL: http://exscientologykids.com/anonymous/\n\n[2] Davidson, Amy, ‘Introducing Strongbox’, The New Yorker, 15 May 2013, http://www.newyorker.com/online/blogs/closeread/2013/05/introducing-strongbox-anonymous-document-sharing-tool.html\n", "title": "" }, { "docid": "05060d7fe24cd1579f72fed6f764c25f", "text": "p ip internet digital freedoms privacy house would ban all anonymous Reducing hate speech.\n\nOpenly racist, sexist, or otherwise discriminatory comments made through public forums are much more likely when made anonymously, as people feel they are unlikely to see any consequences for voicing their hateful opinions. 
[1] This leads firstly to a propagation of these views in others, and a higher likelihood of attacks based on this hate, as seeing a particular view more often makes people feel it is more legitimate. [2] More importantly, it causes people from the targeted groups to feel alienated or unwelcome in particular places due to facets of their identity that are out of their control, and all people have a right not to be discriminated against for reasons such as these.\n\nThe proposed policy would enormously reduce the amount of online hate speech posted as people would be too afraid to do it. Although not exactly the same a study of abusive and slanderous posts on Korean forums in the six months following the introduction of their ban on anonymity found that such abusive postings dropped 20%. [3] Additionally it would allow governments to pursue that which is posted under the same laws that all other speech is subject to in their country.\n\n[1] ‘Starting Points for Combating Hate Speech Online’. British Institute of Human Rights. URL: http://act4hre.coe.int/rus/content/download/28301/215409/file/Starting%20points.pdf\n\n[2] ‘John Gorenfield, Moon the Messiah, and the Media Echo Chamber’. Daily Kos. URL: http://www.dailykos.com/story/2004/06/24/34812/-John-Gorenfeld-Moon-the-Messiah-and-the-Media-Echo-Chamber\n\n[3] ‘Real Name Verification Law on the Internet: A Poison or Cure for Privacy?’, Carnegie Melon University, http://weis2011.econinfosec.org/papers/Real%20Name%20Verification%20Law%20on%20the%20Internet%20-%20A%20Poison%20or%20Cu.pdf\n", "title": "" }, { "docid": "2bbd5671f39c2e21f2120dc86f1915fc", "text": "p ip internet digital freedoms privacy house would ban all anonymous Reducing currently illegal activity.\n\nInternet anonymity is very useful for planning and organising illegal activity, mostly buying and selling illegal goods, such as drugs, firearms, stolen goods, or child pornography, but also, in more extreme cases, for terrorism or assassinations. This is because it can be useful in making plans and advertisements public, thus enabling wider recruitment and assistance, while at the same time preventing these plans from being easily traced back to specific individuals. [1] For example, the website Silk Road openly offers users the opportunity to buy and sell illegal drugs. Sales on this site alone have double over the course of six months, hitting $1.7million per month. [2]\n\nThis policy makes it easier for the police to track down the people responsible for these public messages, should they continue. If anonymity is still used, it will be significantly easier to put legal pressure on the website and its users, possibly even denying access to it. If anonymity is not used, obviously it is very easy to trace illegal activity back to perpetrators. In the more likely event that they do not continue, it at least makes organising criminal activities considerably more difficult, and less likely to happen. This means the rule of law will be better upheld, and citizens will be kept safer. [3]\n\n[1] Williams, Phil, ‘Organized Crime and Cyber-Crime: Implications for Business’, CERT, 2002, http://www.cert.org/archive/pdf/cybercrime-business.pdf ‎ p.2\n\n[2] ‘Silk Road: the online drug marketplace that officials seem powerless to stop.’ The Guardian. URL: http://www.guardian.co.uk/world/2013/mar/22/silk-road-online-drug-marketplace\n\n[3] ‘Do dark networks aid cyberthieves and abusers?’ BBC News. 
URL: http://www.bbc.co.uk/news/technology-22754061\n", "title": "" }, { "docid": "8f2722ac2188990dd780dc209a44c128", "text": "p ip internet digital freedoms privacy house would ban all anonymous Reducing cyberbullying.\n\nWhen internet anonymity is used for bullying, it can make the situation much worse. Firstly, perpetrators are much less likely to hold back or be cautious as they are less concerned with the possibility of being caught. This means the bullying is likely to be more intense than when it is done in real life. [1] Additionally, for victims of cyberbullying, being unable to tell who your harasser is, or even how many there are can be particularly distressing. [2]\n\nAnonymous posting being significantly less available takes away the particularly damaging anonymous potential of cyberbullying, and allows cyberbullying to be more effectively dealt with.\n\n[1] ‘Traditional Bullying v. Cyberbullying’. CyberBullying, Google Sites. URL: https://sites.google.com/site/cyberbullyingawareness/traditional-bullying-vs-cyberbullying\n\n‘The Problem of Cyberbullies’ Anonymity’. Leo Burke Academy. URL: http://www.lba.k12.nf.ca/cyberbullying/anonymity.htm\n\n[2] ‘Cyberbullying’. Netsafe. URL: http://www.cyberbullying.org.nz/teachers/\n", "title": "" }, { "docid": "c57b893a1f887bb3879f32cd0acb0da6", "text": "p ip internet digital freedoms privacy house would ban all anonymous Reducing fraud using fake identities.\n\nAnonymous posting can be used to make people believe you are someone who you are not. This can be done in order to acquire money from victims either by establishing a dishonest relationship or offering fraudulent business opportunities. [1] It is also a frequently used tool in child abduction cases, where the perpetrator will pretend to be a child or even classmate to gain enough access to a child in order to make abduction viable. It is estimated that nearly 90% of all sexual solicitations of youth are made in online anonymous chat rooms. Additionally, in the UK alone over 200 cases of meeting a child following online grooming, usually via anonymous sites are recorded. [2]\n\nThese are enormous harms that can be easily avoided with the removal of anonymous posting online.\n\n[1] ‘Online Fraud’. Action Fraud. URL: http://www.actionfraud.police.uk/fraud-az-online-fraud\n\n[2] ‘Online child grooming: a literature review on the misuse of social networking sites for grooming children for sexual offences’. Australian Institute of Criminology. URL: http://www.aic.gov.au/documents/3/C/1/%7B3C162CF7-94B1-4203-8C57-79F827168DD8%7Drpp103.pdf\n", "title": "" } ]
arguana
588e5667f3b568ff8fa4aca0e2fb733e
The freedom of Holocaust deniers to use the internet legitimizes their organization and message in the eyes of consumers When the internet places no moral judgments on content, and the gatekeepers let all information through on equal footing, it lends an air of legitimacy that these beliefs have a voice, and that they are held by reasonable people. This legitimacy is enhanced by the anonymity of the internet where deniers can pose as experts and downplay their opponents’ credentials. While the internet is a wonderful tool for spreading knowledge, it can also be subverted to disseminate misinformation. Holocaust deniers have been able to use the internet to a remarkable extent in promoting pseudoscience and pseudo-history that have the surface appearance of credibility. [1] Compounding this further, the administrators of these sites are able to choke off things like dissenting commenters, giving the illusion that their view is difficult, or even impossible to reasonably challenge. They thus create an echo chamber for their ideas that allows them to spread and to affect people, particularly young people susceptible to such manipulation. By denying these people a platform on the internet, the government is able to not only make a moral stance that is unequivocal, but also to choke off access to new members who can be saved by never seeing the negative messages. [1] Lipstadt, Deborah. Denying the Holocaust: The Growing Assault on Truth and Memory. New York: Free Press, 1993.
[ { "docid": "954eba6f4edcd285062eeea494689bea", "text": "freedom expression house would block access websites deny holocaust While it is true that Holocaust deniers spread misinformation and seek to undermine and bend the systems of discourse to be as favorable as possible, they are a tiny fringe minority of opinion, and the number of sites debunking their pseudo-history is far greater than that of the actual deniers. Even young people are able to surf the web with great skill, and can easily see that the Holocaust denial position is fringe in the extreme.\n", "title": "" } ]
[ { "docid": "94fc988a2659eee65cfb8375aac21777", "text": "freedom expression house would block access websites deny holocaust The internet is a flourishing place for discourse because it is absolutely free to all, and everyone accepts and experiences the fruit of that freedom. When the government abandons its stance of neutrality and begins censoring materials, even if it begins only with the nastiest examples, it compromises the copper-fastened liberties that the internet was created to furnish. Many people will abuse that tool, but thankfully people can evade the hate sites easily and never have to experience them without compromising their own freedoms by censoring their opponents.\n", "title": "" }, { "docid": "8b5469b2a5012b34bb1b48d33184335b", "text": "freedom expression house would block access websites deny holocaust Holocaust deniers will always find ways to organize, be it in smaller pockets of face-to-face contact, clandestine social networking, or untraceable black sites online that governments cannot shut down because they cannot find them. The result of blocking these views from the public internet only serves to push their proponents further underground and to make them take less public strategies on board. Ultimately, it is a cosmetic, not substantive solution.\n", "title": "" }, { "docid": "70ce8eea7772645482e107bd7359e6f0", "text": "freedom expression house would block access websites deny holocaust Denying Holocaust-denier their right to speak is a threat to everyone’s freedom of speech. It is essential in a free society that people be able to express their views without fear of reprisal. As Voltaire said, “I disapprove of what you say, but I will defend to the death your right to say it”. As the facts are against the Holocaust deniers their opponents should have no fear of engaging them in open discussion as they will be able to demonstrate how erroneous their opponents are.\n", "title": "" }, { "docid": "849c3206fd128a5210115ee7037ef611", "text": "freedom expression house would block access websites deny holocaust Freedom of speech certainly may be curtailed when there is a real harm manifested from it. Holocaust denial, in its refusal to acknowledge one of the most barbaric acts in human history and attempt to justify terrible crimes, is an incredibly dehumanizing force, one that many people suffer from, even if they do not need to read it themselves. We may have the freedom to express ourselves but that does not mean we have the freedom to make up our own facts. The threat Holocaust deniers represent to free society demands that their right to speech online be curtailed.\n", "title": "" }, { "docid": "6332819988487b4394104066bbb1c556", "text": "freedom expression house would block access websites deny holocaust Forcing Holocaust deniers out of the spotlight and underground can only serve the cause of justice. Surveillance efforts can be employed more rigorously if need be, and will be considerably more legitimate to employ against these groups when their actions are acknowledged to be illegal. With them out of the spotlight they are less likely to rope in new recruits among casual, open-minded internet-goers.\n", "title": "" }, { "docid": "9b19713b8311a3140b4af4f17bb69f3a", "text": "freedom expression house would block access websites deny holocaust While some people might be enticed by the mystique of Holocaust deniers as transgressors, far more people will be put off by the firm hand of the state denying them a powerful platform from which to speak. 
Even if some are enticed these individuals will find it much more difficult to access the information they seek and so only the most determined will ultimately be influenced. Some Holocaust deniers will always lurk in the shadows, but society should show no quarter in the battle for truth.\n", "title": "" }, { "docid": "993c42e697744009cbdac3fcdff17ccc", "text": "freedom expression house would block access websites deny holocaust Taking a neutral stance is a tacit endorsement of the validity of the message being spread as being worthy of discussion. Holocaust denial does not deserve its day in the sun, even if the outcome were a thumping victory for reason and truth. Besides, the Holocaust deniers are not convinced by reason or argument. Their beliefs are impervious to facts, which is why debate is a pointless exercise except to give them a platform by which to spread their message, organize, and legitimize themselves in the marketplace of ideas.\n", "title": "" }, { "docid": "2e9eb217fb29343fce0a7e603f4e1b1a", "text": "freedom expression house would block access websites deny holocaust Holocaust denial sites are an attack on group identities\n\nThe internet is the center of discourse and public life in the 21st century. With the advent of social networks, people around the world live more and more online. Unlike any other kind of hateful speech that might flourish on the internet, Holocaust denial stands apart. This is due firstly to the particular mark that the Holocaust has made on the collective consciousness of western civilization as the ultimate act of human evil and depravity. The Holocaust is now a defining part of Jewish identity, denying it attacks all those who suffered and their decedents. Allowing Holocaust denial websites is allowing the rejection of groups’ very identity. Thus its apologists do far more harm than any troll, misogynist, or even apologist of other atrocities. For this reason, the government can justifiably censor sites promoting these absolutely offensive beliefs while not falling down any sort of slippery slope. The second reason Holocaust denial stands apart from other sorts of internet abuse is that these sites are often flashpoints for violence materializing in the real world. More than just talk, neo-Nazis seek dangerous action, and thus the state should be doubly ready to remove this threat from the internet. [1] Accepting that Holocaust deniers have a point that should be articulated across the internet would be helping these neo-nazi groups gain a foothold. The particularly grievous nature of the Holocaust demands the protection of history to the utmost.\n\n[1] BBC. “Germany’s Neo-Nazi Underground”. BBC News. 7 December 2011, http://www.bbc.co.uk/news/world-europe-16056399\n", "title": "" }, { "docid": "c699c1d32dbf1653b62471775a9896a9", "text": "freedom expression house would block access websites deny holocaust Governments should not allow forums for hate speech to flourish\n\nDenial of the Holocaust is fundamentally hate speech. It is the duty of the government to deny these offensive beliefs a platform of any kind. [1] By blocking these sites, the government denies a certain freedom of speech, but it is a necessarily harmful form of speech that has no value in the market place of ideas. Many people, often Jews, but also members of other discriminated against minorities like Roma, suffer directly from the speech, feeling not only offended, but physically threatened by such denials. 
Holocaust denial however goes beyond hate speech because it is not only offensive but factually wrong. The attempt to rewrite history and to sow lies causes a threat to the truth and an ability to co-opt the participation of gullible individuals to their cause that mere insults and demagoguery could not. It represents a threat to education by undermining the value of facts and evidence. For this reason, there is essentially no real loss of valuable speech in censoring the sites denying the Holocaust.\n\n[1] Lipstadt, Deborah. Denying the Holocaust: The Growing Assault on Truth and Memory. New York: Free Press, 1993.\n", "title": "" }, { "docid": "19183b18a4026e97e324f7ba3c63a209", "text": "freedom expression house would block access websites deny holocaust A ban would stop Holocaust deniers from engaging in effective real world actions\n\nThe greatest fear with hate groups is not just their hateful rhetoric online, but also their ability to take harmful action in the real world. When Holocaust deniers are able to set up standard websites, they have the ability to mobilize action on the ground. This means coordinating rallies, as well as acts of hooliganism and violence. One need only look at the sort of organization the Golden Dawn, a neo-fascist Greek party, has been able to develop in part through active use of social media and websites. [1] By capitalizing on the tools of the 21st century these thugs have succeeded in bringing sympathizers to their cause, often geographically diffuse, into a tight-knit community capable of action and disruption that harms all citizens, but particularly the minority groups they are presently fixated upon. By utilizing social media and websites Holocaust deniers have gained a new lease on life. The government can significantly hamper these organizations from taking meaningful actions, and from coalescing in the first place by denying them their favored and most effective platform.\n\n[1] Savaricas, Nathalie, “Greece’s neo-fascists are on the rise... and now they’re going into schools: How Golden Dawn is nurturing the next generation”, The Independent, 2 February 2013, http://www.independent.co.uk/news/world/europe/greeces-neofascists-are-on-the-rise-and-now-theyre-going-into-schools-how-golden-dawn-is-nurturing-the-next-generation-8477997.html\n", "title": "" }, { "docid": "d1e682423c09879a0b27a8087ceab030", "text": "freedom expression house would block access websites deny holocaust The internet should operate on the basis of net neutrality\n\nThe internet is a free market of ideas in which all beliefs can be submitted to the whole of the online community and then put to criticism and judgment. In the same way irrational beliefs like Creationism first found purchase on the internet only to be undermined and discredited by the efforts of online activists, so too have Holocaust deniers been forced by their presence on the web to justify their beliefs and submit evidence for scrutiny. In so doing the online community has systematically discredited the deniers and undermined their efforts at recruitment. By taking on a stance of net neutrality in the provision of internet and the blocking of sites, governments allow this process to play out and for the free exchange of ideas on which liberal democratic society is built upon to show its strength. [1] A neutral stance upholds the highest principles of the state, and allows people to feel safe in the veracity and representativeness of the internet content they are provided.\n\n[1] Seythal, T. 
“Holocaust Denier Sentenced to Five Years”. The Washington Post. 15 February 2007, http://www.washingtonpost.com/wp-dyn/content/article/2007/02/15/AR2007021501283.html\n", "title": "" }, { "docid": "d98972f7aaefca51176e165d901a5e3e", "text": "freedom expression house would block access websites deny holocaust The organizers will go underground\n\nA major risk with any extremist organization is that its members, when put under significant legal pressure, will go underground. For example The Pirate Bay, a major bittorrent file sharing website, simply moved to cloud hosting providers around the world to prevent it being shut down. [1] The power of the state to actually stop the development of neo-Nazi and Holocaust denier networks is extremely limited, as they will be able still to organize in secret, or even semi-publicly, via social networks and hidden websites. While their visible profile would be diminished, it would not guarantee any positive gains in terms of stamping down on their numbers. Indeed, when they no longer use public channels it will be ever harder for the government to keep track of their doings and of their leaders. The result of this censorship is a more emboldened, harder to detect group that now has a sense of legitimate grievance and victimhood against the state, which it can use to encourage more extreme acts from its members and can spin to its advantage during recruitment efforts. By leaving them in the open they feel more comfortable acting within the confines of the law and are thus far less dangerous, even if they are more visible.\n\n[1] BBC, “The Pirate Bay moves to the cloud to avoid shutdown”, BBC News, 17 October 2012, http://www.bbc.co.uk/news/technology-19982440\n", "title": "" }, { "docid": "2648348ea88205f0e1d0dc86aaaf2b43", "text": "freedom expression house would block access websites deny holocaust Everyone has a right to freedom of expression\n\nNo matter how unpalatable their opinions may be, everyone should have the right to voice them. The very core of a free society is the right to express one’s mind freely, without hindrance from the state. When the state presumes to judge good speech from bad, and to shut off the channel by which the designated bad speech may flow, it abrogates its duty to its citizens. The government does this by presuming to make value judgments on kinds of speech, and thus empowering itself, and not the people, to be the final arbiter of acceptable speech. Such a state of affairs is anathema to the continuation of a free society. [1] With free speech the all sides will get to voice their views and those whose opinions have most evidence will win out so there is no need for censorship as the marketplace of ideas will prevent ideas without sufficient evidence from having an impact. Furthermore, the particular speech in question is extremely fringe, and is for that reason a very unusual one to be seeking to silence. Speech can be legally curtailed only when there is a very real and manifest harm. But that is not the case here, where the participants are few and scattered, and those who would take exception to what the Holocaust deniers have to say can easily opt out online.\n\n[1] Chomsky, Noam. “His Right to Say it”. The Nation. 
28 February 1981, http://www.chomsky.info/articles /19810228.htm\n", "title": "" }, { "docid": "701a10d715989b057bc16379a3bd17c1", "text": "freedom expression house would block access websites deny holocaust Denial of access adds mystique to their beliefs\n\nBy denying people the ability to access sites set up by Holocaust deniers the government serves only to increase their mystique and thus the demand to know more about the movement and its beliefs. When the state opposes something so vociferously that it is willing to set aside the normal freedoms people have come to expect as granted, many people begin to take greater notice. There are always groups of individuals that wish to set themselves up as oppositional to the norms of society, to be transgressive in behavior and thus challenge the entrenched system. [1] When something like Holocaust denial is given that rare mystique of extreme transgression, it serves to encourage people, particularly young, rebellious people to seek out the group and even join it. This has been the case for neo-Nazism in Germany. In Germany Holocaust denial is illegal, yet it has one of the liveliest communities of neo-Nazis in Europe. [2] Their aggressive attacks have only served to boost the movement’s mystique and many have flocked to its banner. By allowing free expression and debate, many people would be saved from joining the barbaric organizations that promote the lies of Holocaust denial.\n\n[1] Gottfried, Ted. Deniers of the Holocaust: Who They Are, What They Do, Why They Do It. Brookfield, CT: Twenty-First Century Books, 2001.\n\n[2] BBC. “Germany’s Neo-Nazi Underground”. BBC News. 7 December 2011, http://www.bbc.co.uk/news/world-europe-16056399\n", "title": "" } ]
arguana
bbe5330ca4775a49610ad518907ab9de
The internet should operate on the basis of net neutrality The internet is a free market of ideas in which all beliefs can be submitted to the whole of the online community and then put to criticism and judgment. In the same way irrational beliefs like Creationism first found purchase on the internet only to be undermined and discredited by the efforts of online activists, so too have Holocaust deniers been forced by their presence on the web to justify their beliefs and submit evidence for scrutiny. In so doing the online community has systematically discredited the deniers and undermined their efforts at recruitment. By taking on a stance of net neutrality in the provision of internet and the blocking of sites, governments allow this process to play out and for the free exchange of ideas on which liberal democratic society is built upon to show its strength. [1] A neutral stance upholds the highest principles of the state, and allows people to feel safe in the veracity and representativeness of the internet content they are provided. [1] Seythal, T. “Holocaust Denier Sentenced to Five Years”. The Washington Post. 15 February 2007, http://www.washingtonpost.com/wp-dyn/content/article/2007/02/15/AR2007021501283.html
[ { "docid": "993c42e697744009cbdac3fcdff17ccc", "text": "freedom expression house would block access websites deny holocaust Taking a neutral stance is a tacit endorsement of the validity of the message being spread as being worthy of discussion. Holocaust denial does not deserve its day in the sun, even if the outcome were a thumping victory for reason and truth. Besides, the Holocaust deniers are not convinced by reason or argument. Their beliefs are impervious to facts, which is why debate is a pointless exercise except to give them a platform by which to spread their message, organize, and legitimize themselves in the marketplace of ideas.\n", "title": "" } ]
[ { "docid": "849c3206fd128a5210115ee7037ef611", "text": "freedom expression house would block access websites deny holocaust Freedom of speech certainly may be curtailed when there is a real harm manifested from it. Holocaust denial, in its refusal to acknowledge one of the most barbaric acts in human history and attempt to justify terrible crimes, is an incredibly dehumanizing force, one that many people suffer from, even if they do not need to read it themselves. We may have the freedom to express ourselves but that does not mean we have the freedom to make up our own facts. The threat Holocaust deniers represent to free society demands that their right to speech online be curtailed.\n", "title": "" }, { "docid": "6332819988487b4394104066bbb1c556", "text": "freedom expression house would block access websites deny holocaust Forcing Holocaust deniers out of the spotlight and underground can only serve the cause of justice. Surveillance efforts can be employed more rigorously if need be, and will be considerably more legitimate to employ against these groups when their actions are acknowledged to be illegal. With them out of the spotlight they are less likely to rope in new recruits among casual, open-minded internet-goers.\n", "title": "" }, { "docid": "9b19713b8311a3140b4af4f17bb69f3a", "text": "freedom expression house would block access websites deny holocaust While some people might be enticed by the mystique of Holocaust deniers as transgressors, far more people will be put off by the firm hand of the state denying them a powerful platform from which to speak. Even if some are enticed these individuals will find it much more difficult to access the information they seek and so only the most determined will ultimately be influenced. Some Holocaust deniers will always lurk in the shadows, but society should show no quarter in the battle for truth.\n", "title": "" }, { "docid": "94fc988a2659eee65cfb8375aac21777", "text": "freedom expression house would block access websites deny holocaust The internet is a flourishing place for discourse because it is absolutely free to all, and everyone accepts and experiences the fruit of that freedom. When the government abandons its stance of neutrality and begins censoring materials, even if it begins only with the nastiest examples, it compromises the copper-fastened liberties that the internet was created to furnish. Many people will abuse that tool, but thankfully people can evade the hate sites easily and never have to experience them without compromising their own freedoms by censoring their opponents.\n", "title": "" }, { "docid": "8b5469b2a5012b34bb1b48d33184335b", "text": "freedom expression house would block access websites deny holocaust Holocaust deniers will always find ways to organize, be it in smaller pockets of face-to-face contact, clandestine social networking, or untraceable black sites online that governments cannot shut down because they cannot find them. The result of blocking these views from the public internet only serves to push their proponents further underground and to make them take less public strategies on board. 
Ultimately, it is a cosmetic, not substantive solution.\n", "title": "" }, { "docid": "954eba6f4edcd285062eeea494689bea", "text": "freedom expression house would block access websites deny holocaust While it is true that Holocaust deniers spread misinformation and seek to undermine and bend the systems of discourse to be as favorable as possible, they are a tiny fringe minority of opinion, and the number of sites debunking their pseudo-history is far greater than that of the actual deniers. Even young people are able to surf the web with great skill, and can easily see that the Holocaust denial position is fringe in the extreme.\n", "title": "" }, { "docid": "70ce8eea7772645482e107bd7359e6f0", "text": "freedom expression house would block access websites deny holocaust Denying Holocaust-denier their right to speak is a threat to everyone’s freedom of speech. It is essential in a free society that people be able to express their views without fear of reprisal. As Voltaire said, “I disapprove of what you say, but I will defend to the death your right to say it”. As the facts are against the Holocaust deniers their opponents should have no fear of engaging them in open discussion as they will be able to demonstrate how erroneous their opponents are.\n", "title": "" }, { "docid": "d98972f7aaefca51176e165d901a5e3e", "text": "freedom expression house would block access websites deny holocaust The organizers will go underground\n\nA major risk with any extremist organization is that its members, when put under significant legal pressure, will go underground. For example The Pirate Bay, a major bittorrent file sharing website, simply moved to cloud hosting providers around the world to prevent it being shut down. [1] The power of the state to actually stop the development of neo-Nazi and Holocaust denier networks is extremely limited, as they will be able still to organize in secret, or even semi-publicly, via social networks and hidden websites. While their visible profile would be diminished, it would not guarantee any positive gains in terms of stamping down on their numbers. Indeed, when they no longer use public channels it will be ever harder for the government to keep track of their doings and of their leaders. The result of this censorship is a more emboldened, harder to detect group that now has a sense of legitimate grievance and victimhood against the state, which it can use to encourage more extreme acts from its members and can spin to its advantage during recruitment efforts. By leaving them in the open they feel more comfortable acting within the confines of the law and are thus far less dangerous, even if they are more visible.\n\n[1] BBC, “The Pirate Bay moves to the cloud to avoid shutdown”, BBC News, 17 October 2012, http://www.bbc.co.uk/news/technology-19982440\n", "title": "" }, { "docid": "2648348ea88205f0e1d0dc86aaaf2b43", "text": "freedom expression house would block access websites deny holocaust Everyone has a right to freedom of expression\n\nNo matter how unpalatable their opinions may be, everyone should have the right to voice them. The very core of a free society is the right to express one’s mind freely, without hindrance from the state. When the state presumes to judge good speech from bad, and to shut off the channel by which the designated bad speech may flow, it abrogates its duty to its citizens. 
The government does this by presuming to make value judgments on kinds of speech, and thus empowering itself, and not the people, to be the final arbiter of acceptable speech. Such a state of affairs is anathema to the continuation of a free society. [1] With free speech the all sides will get to voice their views and those whose opinions have most evidence will win out so there is no need for censorship as the marketplace of ideas will prevent ideas without sufficient evidence from having an impact. Furthermore, the particular speech in question is extremely fringe, and is for that reason a very unusual one to be seeking to silence. Speech can be legally curtailed only when there is a very real and manifest harm. But that is not the case here, where the participants are few and scattered, and those who would take exception to what the Holocaust deniers have to say can easily opt out online.\n\n[1] Chomsky, Noam. “His Right to Say it”. The Nation. 28 February 1981, http://www.chomsky.info/articles /19810228.htm\n", "title": "" }, { "docid": "701a10d715989b057bc16379a3bd17c1", "text": "freedom expression house would block access websites deny holocaust Denial of access adds mystique to their beliefs\n\nBy denying people the ability to access sites set up by Holocaust deniers the government serves only to increase their mystique and thus the demand to know more about the movement and its beliefs. When the state opposes something so vociferously that it is willing to set aside the normal freedoms people have come to expect as granted, many people begin to take greater notice. There are always groups of individuals that wish to set themselves up as oppositional to the norms of society, to be transgressive in behavior and thus challenge the entrenched system. [1] When something like Holocaust denial is given that rare mystique of extreme transgression, it serves to encourage people, particularly young, rebellious people to seek out the group and even join it. This has been the case for neo-Nazism in Germany. In Germany Holocaust denial is illegal, yet it has one of the liveliest communities of neo-Nazis in Europe. [2] Their aggressive attacks have only served to boost the movement’s mystique and many have flocked to its banner. By allowing free expression and debate, many people would be saved from joining the barbaric organizations that promote the lies of Holocaust denial.\n\n[1] Gottfried, Ted. Deniers of the Holocaust: Who They Are, What They Do, Why They Do It. Brookfield, CT: Twenty-First Century Books, 2001.\n\n[2] BBC. “Germany’s Neo-Nazi Underground”. BBC News. 7 December 2011, http://www.bbc.co.uk/news/world-europe-16056399\n", "title": "" }, { "docid": "2e9eb217fb29343fce0a7e603f4e1b1a", "text": "freedom expression house would block access websites deny holocaust Holocaust denial sites are an attack on group identities\n\nThe internet is the center of discourse and public life in the 21st century. With the advent of social networks, people around the world live more and more online. Unlike any other kind of hateful speech that might flourish on the internet, Holocaust denial stands apart. This is due firstly to the particular mark that the Holocaust has made on the collective consciousness of western civilization as the ultimate act of human evil and depravity. The Holocaust is now a defining part of Jewish identity, denying it attacks all those who suffered and their decedents. Allowing Holocaust denial websites is allowing the rejection of groups’ very identity. 
Thus its apologists do far more harm than any troll, misogynist, or even apologist of other atrocities. For this reason, the government can justifiably censor sites promoting these absolutely offensive beliefs while not falling down any sort of slippery slope. The second reason Holocaust denial stands apart from other sorts of internet abuse is that these sites are often flashpoints for violence materializing in the real world. More than just talk, neo-Nazis seek dangerous action, and thus the state should be doubly ready to remove this threat from the internet. [1] Accepting that Holocaust deniers have a point that should be articulated across the internet would be helping these neo-nazi groups gain a foothold. The particularly grievous nature of the Holocaust demands the protection of history to the utmost.\n\n[1] BBC. “Germany’s Neo-Nazi Underground”. BBC News. 7 December 2011, http://www.bbc.co.uk/news/world-europe-16056399\n", "title": "" }, { "docid": "0a76b33e6d4658966cd339ba26ed6db0", "text": "freedom expression house would block access websites deny holocaust The freedom of Holocaust deniers to use the internet legitimizes their organization and message in the eyes of consumers\n\nWhen the internet places no moral judgments on content, and the gatekeepers let all information through on equal footing, it lends an air of legitimacy that these beliefs have a voice, and that they are held by reasonable people. This legitimacy is enhanced by the anonymity of the internet where deniers can pose as experts and downplay their opponents’ credentials. While the internet is a wonderful tool for spreading knowledge, it can also be subverted to disseminate misinformation. Holocaust deniers have been able to use the internet to a remarkable extent in promoting pseudoscience and pseudo-history that have the surface appearance of credibility. [1] Compounding this further, the administrators of these sites are able to choke off things like dissenting commenters, giving the illusion that their view is difficult, or even impossible to reasonably challenge. They thus create an echo chamber for their ideas that allows them to spread and to affect people, particularly young people susceptible to such manipulation. By denying these people a platform on the internet, the government is able to not only make a moral stance that is unequivocal, but also to choke off access to new members who can be saved by never seeing the negative messages.\n\n[1] Lipstadt, Deborah. Denying the Holocaust: The Growing Assault on Truth and Memory. New York: Free Press, 1993.\n", "title": "" }, { "docid": "c699c1d32dbf1653b62471775a9896a9", "text": "freedom expression house would block access websites deny holocaust Governments should not allow forums for hate speech to flourish\n\nDenial of the Holocaust is fundamentally hate speech. It is the duty of the government to deny these offensive beliefs a platform of any kind. [1] By blocking these sites, the government denies a certain freedom of speech, but it is a necessarily harmful form of speech that has no value in the market place of ideas. Many people, often Jews, but also members of other discriminated against minorities like Roma, suffer directly from the speech, feeling not only offended, but physically threatened by such denials. Holocaust denial however goes beyond hate speech because it is not only offensive but factually wrong. 
The attempt to rewrite history and to sow lies causes a threat to the truth and an ability to co-opt the participation of gullible individuals to their cause that mere insults and demagoguery could not. It represents a threat to education by undermining the value of facts and evidence. For this reason, there is essentially no real loss of valuable speech in censoring the sites denying the Holocaust.\n\n[1] Lipstadt, Deborah. Denying the Holocaust: The Growing Assault on Truth and Memory. New York: Free Press, 1993.\n", "title": "" }, { "docid": "19183b18a4026e97e324f7ba3c63a209", "text": "freedom expression house would block access websites deny holocaust A ban would stop Holocaust deniers from engaging in effective real world actions\n\nThe greatest fear with hate groups is not just their hateful rhetoric online, but also their ability to take harmful action in the real world. When Holocaust deniers are able to set up standard websites, they have the ability to mobilize action on the ground. This means coordinating rallies, as well as acts of hooliganism and violence. One need only look at the sort of organization the Golden Dawn, a neo-fascist Greek party, has been able to develop in part through active use of social media and websites. [1] By capitalizing on the tools of the 21st century these thugs have succeeded in bringing sympathizers to their cause, often geographically diffuse, into a tight-knit community capable of action and disruption that harms all citizens, but particularly the minority groups they are presently fixated upon. By utilizing social media and websites Holocaust deniers have gained a new lease on life. The government can significantly hamper these organizations from taking meaningful actions, and from coalescing in the first place by denying them their favored and most effective platform.\n\n[1] Savaricas, Nathalie, “Greece’s neo-fascists are on the rise... and now they’re going into schools: How Golden Dawn is nurturing the next generation”, The Independent, 2 February 2013, http://www.independent.co.uk/news/world/europe/greeces-neofascists-are-on-the-rise-and-now-theyre-going-into-schools-how-golden-dawn-is-nurturing-the-next-generation-8477997.html\n", "title": "" } ]
arguana
27961f31f78b24ce6e3b4ab83522a6d6
Other parental controls are more practical and reasonable to administer. Monitoring would be extremely tedious and time-consuming. Many teens send over 100 texts a day, so it would clearly be very time-consuming to read them all along with all other digital communication.[1] By contrast, content filtering, contact management, and privacy protection parental controls, which can be used to block all incoming and outgoing information, require only minimal supervision. Parents who meanwhile deem their children immature when it comes to social networking and gaming can instead impose user restrictions on the relevant websites and devices. [2] Administering these alternative parental controls allows for more quality time with children. In this case, only when children acquire sufficient digital maturity and responsibility can these controls be lifted. As they have learnt to be mature in the digital environment, the children would most likely continue to surf safely even when the parental controls are lifted. [1] Goldberg, Stephanie, “Many teens send 100-plus texts a day, survey says”, CNN, 21 April 2010 [2] Burt, David. “Parental Controls Product Guide.” 2010 Edition. n.d. PDF File. Web. May 2013.
[ { "docid": "9b6e74e61296630cb478bb4b6242b38d", "text": "society family youth digital freedoms privacy house would allow parents While it is practical to use these parental controls, it is not always realistic to set such limited parameters to the digital freedom of children. Children need to understand that they have the capacity to breach their parents’ trust. [1] This not only allows a child to understand how to interact sensibly with the internet, but to experience taking an initiative to actually obey parents in surfing only safe sites. Selectively restricting a child’s digital freedom does not help in this case. Thus, monitoring is the only way for children to experience digital freedom in such a way that they too are both closely guided and free to do as they wish. Moreover, this is also self-contradictory because opposition claimed that children are capable of circumvention which children would be much more likely to do when blocked from accessing websites than simply monitored.\n\n[1] Shmueli, Benjamin, and Ayelet Blecher-Prigat. “Privacy for Children.” Columbia Human Rights Review. Rev. 759 (2010-2011): 760-795. Columbia Law School. Web. May 2013.\n", "title": "" } ]
[ { "docid": "8c1fade690a053445bcc9308cf185a3a", "text": "society family youth digital freedoms privacy house would allow parents The individual right to privacy must certainly encompass the digital realm as proposition says. It is also undeniable that individual privacy enhances individuality and independence. However, this privacy can and should be regulated lest parents leave children ‘abandoned’ to their rights. [1] “One cannot compare reading a child’s journal to accessing his or her conversations online or through text messages,” says Betsy Landers, the president of the National Parent-Teacher Association of the US and explains, “It’s simply modern involvement.” [2] Thus, Hillary Clinton argues, “children should be granted rights, but in a stage-by-stage manner that accords with and pays attention to their physical and mental development and capacities.” [1] Applying this principle, children should be given digital privacy to an equitable extent and regulated whereby both conditions depend upon the maturity of the child.\n\n[1] Shmueli, Benjamin, and Ayelet Blecher-Prigat. “Privacy for Children.” Columbia Human Rights Review. Rev. 759 (2010-2011): 760-795. Columbia Law School. Web. May 2013.\n\n[2] Landers, Betty. “It’s Modern Parental Involvement.” New York Times. 28 June 2012: 1. New York Times. May 2013.\n", "title": "" }, { "docid": "c23f57def7ef1f081e76cf127631839c", "text": "society family youth digital freedoms privacy house would allow parents It is true that trust is a cornerstone of relationships. Admittedly, the act of monitoring may initially stimulate feelings of distrust which are particularly destructive in relationships. But nonetheless, trust is earned, not granted. The only proactive way to gauge how much trust and responsibility to give a child in the digital world is monitoring. By monitoring a child, parents come to assess the initial capability of the child in digital responsibility and ultimately the level of trust and the level of responsibility he or she deserves and to be assigned subsequently. Ideally, the initial level of monitoring and follow-through should be maximum in order to make clear to the child that he is being guided. Only when a child proves himself and grows in digital maturity can monitoring and follow-through be gradually minimized and finally lifted. [1]\n\n[1] Bodenhamer, Gregory. Parents in Control. New York: Simon &amp; Schuster, 1995 Inc. Web. May 2013.\n", "title": "" }, { "docid": "99b4a38eab5bfd11fd51b53ed8c83826", "text": "society family youth digital freedoms privacy house would allow parents Opposition claims that monitoring is ‘laziness’. Admittedly, monitoring makes digital parenting more efficient and comprehensive. But, such technology makes parenting practical, not ‘lazy’. As it is, many people blame technology for their own shortcomings. [1] Thus, parents need to know that monitoring will not do all the work for them. It is not lazy to monitor your children, it is clearly essential that children are monitored when involved in activities such as sports. The internet is a dangerous environment just as the sports field is and should have similar adult supervision.\n\n[1] Bradley, Tony. “Blaming Technology for Human Error: Trying To Fix Social Problems With Technical Tools.” About. About. 
30 Mar 2005.\n", "title": "" }, { "docid": "338734b598e5c0c172cd065a5c5168be", "text": "society family youth digital freedoms privacy house would allow parents Certainly parents should help their children to make the most of their time with the computer and their phone. However, monitoring children in order to do so is lazy, or more precisely a form of ‘remote-control parenting’. Parents abuse their children’s inherent right to privacy and feel that they have satisfactorily fulfilled their parental role when instead they are just lazy and unwilling to talk to their child personally about being a responsible netizen. [1] How are children to develop a healthy relationship to sharing information and privacy protection if they are constantly being surveilled by their own parents? More effective parents would instead choose to personally and positively teach their children about time management.\n\n[1] Shmueli, Benjamin, and Ayelet Blecher-Prigat. “Privacy for Children.” Columbia Human Rights Review. Rev. 759 (2010-2011): 760-795. Columbia Law School. Web. May 2013.\n", "title": "" }, { "docid": "c09f65594010f6a506c0f4a5c9baae57", "text": "society family youth digital freedoms privacy house would allow parents While it is certainly beneficial for parents to immerse themselves in the digital world, it may not be good for them to be partially and informally educated by simple monitoring. Especially for parents who are not already familiar with the internet, monitoring may simply condition them to a culture of cyberstalking and being excessively in control of the digital behavior of their children. As it is, a number of children have abandoned Facebook because they feel that their parents are cyberstalking them. [1] Besides, there are other ways of educating oneself regarding ICT which include comprehensive online and video tutorials and library books that may cater to an unfamiliar parent’s questions about the digital world.\n\n[1] “Kids Are Abandoning Facebook To Flee Their Cyber-Stalking Parents.” 2 Oceans Vibe News. 2 Oceans Vibe Media. 11 Mar 2013. Web. May 2013\n", "title": "" }, { "docid": "5da4f42b2b706bd8f9744042b5fa6448", "text": "society family youth digital freedoms privacy house would allow parents Indeed it is important to consider that children do not receive or send sexually disturbing media. However, as proposition has already stated, parents are much less likely to be digitally savvy than their children. Should they wish to learn, children are likely to be able to penetrate any elaborate digital monitoring set by a parent. As it is, Defcon, one of the world’s largest hacker conventions, is already training 8- to 16-year olds to hack in a controlled environment. [1] That pornography is so widely available and so desirable is the product of a culture that glorifies sexuality and erotic human interaction. The effects on children’s well-being are by no means clear; indeed it can be argued that much of what parents are not able to communicate to their children in the way of sexual education is communicated to them through Internet pornography. While this brings with it all manner of problems, aside from the outrage of their parents there is little scientific data to suggest that mere exposure to pornography is causing wide-scale harm to children. Instead, it may be that many of the ‘objects’ of these debates on the rights of children are themselves quite a bit more mature than the debates would suggest.\n\n[1] Finkle, Jim. 
“Exclusive: Forget Spy Kids, try kiddie hacker conference.” Reuters. Thomson Reuters. 23 Jun 2011. Web. May 2013.\n", "title": "" }, { "docid": "fef373dc9e6b03c7f1271099bd1482c2", "text": "society family youth digital freedoms privacy house would allow parents While cyberbullying is indeed a danger to children, it is not an excuse to invade their personal life-worlds. The UNCRC clearly states that “(1) No child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation,” and that, “(2) The child has the right to the protection of the law against such interference or attack.” These ‘interferences’ or ‘attacks’ not only apply to third parties but to parents as well. [1] Moreover, in less traditional ‘offline’ spaces children have far greater ability to choose which information they share with their parents and what they do not. As online spaces are not inherently more dangerous than those offline, it seems reasonable to suggest that similar limitations and restrictions on invasions of privacy should apply in both. What a parent can do is to be there for their children and talk to them and support them. They should also spend time surfing the Internet together with them to discuss their issues and problems. But the child should always also have the opportunity to have his or her own protected and private space that is outside the ever-watchful surveillant eye of the parent.\n\n[1] United Nations Children’s Fund. Implementation Handbook for the Convention on the Rights of the Child. Fully revised 3rd edition. Geneva. United Nations Publications. Google Search. Web. May 2013.\n", "title": "" }, { "docid": "a0ece19575be99a9524d96a63d59f009", "text": "society family youth digital freedoms privacy house would allow parents Monitoring is lazy parenting.\n\nThe proposition substitutes the good, old-fashioned way of teaching children how to be responsible, with invasions of their privacy, so violating an inherent right [1]. Such parenting is called remote-control parenting. Parents who monitor their children’s digital behavior feel that they satisfactorily fulfil their parental role when in fact they are being lazy and uninvolved in the growth of their child. Children, especially the youngest, are “dependent upon their parents and require an intense and intimate relationship with their parents to satisfy their physical and emotional needs.” This is called a psychological attachment theory. Responsible parents would instead spend more time with their children teaching them about information management, when to and when not to disclose information, and interaction management, when to and when not to interact with others. [2] That parents have the ability to track their children is true, but doing so is not necessarily likely to make them better adults [3]. The key is for parents and children to talk regularly about the experiences of the child online. This is a process that cannot be substituted by parental monitoring.\n\n[1] United Nations Children’s Fund. Implementation Handbook for the Convention on the Rights of the Child. Fully revised 3rd edition. Geneva. United Nations Publications. Google Search. Web. May 2013.\n\n[2] Shmueli, Benjamin, and Ayelet Blecher-Prigat. “Privacy for Children.” Columbia Human Rights Review. Rev. 759 (2010-2011): 760-795. Columbia Law School. Web. May 2013.\n\n[3] “You Can Track Your Kids. But Should You?” New York Times. 
27 June 2012: 1. New York Times. May 2013.\n", "title": "" }, { "docid": "2e71cad9b6b08473d6a4a58bd5d74817", "text": "society family youth digital freedoms privacy house would allow parents Monitoring is a hindrance to forming relationships both outside and inside the family.\n\nIf children are being monitored, or if it seems to children that they are being monitored, they would immediately lose trust in their parents. As trust is reciprocal, children will also learn not to trust others. This will result in their difficulty in forging human connections, thereby straining their psychosocial growth. For them to learn how to trust therefore, children must know that they can break their parents’ trust (as said by the proposition before). This will allow them to understand, obey, and respect their parents on their own initiative, allowing them to respect others in the same manner as well. [1] This growth would only be possible if parents refuse this proposition and instead choose to educate their children how to be responsible beforehand.\n\n[1] Shmueli, Benjamin, and Ayelet Blecher-Prigat. “Privacy for Children.” Columbia Human Rights Review. Rev. 759 (2010-2011): 760-795. Columbia Law School. Web. May 2013.\n", "title": "" }, { "docid": "06ea4e09ea79c903a622d0b8179b5fe3", "text": "society family youth digital freedoms privacy house would allow parents This proposal is simply an invasion of privacy.\n\nChildren have as much right to privacy as any adult. Unfortunately there is yet to be a provision on the protection of privacy in either the United States Constitution or the Bill of Rights, though the Supreme Court states that the concept of privacy rooted within the framework of the Constitution. [1] This ambiguity causes confusion among parents regarding the concept of child privacy. Many maintain that privacy should be administered to a child as a privilege, not a right. [2] Fortunately, the UNCRC clearly states that “No child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour and reputation,” [3] making child privacy an automatic right. Just as children should receive privacy in the real world, so too should they in the digital world. Individual rights, including right to privacy, shape intrafamilial relationships because they initiate individuality and independence. [1]\n\n[1] Shmueli, Benjamin, and Ayelet Blecher-Prigat. “Privacy for Children.” Columbia Human Rights Review. Rev. 759 (2010-2011): 760-795. Columbia Law School. Web. May 2013. P.764\n\n[2] Brenner, Susan. “The Privacy Privilege.” CYB3RCRIM3. Blogspot. 3 April 2009. May 2013.\n\n[3] United Nations Children’s Fund. Implementation Handbook for the Convention on the Rights of the Child. Fully revised 3rd edition. Geneva. United Nations Publications. Google Search. Web. May 2013.\n", "title": "" }, { "docid": "b3da208b5758fd8d9e515ab8f893de06", "text": "society family youth digital freedoms privacy house would allow parents Monitoring allows parents to correct children who are wasting their time.\n\nParents also need to monitor their children to ensure that they are properly using the time they have with the computer and the mobile phone. 
According to the Kaiser Family Foundation, 40% of 8- to 18-year-olds spend 54 minutes a day on social media sites, [1] and “when alerted to a new social networking site activity, like a new tweet or Facebook message, users take 20 to 25 minutes on average to return to the original task”, resulting in 20% lower grades. [2] Thus, parents must constantly monitor the digital activities of their children and see whether they have been maximizing the technology at their disposal in terms of researching for their homework, connecting with good friends and relatives, and many more.\n\n[1] Foehr, Ulla G., Rideout, Victoria J., and Roberts, Donald F., “Generation M2 Media in the Lives of 8- to 18-Year-Olds”, The Kaiser Family Foundation, January 2010, p.21\n\n[2] Gasser, Urs, and Palfrey, John, “Mastering Multitasking”, Association for Supervision and Curriculum Development, March 2009, p.17\n", "title": "" }, { "docid": "5cd19a718ad89cd2b89ada7d18df3313", "text": "society family youth digital freedoms privacy house would allow parents Monitoring decreases children’s involvement with pornography.\n\nA 2005 study by the London School of Economics found that “while 57 per cent of the over-nines had seen porn online, only 16 per cent of parents knew.” [1] That number is almost certain to have increased. In addition, sexting has also become prevalent, as research from the UK suggests “over a third (38%) [of] under 18’s have received an offensive or distressing sexual image via text or email.” [2] This is dangerous because this digital reality extends to the real world. [3] W.L. Marshall says that early exposure to pornography may incite children to act out sexually against other children and may shape their sexual attitudes negatively, manifesting as insensitivity towards women and undervaluing monogamy. Only with monitoring can parents have absolute certainty of what their children are doing on the Internet. It may not allow them to prevent children from viewing pornography completely, but regulating the digital use of their children in such a way does not have to limit their digital freedoms or human rights.\n\n[1] Carey, Tanith. “Is YOUR child watching porn? The devastating effects of graphic images of sex on young minds”. Daily Mail. Daily Mail and General Trust. 25 April 2011. Web. May 2013.\n\n[2] “Truth of Sexting Amongst UK Teens.” BeatBullying. Beatbullying. 4 Aug 2009. Web. May 2013.\n\n[3] Hughes, Donna Rice. Kids Online: Protecting Your Children in Cyberspace. Michigan: Fleming H. Revell, 1998. ProtectKids. Web. May 2013.\n", "title": "" }, { "docid": "9cbe429588e6cfcdf30337a807d1640d", "text": "society family youth digital freedoms privacy house would allow parents Monitoring prevents cyberbullying.\n\nSocial approval is especially craved by teens because they are beginning to shift focus from family to peers. [1] Unfortunately, some teens may resort to cyberbullying others in order to gain erroneous respect from others and eliminate competitors in order to establish superficial friendships. Over the last few years, a number of cyberbullying cases have caused the tragic suicides of Tyler Clementi (2010), Megan Meier who was bullied online by a non-existent Josh Evans whom she had feelings for (2006), and Ryan Halligan (2003) among others. [2] Responsible parents need to be one step ahead because at these relevant stages, cognitive abilities are advancing, but morals are lagging behind, meaning children are morally unequipped in making informed decisions in cyberspace. 
[1] One important way to make this guidance more effective would be if parents chose to monitor their children’s digital behavior by acquiring their passwords and paying close attention to their social network activity such as Facebook and chat rooms, even if it means skimming through their private messages. Applying the categorical imperative, if monitoring becomes universal, then cyberbullying will no longer be a problem in the cyberspace as the perpetrators would be quickly caught and disciplined.\n\n[1] Bauman, Sheri. Cyberbullying: a Virtual Menace. University of Arizona, 2007. Web. May 2013.\n\n[2] Littler, Chris. “8 Infamous Cases of Cyber-Bullying.” The Sixth Wall. Koldcast Entertainment Media. 7 Feb 2011. Web. May 2013 .\n", "title": "" }, { "docid": "52daa5a1b5efe02df4eb857ca52e1522", "text": "society family youth digital freedoms privacy house would allow parents Monitoring raises digital awareness among parents.\n\nParents who are willing to monitor their children’s digital communications also benefit themselves. By setting up the necessary software and apps to secure their children’s online growth, parents familiarize themselves with basic digital skills and keep up with the latest in social media. As it stands there is a need to raise digital awareness among most parents. Sonia Livingston and Magdalena Bober in their extensive survey of the cyber experience of UK children and their parents report that “among parents only 1 in 3 know how to set up an email account, and only a fifth or fewer are able to set up a filter, remove a virus, download music or fix a problem.” [1] Parents becoming more digitally involved as a result of their children provides the added benefit of increasing the number of mature netizens so encouraging norms of good behavior online.\n\n[1] Livingstone, Sonia, and Magdalena Bober. “UK Children Go Online: Surveying the experiences of young people and their parents.” UK Children Go Online. Second Report (2004): 1-61.\n", "title": "" } ]
arguana
9ec5621a2d076652036ac3af0a4285ac
The minimum wage is little more than a political tool that ultimately harms the overall economy by raising the unemployment rate and driving businesses elsewhere Politicians have transformed the minimum wage into an indicator of social development. Governments often cite their raising of the minimum wage as an example of their commitment to fostering social justice and equality. This is all nonsense. The minimum wage is nothing more than a useful, simple tool that politicians can exploit without addressing underlying social and economic ills in society. [1] During times of economic expansion wages are generally rising as new businesses are formed and existing firms take on more capacity and workers. During such times, raising the minimum wage has no effect other than being a useful political move. In times of economic contraction, firms close and lay off workers and unemployment rates rise. In such times, the minimum wage hampers the market from clearing, keeping more people out of work than necessary. For markets to function efficiently, wages must be allowed to fluctuate freely, equilibrating with demand for labor and reflecting the macroeconomic situation. Minimum wages tend to lock in wages at pre-recession levels making countries less competitive and less quick to recover when economic downturns occur. Furthermore, minimum wages can often make countries unattractive for businesses to invest in, as the cost of hiring workers can serve as a serious disincentive. For this reason, businesses tend to locate in countries with no minimum wage laws, such as Germany, or where they are comparably low. In order to stay competitive, to bolster economic dynamism and gain global competitiveness, countries should treat labor like the commodity it is and allow the labor market to self-correct, and not institute minimum wage laws. [1] Dorn, Minimum Wage Socialism, 2010
[ { "docid": "9e69f6136617a2fb61f5029fcca2474c", "text": "business economic policy economy general house believes national minimum wage While economies may bounce back somewhat less quickly from downturns if wages are prevented from falling beneath a set minimum, it is a worthwhile sacrifice for the sake of preventing the exploitation of workers. The minimum wage is particularly important to uphold in times of recession, since increased unemployment encourages employers to slash wages unmercifully. Such reductions can severely harm individuals and families that often suffer from reductions in real wealth as a result of recessions. Furthermore, in the case of competitiveness, companies do not make their decisions of where to locate based solely on prevailing wage rates. Rather, they value educated, socially stable populations. A minimum wage ensures that working individuals have the resources to provide for the necessities of their families and tends to promote social stability and contentment by engendering feelings of social buy-in that are absent in the presence of exploitation and meager wages. [1] Furthermore, it is not clear that the minimum wage has a significantly detrimental impact on employment. [2]\n\n[1] Waltman, The Politics of the Minimum Wage, 2000\n\n[2] Allegretto et al, Do Minimum Wages Really Reduce Teen Employment?, 2011\n", "title": "" } ]
[ { "docid": "da87a192bae8b102d6bdc6acfb10d22f", "text": "business economic policy economy general house believes national minimum wage The state has an obligation to protect people from making bad decisions. Just as it tries to protect people from the harms of drugs by making them illegal, the state protects people from exploitation by setting wages at a baseline minimum. Everyone deserves a living wage, but they will not get this if there is no minimum wage. Businesses ruthlessly seeking to increase profit margins will always seek to reduce wages. This behavior is particularly harmful to those who receive the lowest wages. Upholding the right to work for any wage does not give people on the lowest wages a real choice, since it means people must work for what they are given, resulting in terrible exploitation. [1] Clearly, the minimum wage is a necessary safeguard for the protection of the weak and the vulnerable, and to guard people from unconscionable choices that an absolute right to work would force. Furthermore, the right to work does not mean much if an individual can only find employment in jobs which pay so lowly that they cannot support themselves. Thus, there is little difference between being employed below the minimum wage and being unemployed at the minimum wage. When employed, a person is no longer on unemployment statistics and the government has less pressure to act. When unemployed, they have the incentive and time to campaign for government action.\n\n[1] Waltman, The Politics of the Minimum Wage, 2000\n", "title": "" }, { "docid": "8a367c8417fcf930b3141afec35a3cc7", "text": "business economic policy economy general house believes national minimum wage Businesses are concerned with their bottom line. They will pay workers as little as possible in order to maximize profits. Certainly in some businesses employers require highly skilled workers for which they will be willing to pay competitive wages. However, the people who most require worker protection, those on minimum wage, are generally unskilled and interchangeable with a large body of potential employees. For this reason there is little impetus to pay workers at the lowest echelons of firms anything but the lowest possible wages. Even if some firms are willing to offer comparatively higher wages to entice honest and diligent non-skilled workers, the overall wage schedule will be depressed as far as is economically possible.\n", "title": "" }, { "docid": "f67beaf412f81b321a9e98e74b4104c0", "text": "business economic policy economy general house believes national minimum wage An individual can maintain little dignity when he is subjected to outright exploitation from employers who are unconcerned about their welfare and who have no incentive to pay them anything but the lowest possible wages. A minimum wage ensures that people who find employment can feel real self-worth. Furthermore, if people do indeed only feel self-fulfilled when they are employed, people will be all the more likely to accept poor working conditions and low wages for sake of their self-image. Also, young workers do have means of gaining experience, such as through unpaid internship programs. 
The minimum wage serves to protect workers of all ages and skill-levels, as no one deserves to be exploited.\n", "title": "" }, { "docid": "0aaec8337a8bd3c3dbb879239f7d16b5", "text": "business economic policy economy general house believes national minimum wage While it is of course socially desirable that everyone be able to find gainful employment and pursue happiness, this is not accomplished even remotely by the existence of a minimum wage. In fact, it denies more people the ability to pursue happiness because the minimum wage forces unemployment up as it becomes more expensive to hire workers. The choice to work should belong to the individual, whether his decisions have an effect on the wages of others or not. Individuals can only have control of their destinies when they are not limited in the range of their potential actions, which must include the right to sell their labor at whatever rate they find acceptable, be it at some arbitrary minimum or lower.\n", "title": "" }, { "docid": "4a5e839af8bc20c25097e6b788a4a0e3", "text": "business economic policy economy general house believes national minimum wage The incentive to enter the illicit market is actually higher when there is a minimum wage. While the relative advantage of entering the black market might be diminished for some who can enter the legitimate workforce and find employment, the higher numbers of people now unemployed would find it necessary either to seek welfare payments from the government or find alternative employment. Such employment could be readily found in the illegal market.\n", "title": "" }, { "docid": "6e1606adab31810db92c4a76d1d814d2", "text": "business economic policy economy general house believes national minimum wage Employers are not stupid. Many do see the value of higher paid workers and appreciate their harder work and dedication. That is exactly why a minimum wage is unnecessary; firms in pursuit of their own self-interest will pay workers competitive wages. Furthermore, social welfare payments will not decrease with the advent of a minimum wage since while some workers will not require income supplements from the state, the higher numbers of unemployed workers will look to the state exclusively as their source of income, raising the cost to the state and the taxpayer.\n", "title": "" }, { "docid": "753f0d78a8bbf08468cdbcc5f012313c", "text": "business economic policy economy general house believes national minimum wage There is no social justice in denying people the ability to work. The minimum wage serves to benefit insiders who are employed and harm outsiders who do not have jobs and cannot get them due to the dearth of jobs created by the wage laws. [1] The state may have the best interests of its citizens at heart when it institutes a minimum wage, but it accomplishes little when it leaves more of its citizens without work, and thus dependent upon the state for survival.\n\n[1] Dorn, Minimum Wage Socialism, 2010\n", "title": "" }, { "docid": "1a59aeedccc7c3c57ada35453fbc4ee6", "text": "business economic policy economy general house believes national minimum wage Individuals gain a sense of dignity from employment, as well as develop human capital, that can be denied them by a minimum wage\n\nThe ability to provide for oneself, to not be dependent on handouts, either from the state in the form of welfare or from citizens’ charity, provides individuals with a sense of psychological fulfillment. 
Having a job is key to many people’s self worth, and most capitalist-based societies place great store in an individual’s employment. Because the minimum wage denies some people the right to work, it necessarily leaves some people unable to gain that sense of fulfillment. [1] When people are unemployed for long stretches of time, they often become discouraged, leaving the workforce entirely. When this happens in communities, people often lose understanding of work entirely. This has occurred in parts of the United States, for example, where a cycle of poverty created by a lack of job opportunities has generated a culture of dependence on the state for welfare handouts. This occurrence, particularly in inner cities has a seriously corrosive effect on society. People who do not work and are not motivated to work have no buy-in with society. This results in crime and social disorder. Furthermore, the minimum wage harms new entrants to the workforce who do not have work experience and thus may be willing to work for less than the prevailing rate. This was once prevalent in many countries, often taking the form of apprenticeship systems. When a minimum wage is enforced, it becomes more difficult for young and inexperienced workers to find employment, as they are comparatively less desirable than more experienced workers who could be employed for the same wage. [2] The result is that young people do not have the opportunity to develop their human capital for the future, permanently disadvantaging them in the workforce. The minimum wage takes workers’ dignity and denies them valuable development for the future.\n\n[1] Dorn, Minimum Wage Socialism, 2010\n\n[2] Butler, Scrap the Minimum Wage, 2010\n", "title": "" }, { "docid": "ca1e0b0fd5c2ebdd3c6cd75a6ce867db", "text": "business economic policy economy general house believes national minimum wage The free market tends to treat workers fairly\n\nIn the absence of a minimum wage the free market will not tend toward the exploitation of workers. Rather, wages will reflect the economic situation of a country, guaranteeing that employment will be at the highest possible rate, and not be hampered by an artificial minimum. Some incomes may fall, but overall employment will rise, increasing the general prosperity of the country. [1] Employers understand that high pay promotes hard work. Businesses will not simply slash wages in the absence of a minimum wage, but will rather compete with one another to coax the best and most dedicated workers into their employ. This extends even into the lowest and least-skilled lines of work, as although workers may be largely interchangeable in terms of skill, they are distinct in their level of dedication and honesty. There is thus a premium at all levels of a business to hire workers at competitive wages. Furthermore, employers also take into account that there is a social safety net in virtually every Western country that prevents unemployed workers from starving or losing the barest standard of living. For this reason, wages can never fall below the level of welfare payments, as individuals will necessarily withhold their labor if they can receive the same or better benefit from not working at all than from being employed. 
Clearly, businesses will seek to employ the best workers and will thus offer competitive wages.\n\n[1] Newmark and Wascher, Minimum Wages, 2010\n", "title": "" }, { "docid": "b2228c3b8b2a98d785ccecabf3e4b739", "text": "business economic policy economy general house believes national minimum wage The minimum wage restricts an individual’s fundamental right to work\n\nIndividuals are autonomous beings, capable of making decisions for themselves. This includes the ability to make a value judgment about the value of one’s time and ability. If an individual wishes to sell his labor for a certain price, then he should not be restricted from doing so by the state. A minimum wage is in effect the government saying it can place an appropriate value on an individual, but an individual cannot value himself, which is an absurdity as the individual, who knows himself better than the state ever could, has a better grasp of the value of his own labor. At the most basic level, people should have their right to choice maximized, not circumscribed by arbitrary government impositions. When the state denies individuals the right to choose to work for low wages, it fails in its duty of protection, taking from individuals the right to work while giving them nothing in return other than the chimerical gift of a decent wage, should they ever be able to find a job. [1] Clearly, the minimum wage is an assault on the right to free choice.\n\n[1] Butler, Scrap the Minimum Wage, 2010\n", "title": "" }, { "docid": "6d20440b8a99cb54e571f18ae5388ee6", "text": "business economic policy economy general house believes national minimum wage The minimum wage provides a baseline minimum allowing people to embark freely in the pursuit of happiness\n\nWithout a minimum wage, the lowest paid members of society are relegated to effective serfdom, and the decisions of these members often force others to follow suit, accepting similarly low wages. There is no real freedom of choice for people at this lowest level of the social structure, since they must accept whatever wage is offered in order to feed themselves and their families. Their poverty and desperation for work make it much more difficult for them to act collectively to bargain for better wages. The minimum wage frees people from this bondage and guarantees them resources with which to make meaningful choices. [1] Without resources there can be no true choice, as all choices would be coerced by necessity. Because people’s choices are intrinsically interconnected, and wages tend to reflect the prevailing pressures of demand and supply, when an individual makes the choice to work for less than anyone else, he necessarily lowers the wage that others can ask, leading to a downward spiral of wages as workers undercut one another, each competing to prove he is worth the least. A minimum wage ensures workers do not harm each other through self-destructive wage competition. [2] What the minimum wage does to alleviate these problems is that it gives individuals the ability to pursue the good life, something that has become a global ideal. People want to be happy, and find that the only way to obtain the resources necessary to attain comfort and security is through employment. 
Fundamentally, the minimum wage grants the freedom not to be exploited, giving individuals the freedom to control their own destinies.\n\n[1] Waltman, The Politics of the Minimum Wage, 2000\n\n[2] Hillman, Public Finance and Public Policy: Responsibilities and Limitations of Government, 2009\n", "title": "" }, { "docid": "893be2d8b0ae3148f8d4f283d047c85f", "text": "business economic policy economy general house believes national minimum wage Higher wages boost economic growth\n\nEmployees work harder when they are paid more, but employers can often be more concerned with the short-term bottom line and will not treat workers in the lowest echelons of their firms with much consideration, viewing them instead as disposable and replaceable economic units. [1] Mandating a minimum wage can thus benefit firms, even if they do not recognize it, by making workers more productive and also fostering a general work ethic. [2] As workers feel more valued in the economic system, the more likely they are to work loyally and diligently for their employers. Furthermore, better pay means more disposable income in the hands of employees, which leads to greater demand by them for goods and services. This demand-induced economic growth is a very important part of economic growth. The more people are able to spend, the more money flows into the economy, leading to more business and higher employment. Without the minimum wage, a downward spiral of spending can ensue, proving deleterious to firms and the economy generally. Additionally, the minimum wage decreases expensive social welfare payments, since workers no longer need as many supplements to their wages from the state in order to make up for the shortfall created by too-low wages.\n\n[1] Freeman, Minimum Wages – Again!, 1994\n\n[2] Filion, EPI’s Minimum Wage Issue Guide, 2009\n", "title": "" }, { "docid": "7c018edc0668fd95cf2e93bdd6ba35c8", "text": "business economic policy economy general house believes national minimum wage The minimum wage aids in the propagation of social justice and the fair treatment of workers\n\nBusinesses operating in a free market are concerned principally with their bottom lines. In order to increase profits, firms will seek to exploit workers, to lower wages as far as possible. This exploitation will continue indefinitely, unless the state intervenes. The state does so by implementing a minimum wage. The lowest paid workers tend to be less educated, less skilled, and less organized than higher-paid employees. This makes them the easiest to manipulate and the easiest to replace. [1] In order to stop this outright exploitation of the most vulnerable members of society, the power of wage setting must fall to some extent within the purview of the state. Certainly, it is far better for state, which has citizens’ best interest at heart, to weigh in on the issue of setting wages than businesses, which tend not to care about their workers’ welfare or have competing interests. Furthermore, a minimum wage sends a social signal of valuation; it affirms that all people have worth, cannot be exploited, and are owed by dint of their humanity a certain level of treatment in the workforce, i.e. a minimum wage. This is important as a means to assist the self-empowerment of the poorest members of society, by encouraging them to value themselves. Also, the minimum wage aids in promoting social justice and equality by lowering wage disparities. 
[2] Citizens of more equal societies tend to have more in common and can share more in the construction of societal goals and aims. This form of social justice is certainly preferable to the class divisions propagated in the absence of a minimum wage, in which a part of society is relegated to permanent wage slavery.\n\n[1] Filion, EPI’s Minimum Wage Issue Guide, 2009\n\n[2] Waltman, The Politics of the Minimum Wage, 2000\n", "title": "" }, { "docid": "d8b9c66868a07ceed565ca59b9257a25", "text": "business economic policy economy general house believes national minimum wage The minimum wage encourages people to join the workforce rather than pursuing income through illegal channels\n\nWhen wages are extremely low the incentive to enter alternative markets is increased. This is particularly harmful in the case of illegal markets, such as those for drugs or prostitution. [1] When there is little to be gained from obtaining a legitimate job, no matter how plentiful they might be in the absence of a minimum wage, they would be undesirable by comparison to potentially highly lucrative black market opportunities. The minimum wage is essential for keeping the opportunity cost of entering the black market sufficiently high that people opt always to enter the mainstream, legal market. Furthermore, when the possibility of work in the legitimate market exists, even if work is harder to find due to a minimum wage, the very possibility of getting such a job will serve as a disincentive to pursuing illegal employment.\n\n[1] Kallem, Youth Crime and the Minimum Wage, 2004\n", "title": "" } ]
arguana
0c2ee1a916fe4cf23e8ca193d06f53ef
Source of trade Natural resources are a source of economic revenue for Africa. If managed well, this can become a genuine source of prosperity. Africa does not yet have developed secondary and tertiary sectors [1]; most of the continent’s economic activity revolves around the primary sector, such as resource extraction and farming. The high commodity price of items such as gold, diamonds and uranium is therefore valuable for Africa’s trade. Profits from this trade have allowed countries to strengthen their economic position by reducing debt and accumulating external reserves, a prime example of this being Nigeria. [1] Maritz,J. ‘Manufacturing: Can Africa become the next China?’ How We Made It In Africa 24 May 2011 http://www.howwemadeitinafrica.com/manufacturing-can-africa-become-the-next-china/9959/
[ { "docid": "c4216f7723aa80ae8214cd1d8a91e722", "text": "business economic policy international africa house believes africans are worse The trade of natural resources can be unreliable for African nations. Exports on the international market are subject to changes in price, which can harm export orientated countries should there be a decrease in value. The boom/bust cycle of oil has been particularly damaging. The drop of oil prices in the 1980s had a significant impact on African countries which were exporting the commodity [1] . The boom/bust cycle of resource value has impaired, rather than inhibited, some states’ debts. The price slump of copper in 2008 severely damaged Zambia’s mineral orientated economy, as FDI stopped and unemployment rose [2] . This debt crisis had been created by another slump in prices in the 1980s that forced the government to borrow to keep spending. [3] This demonstrates how international markets are unreliable as a sole source of income.\n\n[1] African Development Bank ‘African Development Report 2007’ pg.110 http://www.afdb.org/fileadmin/uploads/afdb/Documents/Publications/Maximizing%20the%20Benefits%20from%20-%20Oil%20and%20Gas%20in%20Africa.pdf\n\n[2] Bova,E. ‘Copper Boom and Bust in Zambia: The Commodity-Currency Link’ The Journal of Development Studies, 48:6, Pg.770 http://www.tandfonline.com/doi/pdf/10.1080/00220388.2011.649258\n\n[3] Liu, L. Larry, ‘The Zambian Economy and the IMF’, Academia.edu, December 2012, http://www.academia.edu/2351950/The_Zambian_Economy_and_the_IMF\n", "title": "" } ]
[ { "docid": "e3bef0903a722bbb4de76918363eb3e1", "text": "business economic policy international africa house believes africans are worse Employment practices are usually discriminatory against locals in Africa. Due to a lack of local technical expertise, firms often import professionals particularly for the highest paid jobs.\n\nThe presence of these extractive industries can also disrupt local economies, causing an overall decrease in employment by forcing the focus and funding away from other sectors [1] . Returning to the Nigerian example, the oil industry directly disrupted the agricultural industry, Nigeria’s biggest employment sector, causing increased job losses [2] .\n\n[1] Collins,C. ‘In the excitement of discovering oil, East Africa should not neglect agriculture’ The East African 9 March 2013 http://www.theeastafrican.co.ke/OpEd/comment/East-Africa-should-not-neglect-agriculture/-/434750/1715492/-/csn969/-/index.html\n\n[2] Adaramola,Z. ‘Nigeria: Naccima says oil sector is killing economy’ 13 February 2013 http://allafrica.com/stories/201302130929.html\n", "title": "" }, { "docid": "d3077646227b74c58d1adbc8b5e32f9f", "text": "business economic policy international africa house believes africans are worse Despite projects such as direct dividends, the gap between rich and poor is still worsened by natural resources. Investment from the profits of natural resources in human development is relatively low in Africa. In 2006, 29 of the 31 lowest scoring countries for HDI were in Africa, a symptom of low re-investment rates [1] . Generally it is only the economic elite who benefit from any resource extraction, and reinvestment rarely strays far from urban areas [2] . This increases regional and class inequality, ensuring poverty persists.\n\n[1] African Development Bank ‘African Development Report 2007’ pg.110 http://www.afdb.org/fileadmin/uploads/afdb/Documents/Publications/Maximizing%20the%20Benefits%20from%20-%20Oil%20and%20Gas%20in%20Africa.pdf\n\n[2] Ibid\n", "title": "" }, { "docid": "5c2ff4fa2fb3e972df8892d0fd427de8", "text": "business economic policy international africa house believes africans are worse Other countries are hypocritical in expecting Africa to develop in a sustainable way. Both the West and China substantially damaged their environments whilst developing. During Britain’s industrial revolution pollution led to poor air quality, resulting in the deaths of 700 people in one week of 1873 [1] . That said, sustainable resource management has become prominent in some African countries. Most countries in the South African Development Community (SADC) have laws which regulate the impact that mining has on the environment, ensuring accountability for extractive processes. In South Africa, there must be an assessment of possible environmental impacts before mining begins, then the company involved must announce how it plans to mitigate environmental damage [2] . 
In Namibia, there are conservation zones and communal forests where deforestation is restricted in order to prevent negative environmental consequences [3] .\n\n[1] Environmental History Resources ‘The Industrial Age’ date accessed 17/12/13 http://www.eh-resources.org/timeline/timeline_industrial.html\n\n[2] Southern Africa Research Watch ‘Land, biodiversity and extractive industries in Southern Africa’ 17 September 2013 http://www.osisa.org/book/export/html/5134\n\n[3] Hashange,H.’Namibia: Managing Natural Resources for Sustainable Development’ Namibia Economist 5 July 2013 http://allafrica.com/stories/201307051192.html\n", "title": "" }, { "docid": "8add8ff45403d043e3146447c1cad45f", "text": "business economic policy international africa house believes africans are worse Kleptocrats wish to increase their personal wealth and power, and will find a means to do so. To identify power over resources as the main motive is inaccurate, as noted by Charles Kenny in Foreign Policy: ‘For every Gen. Sani Abacha skimming billions off Nigeria's oil wealth, there is a Field Marshal Idi Amin massacring Ugandans by the thousands without the aid or incentive of significant mineral resources’ [1] . There are many ways to increase power; if mineral wealth isn’t available then they’ll find another way.\n\n[1] Kenny,C. ‘Is it really true that underground riches lead to aboveground woes? No, not really.’ Foreign Policy 6 December 2010 http://www.foreignpolicy.com/articles/2010/12/06/what_resource_curse#sthash.oZVe6bJW.mQFB5WaO.dpbs\n", "title": "" }, { "docid": "ecae5522e7448cb5fb4104e0c7399479", "text": "business economic policy international africa house believes africans are worse Resources are not the problem; bad management and agreements are the problem here. The presence of Foreign Direct Investment (FDI) in resource extraction can have a more positive impact than if it was absent. The presence of FDI is often associated with increased bureaucratic efficiency and rule of law [1] . There have been attempts by Western governments to curtail illicit transactions as well. In 2013, the British government spearheaded the Extractive Industries Transparency Initiative aimed at encouraging accountability from TNCs [2] . Governments control the resources; they simply need to be more willing to fight, and prevent corruption, to get a better deal.\n\n[1] Bannerman,E. ‘Foreign Direct Investment and the Natural Resource Curse’ Munich Personal RePEc Archive 13 December 2007 http://mpra.ub.uni-muenchen.de/18254/1/FDINRCECONDEV.pdf\n\n[2] Duffield,A. ‘Botswana or Zimbabwe? Exploiting Africa’s Resources Responsibly; Africa Portal 12 December 2012 http://www.africaportal.org/blogs/community-practice/botswana-or-zimbabwe-exploiting-africa%E2%80%99s-resources-responsibly\n", "title": "" }, { "docid": "587570a22b08d4a58b1a55414f066c32", "text": "business economic policy international africa house believes africans are worse Resources don’t have to mean poor governance. In 2013, attempts were made to counter corruption: the G8 and EU have both begun work on initiatives to increase the transparency of foreign firms extracting resources in Africa [1] . The Extractive Industries Transparency Initiative has been established in an attempt to improve governance on the continent by funding attempts to stem corruption in member countries. This latter initiative has resulted in the recovery of ‘billions of US$’ in Nigeria [2] . 
Other projects are continuing in other African countries with great hope of success.\n\n[1] Oxfam ‘Moves to tackle Africa’s ‘resource curse’ reach turning point’ 23 October 2013 http://www.oxfam.org/en/pressroom/pressrelease/2013-10-24/moves-tackle-africas-resource-curse-reach-turning-point\n\n[2] EITI ‘Impact of EITI in Africa: Stories from the ground’ 2010 http://eiti.org/files/EITI%20Impact%20in%20Africa.pdf\n", "title": "" }, { "docid": "e7bf5d418a8c84596f559e79f54826da", "text": "business economic policy international africa house believes africans are worse Bring Africa out of poverty\n\nThe African continent has the highest rate of poverty in the world, with 40% of sub-Saharan Africans living below the poverty line. Natural resources are a means of increasing the quality of life and the standard of living as long as revenues are reinvested into the poorest areas of society. There are 35 countries in Africa which already conduct direct transfers of resource dividends to the poor through technology or in person [1] . In Malawi, £650,192.22 was given out in dividends to the poorest in society ensuring that they were given $14 a month in 2013 [2] . This ensures that there is a large base of citizens profiting from natural resources which increases their income and, in turn, their Human Development Index scores [3] .\n\n[1] Devarajan, S. ‘How Africa can extract big benefits for everyone from natural resources’ in The Guardian 29/06/13 http://www.theguardian.com/global-development/poverty-matters/2011/jun/29/africa-extracting-benefits-from-natural-resources\n\n[2] Dzuwa,J. ‘Malawi: Zomba Rolls out Scial Cash Transfer Programme’ Malawi News Agency 11 June 2013 http://allafrica.com/stories/201306120531.html\n\n[3] Ibid\n", "title": "" }, { "docid": "aa78b8b542908593661996fcd814616b", "text": "business economic policy international africa house believes africans are worse Natural resources create employment\n\nThe extraction of natural resources creates the possibility of job creation which can strengthen African economies. Both domestic and foreign firms require man power for their operations, and they will often draw from the local labour force. Employment ensures a better standard of living for the workers and injects money in to the home economy leading to greater regional economic stability. In Nigeria, for example, the company Shell hires 6000 employees and contractors, with 90% being Nigerian and at higher wages than the GDP per capita [1] . This would indicate that the presence of natural resources is economically strengthening Africa.\n\n[1] Shell Nigeria ‘Shell at a glance’ date accessed 16 December 2013 http://www.shell.com.ng/aboutshell/at-a-glance.html\n", "title": "" }, { "docid": "8ca4aaa90c7103881f9c6e4d477a4d61", "text": "business economic policy international africa house believes africans are worse Environmental Damage\n\nBoth licit and illicit resource extraction have caused ecological and environmental damage in Africa. The procurement of many natural resources requires processes such as mining and deforestation, which are harmful to the environment. Deforestation for access purposes, timber and cattle has led to around 3.4 million hectares of woodland being destroyed between 2000 and 2010 and, in turn, soil degradation [1] . As Africa’s rainforest are necessary for global ecological systems, this is a significant loss. Mining and transportation also create damage through pollution and the scarring of the landscape. 
Mining produces various harmful chemicals which contaminate water and soil, a process which is worsened by illicit groups who cut corners to ensure higher profits [2] .\n\n[1] Food and Agriculture Organization of the United Nations ‘World deforestation decreases, but remains in many countries’ http://www.fao.org/news/story/en/item/40893/\n\n[2] Kolver,L. ‘Illegal mining threat to lawful operations, safety and the environment’ Mining Weekly 16 August 2013 http://www.miningweekly.com/article/illegal-mining-in-south-africa-a-growing-problem-that-has-to-be-stopped-2013-08-16\n", "title": "" }, { "docid": "b6683e983e9ad980926a37fb2a916289", "text": "business economic policy international africa house believes africans are worse Foreign companies gain most of the profits\n\nThe majority of investment in Africa by Trans National Companies (TNCs) goes towards resource extraction [1] . Many companies use transfer pricing, tax avoidance and anonymous company ownership to increase profits at the expense of resource abundant nations [2] . Production sharing agreements, where companies and states share in the profit of a venture, can often benefit the former over the latter. In 2012 Ugandan activists sued the government for one such deal where the country was likely to receive only half the profits rather than three quarters [3] .\n\nKofi Annan, former United Nations Secretary-General, has claimed that Africa’s outflow of funds by TNCs in the extractive industries is twice as high as inflows to the continent. Businesses such as Barclays have been criticised for their promotion of tax havens in Africa [4] . These allow TNCs to avoid government taxation for projects such as resource extraction, a symptom of the attitude of foreign companies to investment in Africa. The unfavourable inflow/outflow balance prevents reinvestment in Africa’s infrastructure, education and health services.\n\n[1] African Development Bank ‘African Development Report 2007’ pg.110 http://www.afdb.org/fileadmin/uploads/afdb/Documents/Publications/Maximizing%20the%20Benefits%20from%20-%20Oil%20and%20Gas%20in%20Africa.pdf\n\n[2] Stewart,H. ‘Annan calls for end to ‘unconscionable’ exploitation of Africa’s resources’ The Guardian 10 May 2013 http://www.theguardian.com/business/2013/may/10/kofi-annan-exploit-africa-natural-resources\n\n[3] Akankwasa,S. ‘Uganda activists sue government over oil Production Sharing Agreements.’ International Bar Association 01/05/2012 http://www.ibanet.org/Article/Detail.aspx?ArticleUid=1f9c0159-6595-4449-9c3f-536572e4df70\n\n[4] Provost,C. ‘Row as Barclays promotes tax havens as ‘gateway for investment in Africa’ The Guardian 20 November 2013 http://www.theguardian.com/global-development/2013/nov/20/barclays-bank-tax-havens-africa-mauritius-offshore\n", "title": "" }, { "docid": "b742f102c8bb037eeb9ff11e2ab648ef", "text": "business economic policy international africa house believes africans are worse Resource abundance has led to poor governance\n\nCorruption is a common feature of African governance [1] , with resources being a major source of exploitation by the political class. Natural resources are often controlled by the government. As resources rather than taxes fund the government’s actions, there is a decrease in accountability to the citizenry, which enables the government to abuse its ownership of this land to make profit [2] . 
To benefit from resource wealth, money from the exploitation of mineral wealth and other sources needs to be reinvested in to the country’s economy and human capital [3] . Investing in infrastructure and education can encourage long term growth. However a large amount of funds are pocketed by politicians and bureaucrats instead, hindering growth [4] . Africa Progress Panel (APP) conducted a survey on five mining deals between 2010 and 2012 in the Democratic Republic of Congo (DRC). They found that the DRC was selling off state-owned mining companies at low prices. The new offshore owner would then resell the companies for much more, with much of the profit finding its way to DRC government officials [5] . The profits were twice as high as the combined budget for education and health, demonstrating that corruption caused by resource exploitation detracts from any long term growth.\n\n[1] Straziuso,J. ‘No African Leader wins $45m Good Governance Award’ Yahoo News 14 October 2013 http://news.yahoo.com/no-african-leader-wins-5m-good-governance-award-105638193.html\n\n[2] Hollingshead,A. ‘Why are extractive industries prone to corruption?’ Financial Transparency Coalition 19 September 2013 http://www.financialtransparency.org/2013/09/19/why-are-extractive-industries-prone-to-corruption-part-ii/\n\n[3] Pendergast,S.M., Kooten,G.C., &amp; Clarke,J.A. ‘Corruption and the Curse of Natural Resources’ Department of Economics University of Victoria, 2008 pg.5 http://economics.ca/2008/papers/0633.pdf\n\n[4] Ibid\n\n[5] Africa Progress Panel ‘Report: DRC mining deals highlight resource corruption’ 14 May 2013, http://www.africaprogresspanel.org/wp-content/uploads/2013/09/20130514_Report_DRC_mining_deals_highlight_resource_corruption_ENG1.pdf\n", "title": "" }, { "docid": "877e81ddb4208c17c0b29732a0b2f0b3", "text": "business economic policy international africa house believes africans are worse Resources are a source of conflict\n\nThere is a strong connection between the presence of natural resources and conflict within Africa. Natural resources, especially those with a high commodity price such as diamonds, are a useful means of funding rebellions and governments [1] . The 1991 civil war in Sierra Leone became infamous for the blood diamonds which came from mines with forced slavery. These diamonds were used to fund the Revolutionary United Front (RUF) for eleven years, extending the blood-shed. Continued conflict in the Congo is also attributed to the control of mineral wealth [2] and exemplifies how resources have negatively impacted Africa.\n\n[1] Pandergast, 2008, http://economics.ca/2008/papers/0633.pdf\n\n[2] Kharlamov,I. ‘Africa’s “Resource Wars” Assume Epidemic Proportions’ Global Research 24 November 2014 http://www.globalresearch.ca/africas-resource-wars-assume-epidemic-proportions/5312791\n", "title": "" } ]
arguana
d7f8dc36b559d119dc9d73569abc14a7
Many developing countries support entrepreneurship and gender equality In many developing countries, entrepreneurship is supported to create jobs and dynamic work conditions, and women are empowered and politically represented reducing any concerns of feeling as if they don’t belong. For example in Tunisia, many initiatives are being introduced to promote the entrepreneurship ecosystem including angel investing and attempts to reduce administrative barriers (9). Moreover, regarding gender equality, Tunisia’s Parliament has approved an amendment ensuring that women have greater representation in local politics. This amendment includes a proposal for gender parity in electoral law. (10)
[ { "docid": "21bdbc175a9f66897d1ae923a9b1ec24", "text": "employment international global society immigration minorities Making a start in encouraging entrepreneurship and gender identity is not likely to be enough to make a county attractive when compared against countries that are much further down the path. According to the Global Gender Gap Report 2016 Tunisia is still in the bottom quartile of the rankings on gender equality.(15)\n", "title": "" } ]
[ { "docid": "443d6d4ce43416e6aea93888a75f1ad9", "text": "employment international global society immigration minorities Most job vacancies in African countries ask for a university degree even if a degree is ultimately not the most important attribute for the job. (13) So the opportunities are there for those who would be considered to be intellectuals, it is everyone else for whom opportunities in their native land are lacking.\n", "title": "" }, { "docid": "daf4815370e09ad2c46523171caee6bd", "text": "employment international global society immigration minorities A strong national identity does not necessarily result in a strong sense of belonging. That national identity may have precluded other senses of belonging such as religion, or even close community ties and interactions.\n", "title": "" }, { "docid": "64aa77aa17441f5f9cd331fd3fbe6f5e", "text": "employment international global society immigration minorities Education is a crossover point; migrating for education may be about a sense of belonging but it is also an opportunity. A conservative culture that does not educate young women is not providing them with an opportunity that is available elsewhere.\n", "title": "" }, { "docid": "6ac250e58500f536ce6b70eb602eab55", "text": "employment international global society immigration minorities It seems hardly likely that feeling undervalued for their skills is a main reason for moving. When moving abroad many will instead encounter racism and concern about increasing numbers of migrants which would at least balance against being undervalued at home. They go instead because the ‘value’ of their skills is monetary – therefore about opportunities – not in terms of reputation and confidence or belonging.\n", "title": "" }, { "docid": "1583fd3ecaad82e7536bb9c0776a47e2", "text": "employment international global society immigration minorities If these young intellectuals really are politically conscious then they should desire to stay in their native country and change its system of government. It is the intellectuals who are needed to create, and then grow a democracy so that it represents the whole spectrum of opinion within the country and respects intellectual freedoms.\n", "title": "" }, { "docid": "c346bd8c8417e2de508059e80bb68ceb", "text": "employment international global society immigration minorities Intellectual migrants do not necessarily discard a traditional value to replace it with a corresponding western value. For example, they seldom renounce their religion in favor of a western one (3).\n\nA weaker sense of nationalism does not have to mean greater internationalism. Instead there may be greater ties to traditional culture, to a region or village. There may be fewer ties to nation, but throughout much of the developing world religion has a far greater adherence than in the west. Thus with a couple of exceptions (Communist states such as China and North Korea) it is more developed countries that are mostly non religious.(12)\n", "title": "" }, { "docid": "0921e01d55dfd6c684d4845b95dd9064", "text": "employment international global society immigration minorities If there is really no freedom then these migrants will be asylum seekers and refugees not true intellectual migrants by choice.\n\nEven if there is some alienation from their own native culture these migrants are still travelling to a much more alien culture. This being the case it seems unlikely that alienation is the main cause. 
Rather they are travelling to a culture that is more alien because they believe there are better opportunities there.\n", "title": "" }, { "docid": "fdc0f223e39a7aecaf5fdc512263aa13", "text": "employment international global society immigration minorities Many migrants come from countries with a strong sense of belonging\n\nMany migrants come from countries with a strong sense of belonging, national identities, and political consciousness. European migrants are a case in point: in 2016, there were 19.3 million residing in a different EU Member State from the one where they were born (7). With migration an issue even from countries with strong national identities, it is clear that that identity is not the major driver of movement.\n", "title": "" }, { "docid": "ffd1df9e1f3b8aa38579a5a392ee2e08", "text": "employment international global society immigration minorities Developing countries have high unemployment rates and need to invest in job creation\n\nDeveloping countries invest in education and job creation because they have high unemployment rates (6). They need to address the lack of opportunities in order to improve their economy and reduce migration. This is as much the case for those at graduate level as for those who have less of an education. Africa’s 668 universities produce almost 10 million graduates a year, but only half find work.(14) It should therefore be no surprise that many migrate overseas for opportunities.\n", "title": "" }, { "docid": "7db79888d6b1ac0a26c0adac94b74e5f", "text": "employment international global society immigration minorities Intellectual women migrants outnumber intellectual men migrants\n\nThe need for belonging is greater for women than for men – Bardo and Bardo found that they miss home much more (5). On the other hand, unequal and discriminatory norms can be strong drivers of intellectual female migration (1). More young women than men now migrate for education and, in several European countries today, highly skilled migrant women outnumber highly skilled migrant men (1). Between 2000 and 2011, the number of tertiary-educated migrant women in OECD countries rose by 80%, which exceeded the 60% increase in the number of tertiary-educated migrant men. In Africa, for example, the average emigration rates of tertiary-educated women are considerably higher than those of tertiary-educated men (27.7% for women and 17.1% for men).\n", "title": "" }, { "docid": "4a6227e3793ac6c69352683ef99dcc17", "text": "employment international global society immigration minorities The inferiority complex within older generations in the developing countries affects intellectuals’ sense of belonging while in their countries\n\nAn inferiority complex still exists among the older generations in the developing countries as regards the western technical know-how and organisation. A persisting attitude of placing more confidence in the experts and specialists belonging to the developed countries than in the educated nationals of the country (3) could foster a feeling of underestimation amongst intellectuals while in their countries, and become an additional driver of the continuous intellectual migration.\n", "title": "" }, { "docid": "da05cf90f18e2436072174f33d923a83", "text": "employment international global society immigration minorities Intellectual migrants are more imbued with ideas of internationalism and universalism\n\nThe concept of nationalism as developed in Europe during the 19th century did not undergo the same evolution in the developing countries. 
Intellectuals do not identify themselves with their countries the way Europeans do. They are more imbued with ideas of internationalism and universalism than the western nationalist – for example Mohsin Hamid argues that our views of liberal values should be extended beyond nation states with their often unnatural borders. Thus, if they stay abroad after having adhered to the western way of life, they consider themselves part of the great human lot, value free movement as a basic human right, and do not necessarily suffer from complexes of disloyalty towards their home country (3).\n", "title": "" }, { "docid": "255a656359afb8c0d7f52fcfbca2bd5c", "text": "employment international global society immigration minorities Some intellectual migrants already feel a certain degree of alienation towards their national culture before leaving their country\n\nIntellectuals need stimulation, organisation, freedom, and recognition (3) that they usually struggle to find in their countries of origin. Some intellectuals from developing countries already feel a certain degree of alienation towards their national culture before leaving their own country (3). This may be a result of government policy, a lack of intellectual freedom, or a generally conservative culture. Thus, they experience a strong lack of intellectual belonging despite the economic opportunities arising from their countries’ investments.\n\nFamily ties also play a strong role in aggravating or mitigating alienation. This is why it is the young, who don’t have dependents themselves, who are often the likeliest to migrate.\n", "title": "" }, { "docid": "9507ac868d83a8515fef53528ffc1227", "text": "employment international global society immigration minorities Most young intellectuals from developing countries are politically conscious and want to be \"actors\" in policy making\n\nYoung intellectuals from developing countries are to a very large extent politically conscious and active. They want to be \"actors\" and not \"spectators\" in policy making, all the more so when their specialism is impacted by government policy. Those who grow up in an autocratic, or not very democratic, state are likely to want to go where they can use their voice. Even in many democracies, intellectuals’ often largely liberal views on both government and teaching are not readily approved by the conservative regimes of their countries, where the older generation is usually in power and constitutes a barrier against their progress.\n", "title": "" } ]
arguana
d789a7aac170f9c649541469c50a23d7
High salaries incentivize people to do difficult or unpleasant jobs Some jobs are extremely difficult or unpleasant. Consider a doctor, who trains for many years, often unpaid, in order to do their job – and the average doctor’s salary in the USA is close to the proposed cap, and surpasses it with merely 5 years experience1. Or consider a sewage worker or firefighter, whose job is one that many people would not want to do. High salaries are a good way of encouraging people to do these jobs; limiting the ability to pay high salaries will mean that some vital roles may be less appealing, and the job will not be done. 1 Payscale , “Salary for People with Jobs as Physicians/Doctors”, July 2011
[ { "docid": "db8da8492d5b4119ccac358cba57e260", "text": "business employment finance house would introduce mandatory salary capping There is still significant social prestige to being a doctor that will motivate people to take up the role; the same will be true for other high-paid jobs where there is a lot of training, such as lawyers. This prestige is often a key part of the reason people do the job in the first place; many doctors are paid far less than people working in business or financial services at similar levels of seniority. Finally, the unpleasant jobs mentioned typically are done for a salary well below the cap proposed, and they still have adequate people.\n", "title": "" } ]
[ { "docid": "700b689a9beb4a6a019bbcda5a67c61b", "text": "business employment finance house would introduce mandatory salary capping The effect of high salaries on levels of labor supply is likely to be marginal. People work in part due to the significant social pressure of having a job and advancing themselves comparatively against others. This motivation will still exist, as there will still be rewards to advancing your career; a salary closer to the salary cap, and the added responsibility and social (or business) standing such advancement provides. While there may be fewer people willing to work 18 hour days, 6 days a week, this work is being done because it is valuable – so the firm will need to employ more people to do it, and the work is spread over a larger number of people, possibly even increasing employment\n", "title": "" }, { "docid": "12592dc78cd696f2ad24a18aced5e0e9", "text": "business employment finance house would introduce mandatory salary capping Under this policy, companies will not be able to spend their profits on inflating their salaries, and so are more likely to have a long-term outlook to the company. The best way to advance long-term interests is through research; it is possible that all their excess profit will be spent on this. While entrepreneurs may be driven by profit, the salary proposed is sufficiently high that it can be aspired to; most entrepreneurs will still be motivated by it, as they seldom already have a job that already pays so much.\n", "title": "" }, { "docid": "1cb6a0d2ca3538d9952caaea819f0f8b", "text": "business employment finance house would introduce mandatory salary capping The significant difficulty of moving country, such as leaving behind friends and family, and leaving behind an area (or even language) you know well, are likely to limit emigration. As for immigration, the skill set is typically already within the country; if not, this policy may encourage a focus on an educational system to ensure it is. Finally, if the argumentation about equality leading to a better and happier society is correct, this in itself will attract immigrants to high-paying jobs.\n", "title": "" }, { "docid": "e78704e128378e0a51ab0031e022e891", "text": "business employment finance house would introduce mandatory salary capping This price-lowering effect is most likely to be felt in those industries where the majority of the costs are in wages; these industries are likely to be service based industries. Individuals, especially poorer individuals, rarely buy services, so the effect on the poorest is likely to be limited.\n", "title": "" }, { "docid": "b68aca89868769e0168293f8e5f4a05e", "text": "business employment finance house would introduce mandatory salary capping Social tensions are greatly exaggerated, and only actually felt when a specific crisis and against a very specific figurehead (in the case of Fred Goodwin, an entirely isolated example, the large amounts of media coverage he received for his role in the banking crisis). Furthermore, feelings of inferiority are typically reasoned away by people, who explain other's greater income in terms of their willingness to work hard, or being lucky. The feeling of superiority over others can be considered a motivator that encourages some people to work (See Opposition Argument One below). 
Finally, Sweden may be disanalogous as an example as they (and other Scandinavian countries) have a strong collectivist spirit that may be lacking in other countries.\n", "title": "" }, { "docid": "5c55661331fe5f0232a0083781c78600", "text": "business employment finance house would introduce mandatory salary capping It is likely that foreign demand will displace national demand for properties, especially in key city areas (such as New York or London). Furthermore, having a nice house is one of the strongest incentives to have a job and be a productive tax-paying member of society; loss of this incentive may decrease a society's output level and tax revenue.\n", "title": "" }, { "docid": "d365d5194dcee1a2fe8f320b45c6fa22", "text": "business employment finance house would introduce mandatory salary capping It is equally likely that money is a significant motivator in productivity, and that limiting wages will therefore harm productivity.\n", "title": "" }, { "docid": "7b07909098d3ba7e786033b636b5f2a4", "text": "business employment finance house would introduce mandatory salary capping Some evasion of this is inevitable; figures show $2 trillion of unreported income in the US in 20081. Furthermore, international cooperation is unlikely, as each country has a strong incentive to renege on agreements to attract more talented people to their country.\n\n1 E . Feige, \"America's Underground Economy: Measuring the Size, Growth and Determinants of Income Tax Evasion in the U.S\", January 2011\n", "title": "" }, { "docid": "cda13b9e0460cd6f83c24c2856e69384", "text": "business employment finance house would introduce mandatory salary capping This motion will lead to people leaving the country, and will limit the intake of skilled workers\n\nMany industries, especially at the highest paying end, rely on people of various nationalities. This is especially true in places seen to be financial centers of the world, such as New York, London and Tokyo – for example, 175,000 professional or managerial roles were given to immigrants in the UK in 20041. When a policy such as this is instigated, many people will leave to other countries that do not have such a limit, especially if they are initially from another country. Furthermore, it will be difficult for a country to attract talent while this policy is in effect, as the significant difficulty moving country involves, such as leaving friends and family behind, cannot be compensated for by a higher income.\n\n1 John Salt and Jane Millar, Office of National Statistics “Foreign Labour in the United Kingdom: current patterns and trends”, October 2006\n", "title": "" }, { "docid": "65bdfe4b0ff01490855c981b7bc3332f", "text": "business employment finance house would introduce mandatory salary capping High salaries incentivize people to take risks and undertake research\n\nMany entrepreneurs are driven by profit. This is the reason that people take out large loans from banks, often with their home as security, and use it to set up a business; the hope of profit and a better life. Without that incentive, the risk has a far lower reward, and therefore will appear to be not worth it. Entrepreneurs not only give others jobs, but stimulate the economy with new ideas and business practices that can spill over into other areas of the economy. Even within businesses that are already established, this policy will be problematic. For example, why would researchers at a pharmaceutical company try to develop a new drug if they realize they can't financially benefit from it? 
GlaxoSmithKline spent over $6bn dollars on research in 2010 alone1. This policy could limit such research into the type of technology (or medicine) that advances society.\n\n1 FierceBiotech , \"GlaxoSmithKline: The World's Biggest R+D Spenders\", March 2011\n", "title": "" }, { "docid": "6162e314525c04560dc8ed62cd13f338", "text": "business employment finance house would introduce mandatory salary capping High salaries incentivize people to work hard\n\nPeople respond to incentives, and one of the most direct incentives is a financial one. Higher salaries encourage people to deploy their labor. This benefits society by increasing tax revenues that can be spent on redistributive policies; for example, consider the much maligned investment banking profession. It is not uncommon for investment bankers to work 14 to 18 hour days, and to work at weekends; it is unlikely they would do this without the incentive of high salaries and bonuses, at least in the long run. The taxation on financial service providers (that rely on such hard work) and the workers themselves is significant; in 2010 in the UK, it was 11.2% of total tax receipts1. Furthermore, the deployment of labor may lead to more supporting workers being needed and therefore job creation.\n\n1 PWC , \"The Total Tax Contribution of UK Financial Services\", December 2010\n", "title": "" }, { "docid": "03fb5199c2f994c9d8ba13a348a01acb", "text": "business employment finance house would introduce mandatory salary capping This will distribute wealth more evenly\n\nAs a result of having to pay important directors and employees a lower wage, businesses will be able to produce their goods and services for a lower cost, and sell therefore sell them for a lower price. This will lead to a more equitable distribution of wealth, as the poorest will become relatively richer, as prices will fall. This will also be true for small businesses, which will be able to obtain cheaper legal and financial advice and business consultancy, and are therefore more likely to succeed. Sports provide a good example of this. In major league baseball salaries for the players more than doubled in real terms between 1992 and 2002 while ticket prices rose 50%. As players wages take more than 50% of teams revenues a cap would mean a significant cut in costs that could be passed on to the consumer.1\n\n1 Michael J. Haupert, \"The Economic History of Major League Baseball\", EH.net Encyclopedia, December 3rd 2007\n", "title": "" }, { "docid": "ed3083780f1b7d7b5d3f9544d6487ffa", "text": "business employment finance house would introduce mandatory salary capping Equality is in and of itself a good thing\n\nFirstly, it limits social tension that may arise due to public dissatisfaction with high wages; see the attacks on the famous banker Sir Fred Goodwin in the UK1. Secondly, people may feel that society recognizes them as being more equal, increasing the perceived self-worth of many, avoiding feelings of inferiority and worry about their social worth, and making them feel closer to other people. See, for example, Sweden, which has the lowest Gini Coefficient (indicating low levels of inequality) in the world, and also some of the highest levels of GDP per capita, life expectancy and literacy rates, and low levels of crime and obesity2. 
Furthermore, a Forbes report suggests Sweden is one of the happiest countries in the world (along with Denmark, Finland and Norway, 3 other countries with a low Gini Coefficient)3.\n\n1 BBC News Website, 25th March 2009 2 CIA World Factbook, 20th July 2011 3 Forbes , \"The World's Happiest Countries\", July 14th 2010\n", "title": "" }, { "docid": "008a4165a066aef29afa4ca96af1ed94", "text": "business employment finance house would introduce mandatory salary capping This will enable people to better choose their jobs\n\nWhen wages are better standardized across professions, people are less likely to feel socially pressured into seeking out a higher paid job. As such, they are more likely to choose their job on the basis of other factors, such as how much they enjoy the job, or how ethical the working practices of a company are. This will lead to happier, and hence more productive, employees.\n", "title": "" }, { "docid": "f2a315392212071bc34ec12fd0bfce83", "text": "business employment finance house would introduce mandatory salary capping Systems for implementation\n\nThis system would be best implemented by imposing a mandatory 100% tax on all personal income over $150,000, and all bonuses over $30,000. This means that some revenue could still be raised from this if people did continue to pay large salaries and bonuses, although they are unlikely to do so. Furthermore, it would be best implemented through international cooperation, to limit the opportunity of one country to be able to offer higher salaries and poach talented individuals. Countries may agree to this as it prevents a 'race to the top' in salaries, where companies have to offer more and more money to attract the best people.\n", "title": "" }, { "docid": "d3b7171981803a5d2ef2dbc612784f13", "text": "business employment finance house would introduce mandatory salary capping This will limit the control of the rich over key scarce resources\n\nSome resources –most notably housing – are very important to large numbers of people, and owning them gives people a great deal of happiness. This policy will limit richer people owning several properties while others live in rented accommodation or smaller houses, as price competition for such properties will be less intense, and poorer people will be better able to compete through savings. Estimates in 2005 suggested there were 6.8million second homes in the USA1.This is a good thing, as it is likely that a person (or family) values their first property more than another person values their second property, known as the law of diminishing marginal returns. This is perhaps the best example of the ways in which inequality leads to worse outcomes for society.\n\n1 E . Belsky, “Multiple-Home Ownership and the Income Elasticity of Housing Demand”, October 2006\n", "title": "" } ]
arguana
cf65b2eb6ab27ad2753fda210c7f9c74
Collective Bargaining is Especially Necessary in the Case of Natural Monopolies Many public industries exist as public industries because they are natural monopolies. For example, rail travel, which is often public in Western Liberal democracies, is a sector in which it makes no sense to build multiple railway lines across the country, each for a different company, when one would simply be more efficient. A similar case can be made for things such as public utilities. As such, these sectors often only have a single, often public company working in that sector. In the case where there is a monopolist, the workers in the sector often have no other employers that they can reasonably find that require their skills, so for example, teachers are very well qualified to teach, however, are possibly not as qualified to deal with other areas and as such will find difficulty moving to another profession. As such, the monopolist in this area has the power to set wages without losing a significant number of employees. Further, in many of these industries strike action will not be used, for example because teachers have a vocational, almost fiduciary relationship with their students and don’t wish to see them lose out due to a strike. [1] [1] “Monopoly Power.” http://en.wikipedia.org/wiki/Monopoly
[ { "docid": "f4010c5213ff6c7714a35d2cdd6f7dc4", "text": "economic policy employment house would abolish collective bargaining rights The opposition argument here is simply a case against natural monopolies. In many Western Liberal democracies, advances in technology have enabled natural monopolies on telecoms and public transport to be broken down. A wide range of necessary public services- such as telecoms and power generation- now function as part of a competitive market. As such, it is feasible that the state could simply deal with this problem by breaking down other natural monopolies in the same way.\n\nEven if the state acts as a monopolist in some industries, public sector workers often have transferrable skills which mean they can move to other industries without that much trouble. For example, a public prosecutor will have acquired professional skills that enable a relatively quick transition into private or commercial civil practice. [1]\n\n[1] “Identifying the Transferable skills of a Teacher.” North Central College. http://northcentr\n", "title": "" } ]
[ { "docid": "f40f4043d78ef96aa41ba652acdb77a4", "text": "economic policy employment house would abolish collective bargaining rights Even if collective bargaining leads to a workforce that is better able to communicate their ideas, it also leads to a situation as mentioned within the proposition arguments that results in unions having significantly more power over their wages and the government than in other situations. This is problematic because it leads to consequences where other unions feel that they should have the same powers as public unions and can hence lead to volatility in the private sector as a result.\n\nFurther, given that often the negotiators that work for public unions are often aware of the political power of the public workers, negotiations with public unions often lead to strike action due to the fact that it is likely that the public will be sympathetic to the public workers. As such, allowing public workers to bargain collectively leads to situations that are often much worse for the public.\n\nFurther, a lot of opposition’s problems with a lack of collective bargaining can simply be dealt with through implementing a more sensitive and understanding feedback process among workers. If a worker for example raises an issue which might affect a large number of workers, it should be fairly simple for public companies to take polls of workers to understand the gravity of the problem. [1]\n\n[1] Rabin, Jack, and Dodd, Don, “State and Local Government Administration”, New York: Marcel Dekker Inc 1985, p390\n", "title": "" }, { "docid": "4f5a88f9456d798541c394e2ceb2a7f9", "text": "economic policy employment house would abolish collective bargaining rights As discussed in the first proposition side argument, we can curtail the rights of individuals if we see that those rights lead to a large negative consequence for the state. In this situation proposition is happy to let some public sector workers feel slightly disenfranchised if it leads to fewer strikes and a situation where public sector workers are not paid too much, then the net benefit to society is such that the slight loss in terms of consistency of rights is worth taking instead. [1]\n\n[1] Davey, Monica, “Wisconsin Senate Limits Bargaining by Public Workers”, The New York Times, 9 March 2011, http://www.nytimes.com/2011/03/10/us/10wisconsin.html?pagewanted=all\n", "title": "" }, { "docid": "b0fab1801e383db4e2f328da8be15d51", "text": "economic policy employment house would abolish collective bargaining rights The opposition argument here is simply a case against natural monopolies. In many Western Liberal democracies, advances in technology have enabled natural monopolies on telecoms and public transport to be broken down. A wide range of necessary public services- such as telecoms and power generation- now function as part of a competitive market. As such, it is feasible that the state could simply deal with this problem by breaking down other natural monopolies in the same way.\n\nEven if the state acts as a monopolist in some industries, public sector workers often have transferrable skills which mean they can move to other industries without that much trouble. For example, a public prosecutor will have acquired professional skills that enable a relatively quick transition into private or commercial civil practice. [1]\n\n[1] “Identifying the Transferable skills of a Teacher.” North Central College. 
http://northcentralcollege.edu/documents/student_life/Teacher%20Skills_Skills%20Assessment.pdf\n", "title": "" }, { "docid": "3c855f7935df3a215e25ba1f3e32a3ad", "text": "economic policy employment house would abolish collective bargaining rights The public sector being paid extra is something that is acceptable and necessary within society. Workers within the public sector often fulfill roles in jobs that are public goods. Such jobs provide a positive externality for the rest of society, but would be underprovided by the free market. For example, education would likely be underprovided, particularly for the poorest, by the free market but provides a significant benefit to the public because of the long term benefits an educated populace provides. [I1] In healthcare the example of the United States shows that private providers will never provide to those who are unable to afford it with nearly 50million people without health insurance. [1]\n\nAlthough the average pay received by government employees tends to be higher, the peak earnings potential of a government position is significantly lower than that of other professions. Workers who chose to build long term careers within the public sector forgo a significant amount of money, and assume a heavier workload, in order to serve the needs of society and play a part in furthering its aspirations. As such, and owing to the fact that the people who do these jobs often provide economic benefit beyond what their pay would encompass in the private sector, it makes sense that they be paid more in the public sector. This is because their work benefits the people of the state and as such the state as a whole benefits significantly more from their work. [2]\n\n[1] Christie, Les, “Number of people without health insurance climbs”, CNNmoney, 13 September 2011, http://money.cnn.com/2011/09/13/news/economy/census_bureau_health_insura...\n\n[2] “AS Market Failure.” Tutor2u. http://tutor2u.net/economics/revision-notes/as-marketfailure-positive-externalities.html\n", "title": "" }, { "docid": "ebea3280c96f1e85d53dd1656f9da0af", "text": "economic policy employment house would abolish collective bargaining rights Collective bargaining might hurt the democratic process due to its political nature, but the alternative is worse. Without collective bargaining it is incredibly difficult for public sector workers to get across their ideas of what their pay should be to their employers. This leads to worse consequences because public sector workers who feel underpaid or overworked will often move to the private sector for better job opportunities in the future as well as a better collective bargaining position. Further, those public sector workers that do stay will be unhappy in their positions and will likely do a worse job at work.\n\nGiven that this is true and the fact that public sector workers often choose to do their jobs out of a sense of duty or love for the profession, it is fair that the taxpayers should be placed in a position where they are required to trust the public sector and the politicians to work out deals that end up being in favour of the entire state, not just a small minority. [1]\n\n[1] Bloomberg, Michael. “Limit Pay, Not Unions.” New York Times. 
27/02/2011 http://www.nytimes.com/2011/02/28/opinion/28mayor.html?_r=1\n", "title": "" }, { "docid": "ebea3280c96f1e85d53dd1656f9da0af", "text": "economic policy employment house would abolish collective bargaining rights Collective bargaining might hurt the democratic process due to its political nature, but the alternative is worse. Without collective bargaining it is incredibly difficult for public sector workers to get across their ideas of what their pay should be to their employers. This leads to worse consequences because public sector workers who feel underpaid or overworked will often move to the private sector for better job opportunities in the future as well as a better collective bargaining position. Further, those public sector workers that do stay will be unhappy in their positions and will likely do a worse job at work.\n\nGiven that this is true and the fact that public sector workers often choose to do their jobs out of a sense of duty or love for the profession, it is fair that the taxpayers should be placed in a position where they are required to trust the public sector and the politicians to work out deals that end up being in favour of the entire state, not just a small minority. [1]\n\n[1] Bloomberg, Michael. “Limit Pay, Not Unions.” New York Times. 27/02/2011 http://www.nytimes.com/2011/02/28/opinion/28mayor.html?_r=1\n", "title": "" }, { "docid": "7141405d8caa2983f1b20477866b5935", "text": "economic policy employment house would abolish collective bargaining rights Collective bargaining is considered a right because of the great benefit that it provides. Specifically, whilst freedom of association might not allow people to be privy to the negotiation process, when a large enough group of people form together and make a statement regarding their opinion, it is profitable for those in power to listen to them.\n\nCollective bargaining in this situation is a logical extension of that. Given that public sector workers are intrinsic to the continued success of the state, it thus makes sense that the state gives them a platform to make their views in a clear and ordered fashion, such that the state can take them into account easily. [1]\n\nFurther, the knowledge that such a right exists causes unions to act in a way which is more predictable. Specifically, a right to unionise with reduce the likelihood that state employees will engage in strike action. Under existing union law, groups of employees are able to compel a state employer to hear their demands, and to engage in negotiations. Indeed, they may be obliged to do so before they commence strike action. If the resolution were to pass, associations of state employees would be compelled to use strikes as a method of initiating negotiation. Under the status quo, strikes are used as a tactic of last resort against an intractable opponent or as a demonstration of the support that a union official’s bargaining position commands amongst the Union’s rank-and-file members.\n\n[1] Bloomberg, Michael. “Limit Pay, Not Unions.” New York Times. 27/02/2011 http://www.nytimes.com/2011/02/28/opinion/28mayor.html?_r=1\n", "title": "" }, { "docid": "b5d3e7ad766595ee81c760c9db30e73c", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining is a Right.\n\nCollective bargaining is a right. If the state allows freedom of association, individuals will gather together and exchange their ideas and views as a natural consequence of this freedom. 
Further, free association and free expression allows groups to then select a representative to express their ideas in a way that the individuals in the group might not be able to. In preventing people from using this part of their right to assembly, we weaken the entire concept of the right to assembly. The point of the right to assembly is to allow the best possible representation for individuals. When a group of individuals are prevented from enjoying this right then it leads to those individuals feeling isolated from the rest of society who are able to enjoy this right.\n\nThis is particularly problematic in the case of public sector workers as the state that is isolating them also happens to be their employer. This hurts the way that people in the public sector view the state that ideally is meant to represent them above all as they actively contribute to the well being of the state. [1]\n\n[1] Bloomberg, Michael. “Limit Pay, Not Unions.” New York Times. 27/02/2011 http://www.nytimes.com/2011/02/28/opinion/28mayor.html?_r=1\n", "title": "" }, { "docid": "dd82b14886501a5b7c0d022f906fdb40", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining is Needed to Voice Opinion\n\nCollective bargaining is needed by people in any job. Within any firm there exist feedback structures that enable workers to communicate with managers and executive decision makers. However, there are some issues which affect workers significantly, but run against the principles of profit, or in this case the overall public good that the state seeks to serve.\n\nIn this situation, a collection of workers are required. This is primarily because if suggested changes go against public interest then a single worker requesting such a change is likely to be rejected. However, it is the indirect benefit to public interest through a workforce that is treated better that must also be considered. But indirect benefit can only truly occur if there are a large number of workers where said indirect benefit can accrue.\n\nSpecifically, indirect benefit includes the happiness of the workforce and thus the creation of a harder working workforce, as well as the prevention of brain drain of the workforce to other professions. When a single person is unhappy for example, the effect is minimal, however if this effect can be proved for a large number of people then an adjustment must be made.\n\nIn order for these ideas to be expressed, workers can either engage in a collective bargaining process with their employer, or take more drastic action such as strikes or protests to raise awareness of the problem. Given that the alternate option is vastly more disruptive, it seems prudent to allow people to do collectively bargain. [1]\n\n[1] “Importance of Collective Bargaining.” Industrial relations. http://industrialrelations.naukrihub.com/importance-of-collective-bargaining.html\n", "title": "" }, { "docid": "cebbf40416ddc63d3b8ea5eab7910e91", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining is Especially Necessary in the Case of Natural Monopolies\n\nMany public industries exist as public industries because they are natural monopolies. For example, rail travel, which is often public in Western Liberal democracies, is a sector in which it makes no sense to build multiple railway lines across the country, each for a different company, when one would simply be more efficient. 
A similar case can be made for things such as public utilities. As such, these sectors often only have a single, often public company working in that sector.\n\nIn the case where there is a monopolist, the workers in the sector often have no other employers that they can reasonably find that require their skills, so for example, teachers are very well qualified to teach, however, are possibly not as qualified to deal with other areas and as such will find difficulty moving to another profession. As such, the monopolist in this area has the power to set wages without losing a significant number of employees. Further, in many of these industries strike action will not be used, for example because teachers have a vocational, almost fiduciary relationship with their students and don’t wish to see them lose out due to a strike. [1]\n\n[1] “Monopoly Power.” http://en.wikipedia.org/wiki/Monopoly\n", "title": "" }, { "docid": "c244d194e073d033ae332bcde7fdbe92", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining Leads to an Overpaid Public Sector\n\nThe public sector is often significantly overpaid. The workers within the public sectors of Western liberal democracies often get paid more than people of equal education and experience who are employed in the private sector. In the United States there is a salary premium of 10-20 percent in the public sector. This means that there is likely a waste of resources as these people are being paid more than they should be by the government. [1]\n\nThe reason this happens is that collective bargaining means that workers can often, through the simple idea that they can communicate with the government and have a hand in the decision making process, make their demands much more easily.\n\nFurther, governments in particular are vulnerable during negotiations with unions, due their need to maintain both their political credibility and the cost effectiveness of the services they provide. This is significantly different to private enterprise where public opinion of the company is often significantly less relevant. As such, public sector workers can earn significantly more than their equally skilled counterparts in the private sector. This is problematic because it leads to a drain of workers and ideas from the private sector to the public. This is, in and of itself, problematic because the public sector, due to being shackled to the needs of public opinion often take fewer risks than the private sector and as such results in fewer innovations than work in the private sector.\n\n[1] Biggs, Andrew G. “Why Wisconsin Gov. Scott Walker Is Right About Collective Bargaining.” US News. 25/02/2011 http://www.usnews.com/opinion/articles/2011/02/25/why-wisconsin-gov-scott-walker-is-right-about-collective-bargaining\n", "title": "" }, { "docid": "7bfed64886b051b81467b1e0a5435ea5", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining Hurts the Democratic Process\n\nThe bargain between normal unions and private enterprise involves all parties being brought to the table and talking about the issues that they might have. However, the public sector represents the benefits of taxpayers, the politicians and the unions. 
The power that unions exercises means that negotiations can happen without the consent or involvement of the public sector’s stakeholders, the public.\n\nEven though power in a democracy is usually devolved to the politicians for this purpose, given the highly politicised nature of union negotiations, government office-holders who supervise union negotiations may act inconsistently with the mandate that the electorate have given them. This is because public unions often command a very large block of voters and can threaten politicians with this block of voters readily. This is not the same as a private business where officials aren’t elected by their workers. As such, collective bargaining rights for public union undermine the ability of taxpayers to dictate where their money is being spent significantly. [1]\n\n[1] “Union Bargaining Just A Dream For Many Gov Workers.” Oregan Herald. 27/02/2011 http://www.oregonherald.com/news/show-story.cfm?id=234947\n", "title": "" }, { "docid": "9903acf8626cfd6c60958908b5c8d075", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining is Not a Right\n\nWhilst the freedom of association exists under the state and it is true that people should be allowed to communicate with one another and form groups to forward their personal and political interests, it is not true that the freedom of association automatically grants access to the decision making process.\n\nUnions in this instance are problematic because whilst other groups do not have access to special privileges, unions are able to exert a significant and disproportionate amount of influence over the political process through the use of collective bargaining mechanisms. This argument applies to private unions as well, although to a lesser extent, and the banning of collective bargaining for private unions would be principally sound. In the case of unions in the private sector they can cause large amounts of disruption which has a large knock on impact on the economy giving leverage over politicians for whom the economy and jobs are always important issues. For example unions in transport in the private sector are just as disruptive as in the public sector. Even more minor businesses can be significant due to being in supply or logistics chains that are vital for important parts of the economy. [1] The access to the decision making process that unions are granted goes above and beyond the rights that we award to all other groups and as such this right, if it can be called one at all, can easily be taken away as it is the removal of an inequality within our system.\n\nFurther, even if collective bargaining were to be considered a “right,” the government can curtail the rights of individuals and groups of people should it feel the harm to all of society is great enough. We see this with the limits that we put on free speech such that we may prevent the incitement of racial hatred. 
[2]\n\n[1] Shepardson, David, “GM, Ford warn rail strike could cripple auto industry”, The Detroit News, 30 November 2011, http://www.detnews.com/article/20111130/AUTO01/111300437/GM-Ford-warn-rail-strike-could-cripple-auto-industry\n\n[2] Denholm, David “Guess What: There is no ‘right’ to collective bargaining.” LabourUnionReport.com 21/02/2011 http://www.laborunionreport.com/portal/2011/02/guess-what-there-is-no-right-to-collective-bargaining/\n", "title": "" }, { "docid": "0ad64f72a930d01cb938551f32c554d8", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining is Needed to Voice Opinion\n\nThe bargain between normal unions and private enterprise involves all parties being brought to the table and talking about the issues that they might have. However, the public sector represents the benefits of taxpayers, the politicians and the unions. The power that unions exercises means that negotiations can happen without the consent or involvement of the public sector’s stakeholders, the public.\n\nEven though power in a democracy is usually devolved to the politicians for this purpose, given the highly politicised nature of union negotiations, government office-holders who supervise union negotiations may act inconsistently with the mandate that the electorate have given them. This is because public unions often command a very large block of voters and can threaten politicians with this block of voters readily. This is not the same as a private business where officials aren’t elected by their workers. As such, collective bargaining rights for public union undermine the ability of taxpayers to dictate where their money is being spent significantly. [1]\n\n[1] “Union Bargaining Just A Dream For Many Gov Workers.” Oregan Herald. 27/02/2011 http://www.oregonherald.com/news/show-story.cfm?id=234947\n", "title": "" } ]
arguana
cf65b2eb6ab27ad2753fda210c7f9c74
Collective Bargaining is Especially Necessary in the Case of Natural Monopolies Many public industries exist as public industries because they are natural monopolies. For example, rail travel, which is often public in Western Liberal democracies, is a sector in which it makes no sense to build multiple railway lines across the country, each for a different company, when one would simply be more efficient. A similar case can be made for things such as public utilities. As such, these sectors often only have a single, often public company working in that sector. In the case where there is a monopolist, the workers in the sector often have no other employers that they can reasonably find that require their skills, so for example, teachers are very well qualified to teach, however, are possibly not as qualified to deal with other areas and as such will find difficulty moving to another profession. As such, the monopolist in this area has the power to set wages without losing a significant number of employees. Further, in many of these industries strike action will not be used, for example because teachers have a vocational, almost fiduciary relationship with their students and don’t wish to see them lose out due to a strike. [1] [1] “Monopoly Power.” http://en.wikipedia.org/wiki/Monopoly
[ { "docid": "b0fab1801e383db4e2f328da8be15d51", "text": "economic policy employment house would abolish collective bargaining rights The opposition argument here is simply a case against natural monopolies. In many Western Liberal democracies, advances in technology have enabled natural monopolies on telecoms and public transport to be broken down. A wide range of necessary public services- such as telecoms and power generation- now function as part of a competitive market. As such, it is feasible that the state could simply deal with this problem by breaking down other natural monopolies in the same way.\n\nEven if the state acts as a monopolist in some industries, public sector workers often have transferrable skills which mean they can move to other industries without that much trouble. For example, a public prosecutor will have acquired professional skills that enable a relatively quick transition into private or commercial civil practice. [1]\n\n[1] “Identifying the Transferable skills of a Teacher.” North Central College. http://northcentralcollege.edu/documents/student_life/Teacher%20Skills_Skills%20Assessment.pdf\n", "title": "" } ]
[ { "docid": "f40f4043d78ef96aa41ba652acdb77a4", "text": "economic policy employment house would abolish collective bargaining rights Even if collective bargaining leads to a workforce that is better able to communicate their ideas, it also leads to a situation as mentioned within the proposition arguments that results in unions having significantly more power over their wages and the government than in other situations. This is problematic because it leads to consequences where other unions feel that they should have the same powers as public unions and can hence lead to volatility in the private sector as a result.\n\nFurther, given that often the negotiators that work for public unions are often aware of the political power of the public workers, negotiations with public unions often lead to strike action due to the fact that it is likely that the public will be sympathetic to the public workers. As such, allowing public workers to bargain collectively leads to situations that are often much worse for the public.\n\nFurther, a lot of opposition’s problems with a lack of collective bargaining can simply be dealt with through implementing a more sensitive and understanding feedback process among workers. If a worker for example raises an issue which might affect a large number of workers, it should be fairly simple for public companies to take polls of workers to understand the gravity of the problem. [1]\n\n[1] Rabin, Jack, and Dodd, Don, “State and Local Government Administration”, New York: Marcel Dekker Inc 1985, p390\n", "title": "" }, { "docid": "4f5a88f9456d798541c394e2ceb2a7f9", "text": "economic policy employment house would abolish collective bargaining rights As discussed in the first proposition side argument, we can curtail the rights of individuals if we see that those rights lead to a large negative consequence for the state. In this situation proposition is happy to let some public sector workers feel slightly disenfranchised if it leads to fewer strikes and a situation where public sector workers are not paid too much, then the net benefit to society is such that the slight loss in terms of consistency of rights is worth taking instead. [1]\n\n[1] Davey, Monica, “Wisconsin Senate Limits Bargaining by Public Workers”, The New York Times, 9 March 2011, http://www.nytimes.com/2011/03/10/us/10wisconsin.html?pagewanted=all\n", "title": "" }, { "docid": "f4010c5213ff6c7714a35d2cdd6f7dc4", "text": "economic policy employment house would abolish collective bargaining rights The opposition argument here is simply a case against natural monopolies. In many Western Liberal democracies, advances in technology have enabled natural monopolies on telecoms and public transport to be broken down. A wide range of necessary public services- such as telecoms and power generation- now function as part of a competitive market. As such, it is feasible that the state could simply deal with this problem by breaking down other natural monopolies in the same way.\n\nEven if the state acts as a monopolist in some industries, public sector workers often have transferrable skills which mean they can move to other industries without that much trouble. For example, a public prosecutor will have acquired professional skills that enable a relatively quick transition into private or commercial civil practice. [1]\n\n[1] “Identifying the Transferable skills of a Teacher.” North Central College. 
http://northcentr\n", "title": "" }, { "docid": "3c855f7935df3a215e25ba1f3e32a3ad", "text": "economic policy employment house would abolish collective bargaining rights The public sector being paid extra is something that is acceptable and necessary within society. Workers within the public sector often fulfill roles in jobs that are public goods. Such jobs provide a positive externality for the rest of society, but would be underprovided by the free market. For example, education would likely be underprovided, particularly for the poorest, by the free market but provides a significant benefit to the public because of the long term benefits an educated populace provides. [I1] In healthcare the example of the United States shows that private providers will never provide to those who are unable to afford it with nearly 50million people without health insurance. [1]\n\nAlthough the average pay received by government employees tends to be higher, the peak earnings potential of a government position is significantly lower than that of other professions. Workers who chose to build long term careers within the public sector forgo a significant amount of money, and assume a heavier workload, in order to serve the needs of society and play a part in furthering its aspirations. As such, and owing to the fact that the people who do these jobs often provide economic benefit beyond what their pay would encompass in the private sector, it makes sense that they be paid more in the public sector. This is because their work benefits the people of the state and as such the state as a whole benefits significantly more from their work. [2]\n\n[1] Christie, Les, “Number of people without health insurance climbs”, CNNmoney, 13 September 2011, http://money.cnn.com/2011/09/13/news/economy/census_bureau_health_insura...\n\n[2] “AS Market Failure.” Tutor2u. http://tutor2u.net/economics/revision-notes/as-marketfailure-positive-externalities.html\n", "title": "" }, { "docid": "ebea3280c96f1e85d53dd1656f9da0af", "text": "economic policy employment house would abolish collective bargaining rights Collective bargaining might hurt the democratic process due to its political nature, but the alternative is worse. Without collective bargaining it is incredibly difficult for public sector workers to get across their ideas of what their pay should be to their employers. This leads to worse consequences because public sector workers who feel underpaid or overworked will often move to the private sector for better job opportunities in the future as well as a better collective bargaining position. Further, those public sector workers that do stay will be unhappy in their positions and will likely do a worse job at work.\n\nGiven that this is true and the fact that public sector workers often choose to do their jobs out of a sense of duty or love for the profession, it is fair that the taxpayers should be placed in a position where they are required to trust the public sector and the politicians to work out deals that end up being in favour of the entire state, not just a small minority. [1]\n\n[1] Bloomberg, Michael. “Limit Pay, Not Unions.” New York Times. 27/02/2011 http://www.nytimes.com/2011/02/28/opinion/28mayor.html?_r=1\n", "title": "" }, { "docid": "ebea3280c96f1e85d53dd1656f9da0af", "text": "economic policy employment house would abolish collective bargaining rights Collective bargaining might hurt the democratic process due to its political nature, but the alternative is worse. 
Without collective bargaining it is incredibly difficult for public sector workers to get across their ideas of what their pay should be to their employers. This leads to worse consequences because public sector workers who feel underpaid or overworked will often move to the private sector for better job opportunities in the future as well as a better collective bargaining position. Further, those public sector workers that do stay will be unhappy in their positions and will likely do a worse job at work.\n\nGiven that this is true and the fact that public sector workers often choose to do their jobs out of a sense of duty or love for the profession, it is fair that the taxpayers should be placed in a position where they are required to trust the public sector and the politicians to work out deals that end up being in favour of the entire state, not just a small minority. [1]\n\n[1] Bloomberg, Michael. “Limit Pay, Not Unions.” New York Times. 27/02/2011 http://www.nytimes.com/2011/02/28/opinion/28mayor.html?_r=1\n", "title": "" }, { "docid": "7141405d8caa2983f1b20477866b5935", "text": "economic policy employment house would abolish collective bargaining rights Collective bargaining is considered a right because of the great benefit that it provides. Specifically, whilst freedom of association might not allow people to be privy to the negotiation process, when a large enough group of people form together and make a statement regarding their opinion, it is profitable for those in power to listen to them.\n\nCollective bargaining in this situation is a logical extension of that. Given that public sector workers are intrinsic to the continued success of the state, it thus makes sense that the state gives them a platform to make their views in a clear and ordered fashion, such that the state can take them into account easily. [1]\n\nFurther, the knowledge that such a right exists causes unions to act in a way which is more predictable. Specifically, a right to unionise with reduce the likelihood that state employees will engage in strike action. Under existing union law, groups of employees are able to compel a state employer to hear their demands, and to engage in negotiations. Indeed, they may be obliged to do so before they commence strike action. If the resolution were to pass, associations of state employees would be compelled to use strikes as a method of initiating negotiation. Under the status quo, strikes are used as a tactic of last resort against an intractable opponent or as a demonstration of the support that a union official’s bargaining position commands amongst the Union’s rank-and-file members.\n\n[1] Bloomberg, Michael. “Limit Pay, Not Unions.” New York Times. 27/02/2011 http://www.nytimes.com/2011/02/28/opinion/28mayor.html?_r=1\n", "title": "" }, { "docid": "cebbf40416ddc63d3b8ea5eab7910e91", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining is Especially Necessary in the Case of Natural Monopolies\n\nMany public industries exist as public industries because they are natural monopolies. For example, rail travel, which is often public in Western Liberal democracies, is a sector in which it makes no sense to build multiple railway lines across the country, each for a different company, when one would simply be more efficient. A similar case can be made for things such as public utilities. 
As such, these sectors often only have a single, often public company working in that sector.\n\nIn the case where there is a monopolist, the workers in the sector often have no other employers that they can reasonably find that require their skills, so for example, teachers are very well qualified to teach, however, are possibly not as qualified to deal with other areas and as such will find difficulty moving to another profession. As such, the monopolist in this area has the power to set wages without losing a significant number of employees. Further, in many of these industries strike action will not be used, for example because teachers have a vocational, almost fiduciary relationship with their students and don’t wish to see them lose out due to a strike. [1]\n\n[1] “Monopoly Power.” http://en.wikipedia.org/wiki/Monopoly\n", "title": "" }, { "docid": "b5d3e7ad766595ee81c760c9db30e73c", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining is a Right.\n\nCollective bargaining is a right. If the state allows freedom of association, individuals will gather together and exchange their ideas and views as a natural consequence of this freedom. Further, free association and free expression allows groups to then select a representative to express their ideas in a way that the individuals in the group might not be able to. In preventing people from using this part of their right to assembly, we weaken the entire concept of the right to assembly. The point of the right to assembly is to allow the best possible representation for individuals. When a group of individuals are prevented from enjoying this right then it leads to those individuals feeling isolated from the rest of society who are able to enjoy this right.\n\nThis is particularly problematic in the case of public sector workers as the state that is isolating them also happens to be their employer. This hurts the way that people in the public sector view the state that ideally is meant to represent them above all as they actively contribute to the well being of the state. [1]\n\n[1] Bloomberg, Michael. “Limit Pay, Not Unions.” New York Times. 27/02/2011 http://www.nytimes.com/2011/02/28/opinion/28mayor.html?_r=1\n", "title": "" }, { "docid": "dd82b14886501a5b7c0d022f906fdb40", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining is Needed to Voice Opinion\n\nCollective bargaining is needed by people in any job. Within any firm there exist feedback structures that enable workers to communicate with managers and executive decision makers. However, there are some issues which affect workers significantly, but run against the principles of profit, or in this case the overall public good that the state seeks to serve.\n\nIn this situation, a collection of workers are required. This is primarily because if suggested changes go against public interest then a single worker requesting such a change is likely to be rejected. However, it is the indirect benefit to public interest through a workforce that is treated better that must also be considered. But indirect benefit can only truly occur if there are a large number of workers where said indirect benefit can accrue.\n\nSpecifically, indirect benefit includes the happiness of the workforce and thus the creation of a harder working workforce, as well as the prevention of brain drain of the workforce to other professions. 
When a single person is unhappy for example, the effect is minimal, however if this effect can be proved for a large number of people then an adjustment must be made.\n\nIn order for these ideas to be expressed, workers can either engage in a collective bargaining process with their employer, or take more drastic action such as strikes or protests to raise awareness of the problem. Given that the alternate option is vastly more disruptive, it seems prudent to allow people to do collectively bargain. [1]\n\n[1] “Importance of Collective Bargaining.” Industrial relations. http://industrialrelations.naukrihub.com/importance-of-collective-bargaining.html\n", "title": "" }, { "docid": "c244d194e073d033ae332bcde7fdbe92", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining Leads to an Overpaid Public Sector\n\nThe public sector is often significantly overpaid. The workers within the public sectors of Western liberal democracies often get paid more than people of equal education and experience who are employed in the private sector. In the United States there is a salary premium of 10-20 percent in the public sector. This means that there is likely a waste of resources as these people are being paid more than they should be by the government. [1]\n\nThe reason this happens is that collective bargaining means that workers can often, through the simple idea that they can communicate with the government and have a hand in the decision making process, make their demands much more easily.\n\nFurther, governments in particular are vulnerable during negotiations with unions, due their need to maintain both their political credibility and the cost effectiveness of the services they provide. This is significantly different to private enterprise where public opinion of the company is often significantly less relevant. As such, public sector workers can earn significantly more than their equally skilled counterparts in the private sector. This is problematic because it leads to a drain of workers and ideas from the private sector to the public. This is, in and of itself, problematic because the public sector, due to being shackled to the needs of public opinion often take fewer risks than the private sector and as such results in fewer innovations than work in the private sector.\n\n[1] Biggs, Andrew G. “Why Wisconsin Gov. Scott Walker Is Right About Collective Bargaining.” US News. 25/02/2011 http://www.usnews.com/opinion/articles/2011/02/25/why-wisconsin-gov-scott-walker-is-right-about-collective-bargaining\n", "title": "" }, { "docid": "7bfed64886b051b81467b1e0a5435ea5", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining Hurts the Democratic Process\n\nThe bargain between normal unions and private enterprise involves all parties being brought to the table and talking about the issues that they might have. However, the public sector represents the benefits of taxpayers, the politicians and the unions. The power that unions exercises means that negotiations can happen without the consent or involvement of the public sector’s stakeholders, the public.\n\nEven though power in a democracy is usually devolved to the politicians for this purpose, given the highly politicised nature of union negotiations, government office-holders who supervise union negotiations may act inconsistently with the mandate that the electorate have given them. 
This is because public unions often command a very large block of voters and can threaten politicians with this block of voters readily. This is not the same as a private business where officials aren’t elected by their workers. As such, collective bargaining rights for public union undermine the ability of taxpayers to dictate where their money is being spent significantly. [1]\n\n[1] “Union Bargaining Just A Dream For Many Gov Workers.” Oregan Herald. 27/02/2011 http://www.oregonherald.com/news/show-story.cfm?id=234947\n", "title": "" }, { "docid": "9903acf8626cfd6c60958908b5c8d075", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining is Not a Right\n\nWhilst the freedom of association exists under the state and it is true that people should be allowed to communicate with one another and form groups to forward their personal and political interests, it is not true that the freedom of association automatically grants access to the decision making process.\n\nUnions in this instance are problematic because whilst other groups do not have access to special privileges, unions are able to exert a significant and disproportionate amount of influence over the political process through the use of collective bargaining mechanisms. This argument applies to private unions as well, although to a lesser extent, and the banning of collective bargaining for private unions would be principally sound. In the case of unions in the private sector they can cause large amounts of disruption which has a large knock on impact on the economy giving leverage over politicians for whom the economy and jobs are always important issues. For example unions in transport in the private sector are just as disruptive as in the public sector. Even more minor businesses can be significant due to being in supply or logistics chains that are vital for important parts of the economy. [1] The access to the decision making process that unions are granted goes above and beyond the rights that we award to all other groups and as such this right, if it can be called one at all, can easily be taken away as it is the removal of an inequality within our system.\n\nFurther, even if collective bargaining were to be considered a “right,” the government can curtail the rights of individuals and groups of people should it feel the harm to all of society is great enough. We see this with the limits that we put on free speech such that we may prevent the incitement of racial hatred. [2]\n\n[1] Shepardson, David, “GM, Ford warn rail strike could cripple auto industry”, The Detroit News, 30 November 2011, http://www.detnews.com/article/20111130/AUTO01/111300437/GM-Ford-warn-rail-strike-could-cripple-auto-industry\n\n[2] Denholm, David “Guess What: There is no ‘right’ to collective bargaining.” LabourUnionReport.com 21/02/2011 http://www.laborunionreport.com/portal/2011/02/guess-what-there-is-no-right-to-collective-bargaining/\n", "title": "" }, { "docid": "0ad64f72a930d01cb938551f32c554d8", "text": "economic policy employment house would abolish collective bargaining rights Collective Bargaining is Needed to Voice Opinion\n\nThe bargain between normal unions and private enterprise involves all parties being brought to the table and talking about the issues that they might have. However, the public sector represents the benefits of taxpayers, the politicians and the unions. 
The power that unions exercise means that negotiations can happen without the consent or involvement of the public sector’s stakeholders, the public.\n\nEven though power in a democracy is usually devolved to the politicians for this purpose, given the highly politicised nature of union negotiations, government office-holders who supervise union negotiations may act inconsistently with the mandate that the electorate have given them. This is because public unions often command a very large block of voters and can threaten politicians with this block of voters readily. This is not the same as a private business where officials aren’t elected by their workers. As such, collective bargaining rights for public unions significantly undermine the ability of taxpayers to dictate where their money is being spent. [1]\n\n[1] “Union Bargaining Just A Dream For Many Gov Workers.” Oregon Herald. 27/02/2011 http://www.oregonherald.com/news/show-story.cfm?id=234947\n", "title": "" } ]
arguana
754a8776b330af2efd4e3d5f46904577
The social security system is unsustainable in the status quo Social Security is in Crisis. Social Security in the United States, as in most western liberal democracies, is a pay-as-you-go system and has always been so. As such, it is an intergenerational wealth transfer. The solvency of the system therefore relies on favourable demographics; particularly birth rate and longevity. In the United States the birth rate when Social Security was created was 2.3 children per woman but had risen to 3.0 by 1950. Today it is 2.06. The average life expectancy in 1935 was 63 and today it is 75. While this may be representative of an improvement in quality-of-life for many Americans, these demographic changes also indicate the increasing burden that social security systems are being put under. [1] As a result of changing demographic factors, the number of workers paying Social Security payroll taxes has gone from 16 for every retiree in 1950 to just 3.3 in 1997. This ration will continue to decline to just 2 to 1 by 2025. This has meant the tax has been increased thirty times in sixty-two years to compensate. Originally it was just 2 percent on a maximum taxable income of $300, now it is 12.4 percent of a maximum income of $65,400. This will have to be raised to 18 percent to pay for all promised current benefits, and if Medicare is included the tax will have to go to nearly 28 percent. [2] Social Security is an unsuitable approach to protecting the welfare of a retiring workforce. The social security system as it stands is unsustainable, and will place an excessive tax burden on the current working population of the USA, who will be expected to pay for the impending retirement of almost 70 million members of the “baby boomer” generation. This crisis is likely to begin in 2016 when- according to experts- more money will be paid out by the federal government in social security benefits than it will receive in payroll taxes. [3] In many ways Social Security has now just become a giant ponzi scheme. As the Cato Institute has argued: “Just like Ponzi's plan, Social Security does not make any real investments -- it just takes money from later 'investors' or taxpayers, to pay benefits to the scheme’s earlier, now retired, entrants. Like Ponzi, Social Security will not be able to recruit new "investors" fast enough to continue paying promised benefits to previous investors. Because each year there are fewer young workers relative to the number of retirees, Social Security will eventually collapse, just like Ponzi's scheme.” [4] Faced with this impending crisis, privatizing is at worst the best of the 'bad' options. It provides an opportunity to make the system sustainable and to make it fair to all generations by having everyone pay for their own retirement rather than someone else’s. [5] [1] Crane, Edward. "The Case for Privatizing America's Social Security System." CATO Institute. 10 December 1997. http://www.cato.org/testimony/art-22.html [2] Crane, Edward. "The Case for Privatizing America's Social Security System." CATO Institute. 10 December 1997. http://www.cato.org/testimony/art-22.html [3] San Diego Union Tribune. "Privatizing Social Security Still a Good Idea." San Diego Union Tribune. http://www.creators.com/opinion/daily-editorials/privatizing-social-security-still-a-good-idea.html [4] Cato Institute. “Why is Social Security often called a Ponzi scheme?”. Cato Institute. 11 May 1999. http://www.socialsecurity.org/daily/05-11-99.html ; [5] Kotlikoff, Lawrence. 
"Privatizing social security the right way". Testimony to the Committee on Ways and Means. 3 June 3 1998. http://people.bu.edu/kotlikof/Ways&amp;Means.pdf
[ { "docid": "dd073ce604993483721216eee3fb2aa5", "text": "economic policy society family house would privatize usas social security schemes Social Security is not in crisis and there is no need for privatization. Social Security is completely solvent today, and will be into the future because it has a dedicated income stream that covers its costs and consistently generates a surplus, which today is $2.5 trillion.\n\nProposition’s dire prediction of the collapse of social security’s financial situation is misleading. The Social Security surplus will grow to approximately $4.3 trillion in 2023, and that reserves will be sufficient to pay full benefits through to 2037. Even after this it would still be able to pay 78%. Moreover, there are plenty of ways to reform Social Security to make it more fiscally sound without privatizing it, including simply raising taxes to fund it better. [1]\n\nFurthermore the problem that affects social security of falling numbers of contributors to each retiree will also affect private pensions, at least in the short to medium term, just in a different way. If all younger pensioners went over to just paying for their own future retirement who is to pay for current retirees or those who are shortly to retire. These people will still need to have their pensions paid for. They will not have time to save up a personal pension and so will be relying on current workers – but such workers will not want to pay more when they are explicitly just paying for someone else as they are already paying for themselves separately.\n\n[1] Roosevelt, James.\"Social Security at 75: Crisis Is More Myth Than Fact.\" Huffington Post. 11 August 11 2010. http://www.huffingtonpost.com/james-roosevelt/social-security-at-75-cri_...\n", "title": "" } ]
[ { "docid": "76d70ad791f3e3d4fe603b681972abdc", "text": "economic policy society family house would privatize usas social security schemes Nobel Laureate economist Paul Krugman. Argued in 2004 that: “Social Security is a government program that works, a demonstration that a modest amount of taxing and spending can make people's lives better and more secure. And that's why the right wants to destroy it.\" [1] The problem with Social Security is not that it does not work, nor that it fails the poor. Rather, as Krugman notes, social security uses limited taxation to implement a clear and successful vision of social justice. As a consequence, the social security system has been repeatedly attacked by right wing and libertarian politicians. Such attacks are not motivated by the merits or failure of the social security system itself, but by political ambition and a desire to forcefully implement alternative normative schema within society.\n\nPrivatizing Social Security would require costly new government bureaucracies. From the standpoint of the system as a whole, privatization would add enormous administrative burdens – and costs. The government would need to establish and track many small accounts, perhaps as many accounts as there are taxpaying workers—157 million in 2010. [2] Often these accounts would be too small so that profit making firms would be unwilling to take them on. There would need to be thousands of workers to manage these accounts. In contrast, today’s Social Security has minimal administrative costs amounting to less than 1 per cent of annual revenues. [3]\n\nIt is also unlikely that individuals will be able to invest successfully on their own, although they may believe they can, leading to a great number of retirees actually being worse off after privatization.\n\n[1] Paul Krugman. \"Inventing a crisis.\" New York Times. 7 December 2004. http://www.nytimes.com/2004/12/07/opinion/07krugman.html?_r=2&amp;scp=539&amp;sq...\n\n[2] Wihbey, John, ‘2011 Annual Report by the Social Security Board of Trustees’, Journalist’s Resource, 9 June 2011, http://journalistsresource.org/studies/government/politics/social-security-report-2011/\n\n[3] Anrig, Greg and Wasow, Bernard. \"Twelve reasons why privatizing social security is a bad idea\". The Century Foundation. 14 February 2005. http://tcf.org/media-center/pdfs/pr46/12badideas.pdf\n", "title": "" }, { "docid": "dddb9c67da7f54d1e7a089718b93c701", "text": "economic policy society family house would privatize usas social security schemes Privatizing Social Security would harm economic growth, not help it. Privatization during the current economic crisis would have been disaster, and so doing it now is a risk for any upcoming or future crisis. Privatization in the midst of the greatest economic downturn since the Great Depression would have caused households to have lost even more of their assets, had their investments been invested in the U.S. stock market or in funds exposed to complicated and high risk financial instruments.\n\nPrivatizing social security might therefore increase economic growth in the boom times but this would be at the expense of sharper downturns. Proposition’s argument implicitly assumes that the money at the moment does not improve economic growth. 
On the contrary the government is regularly investing the money in much the same way as private business would – and often on much more long term projects such as infrastructure that fit better with a long term saving than the way that banks invest.\n", "title": "" }, { "docid": "0add56b3247e2cde35ac948db12e56d9", "text": "economic policy society family house would privatize usas social security schemes Most of these arguments can be undercut by noting that the privatization of Social Security accounts would be voluntary, and thus anyone who believed the argument that the government invests better would be free to leave their account as it is, unchanged.\n\nThose who believe they can do a better job of investing and managing their money on their own should be given the freedom to do so. In this respect it is important to remember the origin of the money in these accounts: it has been paid in by the individuals themselves. As James Roosevelt (CEO of the health insurance firm Tufts Health Plan) notes: \" Those ‘baby boomers’ who are going to bust Social Security when they retire? They have been paying into the system for more than 40 years, generating the large surplus the program has accumulated. Much of the money that baby boomers are and will be drawing on from Social Security, is, and will be, their own.” [1] As it is their money which they have paid in in the first place, members of the baby boomer generation should have a right to choose how they invest –it. If that means choosing to go private and pursue riskier investments, so be it. The money paid out by the social security system belongs to those who paid it in, and the government should not deprive taxpayers from exercising free choice over the uses to which their money is put. Moreover, none of the other arguments adduced by side opposition do anything to address the ways in which Social Security currently harms the poor, the redressing of which alone justifies privatizing Social Security.\n\n[1] Roosevelt, James.\"Social Security at 75: Crisis Is More Myth Than Fact.\" Huffington Post. 11 August 11 2010. http://www.huffingtonpost.com/james-roosevelt/social-security-at-75-cri_b_677058.html\n", "title": "" }, { "docid": "c99d7dfeece009eb28480730912d3df3", "text": "economic policy society family house would privatize usas social security schemes The American people do not oppose privatization -in fact, most support it. A 2010 poll showed overwhelming support for personal accounts. Republican voters support it 65-21, but even Democrat voters like it, 50-36. [1] A poll commissioned by the Cato Institute through the prestigious Public Opinion Strategies polling company showed that 69 percent of Americans favored switching from the pay-as-you-go system to a fully funded, individually capitalized system. Only 11 percent said they opposed the idea. [2] A 1994 Luntz Research poll found that 82 percent of American adults under the age of 35 favored having at least a portion of their payroll taxes invested instead in stocks and bonds. In fact, among the so-called Generation Xers in America, by a margin of two-to-one they think they are more likely to encounter a UFO in their lifetime than they are to ever receive a single Social Security check.\n\nEven more remarkable, perhaps, was a poll taken in 1997 by White House pollster Mark Penn for the Democratic Leadership Council, a group of moderate Democrats with whom President Clinton was affiliated prior to his election. 
That poll found that 73 percent of Democrats favor being allowed to invest some or all of their payroll tax in private accounts. [3] Moreover, the alternatives like raising taxes and reducing benefits are merely kicking the problem further down the road, but it will still become a problem at some point. At the same time either raising taxes or reducing benefits would be unfair – raising taxes because it would mean today’s generation of workers paying more than their parents for the same benefit, and cutting benefits because it would mean that retirees would be getting less out than they were promised.\n\nThe alternatives would also be particularly devastating for the poor. Individuals who are hired pay the cost of the so-called employer's share of the payroll tax through reduced wages. Therefore, an increase in the payroll tax would result in less money going to workers. It is also important to remember that the payroll tax is an extremely regressive tax. Likewise a reduction in benefits would disproportionately hurt the poor since they are more likely than the wealthy to be dependent on Social Security benefits. [4]\n\n[1] Roth, Andrew. \"Privatize Social Security? Hell Yeah!\". Club for Growth. 21 September 2010. http://www.clubforgrowth.org/perm/?postID=14110\n\n[2] Crane, Edward. \"The Case for Privatizing America's Social Security System.\" CATO Institute. 10 December 1997. http://www.cato.org/testimony/art-22.html\n\n[3] Crane, Edward. \"The Case for Privatizing America's Social Security System.\" CATO Institute. 10 December 1997. http://www.cato.org/testimony/art-22.html\n\n[4] Tanner, Michael. \"Privatizing Social Security: A Big Boost for the Poor.\" CATO. 26 July 1996. http://www.socialsecurity.org/pubs/ssps/ssp4.html\n", "title": "" }, { "docid": "c9c92305e7c32e3a69a360a31071ecf0", "text": "economic policy society family house would privatize usas social security schemes Privatization would increase national savings and provide a new pool of capital for investment that would be particularly beneficial to the poor. As it stands, Social Security is a net loss maker for the American taxpayer, and this situation will only continue to get worse unless privatization is enacted: those born after the baby boom will forfeit 10 cents of every dollar they earn in payments towards the upkeep of the Social Security system.\n\nBy contrast, under privatization people would actually save resources that businesses can invest. As Alan Greenspan has pointed out, the economic benefits of privatization of Social Security are potentially enormous. In Chile, as Dr. Piñera has noted, there has been real economic growth of 7 percent a year over the past decade, energized by a savings rate in excess of 20 percent. [1]\n\nMartin Feldstein, a Harvard economist, formerly Chairman of the Council of Economic Advisors under President Reagan, estimated that the present value to the U.S. economy of investing the future cash flow of payroll taxes in real assets would be on the order of $10 to $20 trillion. That would mean a permanent, significant boost to economic growth. [2]\n\n[1] Crane, Edward. \"The Case for Privatizing America's Social Security System.\" CATO Institute. 10 December 1997. http://www.cato.org/testimony/art-22.html\n\n[2] Crane, Edward. \"The Case for Privatizing America's Social Security System.\" CATO Institute. 10 December 1997. 
http://www.cato.org/testimony/art-22.html\n", "title": "" }, { "docid": "0ad12d4e4ba2c8a5ccabfc7e65ef6811", "text": "economic policy society family house would privatize usas social security schemes Privatising social security will increase the amount of money that reitrees can draw on\n\nPrivate accounts would provide retirees with a higher rate of return on investments. [1] Privatization would give investment decisions to account holders. This does not mean that Social Security money for the under 55’s would go to Wall Street.. This could be left to the individual's discretion. Potentially this could include government funds. But with government’s record of mismanagement, and a $14 trillion deficit, it seems unlikely that many people would join that choice. [2]\n\nAs Andrew Roth argues, \"Democrats will say supporters of personal accounts will allow people's fragile retirement plans to be subjected to the whims of the stock market, but that's just more demagoguery. First, personal accounts would be voluntary. If you like the current system (the one that [can be raided by] politicians), you can stay put and be subjected to decreasingly low returns as Social Security goes bankrupt. But if you want your money protected from politicians and have the opportunity to invest in the same financial assets that politicians invest in their own retirement plans (most are well-diversified long term funds), then you should have that option.\" [3\n\nSocial Security privatization would actually help the economically marginalised in two ways. Firstly, by ending the harm social security currently does; Those at the poverty level need every cent just to survive. Even those in the lower-middle class don’t money to put into a wealth-generating retirement account. They have to rely on social security income to pay the bills when they reach retirement. Unfortunately, current social security pay-outs are at or below the poverty level. The money earned in benefits based on a retiree’s contributions during their working life is less than the return on a passbook savings account. [4]\n\nSecondly, these same groups would be amongst the biggest 'winners' from privatization. By providing a much higher rate of return, privatization would raise the incomes of those elderly retirees who are most in need.\n\nThe current system contains many inequities that leave the poor at a disadvantage. For instance, the low-income elderly are most likely to be dependent on Social Security benefits for most or all of their retirement income. But despite a progressive benefit structure, Social Security benefits are inadequate for the elderly poor's retirement needs. [5]\n\nPrivatizing Social Security would improve individual liberty. Privatization would give all Americans the opportunity to participate in the economy through investments. Everyone would become capitalists and stock owners reducing the division of labour and capital and restoring the ownership that was the initial foundation of the American dream. [6]\n\nMoreover, privatized accounts would be transferable within families, which current Social Security accounts are not. These privatized accounts would be personal assets, much like a house or a 401k account. On death, privatised social security accounts could pass to an individual’s heirs. With the current system, this cannot be done. Workers who have spent their lives paying withholding taxes are, in effect, denied a proprietary claim over money that, by rights, belongs to them. 
[7]\n\nThis would make privatization a progressive move. Because the wealthy generally live longer than the poor, they receive a higher total of Social Security payments over the course of their lifetimes. This would be evened out if remaining benefits could be passed on. [8] Privatizing Social Security increases personal choice and gives people control over what they paid and thus are entitled to. Overall, therefore, privatizing Social Security would increase the amount of money that marginalised retirees receive and would give all retirees more freedom to invest and distribute social security payments.\n\n[1] Tanner, Michael. \"Privatizing Social Security: A Big Boost for the Poor.\" CATO. 26 July 1996. http://www.socialsecurity.org/pubs/ssps/ssp4.html\n\n[2] Roth, Andrew. \"Privatize Social Security? Hell Yeah!\". Club for Growth.21 September 21 2010. http://www.clubforgrowth.org/perm/?postID=14110\n\n[3] Roth, Andrew. \"Privatize Social Security? Hell Yeah!\". Club for Growth.21 September 21 2010. http://www.clubforgrowth.org/perm/?postID=14110\n\n[4] Tanner, Michael. \"Privatizing Social Security: A Big Boost for the Poor.\" CATO. 26 July 1996. http://www.socialsecurity.org/pubs/ssps/ssp4.html\n\n[5] Tanner, Michael. \"Privatizing Social Security: A Big Boost for the Poor.\" CATO. 26 July 1996. http://www.socialsecurity.org/pubs/ssps/ssp4.html\n\n[6] Tanner, Michael. \"Privatizing Social Security: A Big Boost for the Poor.\" CATO. 26 July 1996. http://www.socialsecurity.org/pubs/ssps/ssp4.html\n\n[7] Roth, Andrew. \"Privatize Social Security? Hell Yeah!\". Club for Growth.21 September 21 2010. http://www.clubforgrowth.org/perm/?postID=14110\n\n[8] Tanner, Michael. \"Privatizing Social Security: A Big Boost for the Poor.\" CATO. 26 July 1996. http://www.socialsecurity.org/pubs/ssps/ssp4.html\n", "title": "" }, { "docid": "20abfc8041b3c327acdf4ec1ac129e8d", "text": "economic policy society family house would privatize usas social security schemes Privatising social security would improve economic growth\n\nPrivatizing social security would enable investment of savings. Commentator Alex Schibuola argues that: \"If Social Security were privatized, people would deposit their income with a bank. People actually save resources that businesses can invest. We, as true savers, get more resources in the future.\" [1] As a result private accounts would also increase investments, jobs and wages. Michael Tanner of the think tank the Cato Institute argues: \"Social Security drains capital from the poorest areas of the country, leaving less money available for new investment and job creation. Privatization would increase national savings and provide a new pool of capital for investment that would be particularly beneficial to the poor.\" [2]\n\nCurrently Social Security represents a net loss for taxpayers and beneficiaries. Social Security, although key to the restructuring the of USA’s social contract following the great depression, represents a bad deal for the post-war American economy. Moreover, this deal has gotten worse over time. 'Baby boomers' are projected to lose roughly 5 cents of every dollar they earn to the OASI program in taxes net of benefits. Young adults who came of age in the early 1990s and today's children are on course to lose over 7 cents of every dollar they earn in net taxes. If OASI taxes were to be raised immediately by the amount needed to pay for OASI benefits on an on-going basis, baby boomers would forfeit 6 cents of every dollar they earn in net OASI taxes. 
For those born later it would be 10 cents. [3]\n\nChange could be implemented gradually. Andrew Roth argues: “While Americans in retirement or approaching retirement would probably stay in the current system [if Social Security were to be privatized], younger workers should have the option to invest a portion of their money in financial assets other than U.S. Treasuries. These accounts would be the ultimate \"lock box\" - they would prevent politicians in Washington from raiding the Trust Fund. The truth is that taxpayers bail out politicians every year thanks to Social Security. Congress and the White House spend more money than they have, so they steal money from Social Security to help pay for it. That needs to stop and there is no responsible way of doing that except with personal accounts.” [4] This would make social security much more sustainable as there would no longer be the risk of the money being spent elsewhere.\n\nPut simply, privatizing Social Security would actually boost economic growth and lead to better-protected investments by beneficiaries, benefiting not only themselves but the nation at large. Thus Social Security should be privatized.\n\n[1] Schibuola, Alex. \"Time to Privatize? The Economics of Social Security.\" Open Markets. 16 November 2010. http://www.openmarket.org/2010/11/16/time-to-privatize-the-economics-of-...\n\n[2] Tanner, Michael. \"Privatizing Social Security: A Big Boost for the Poor.\" CATO. 26 July 1996. http://www.socialsecurity.org/pubs/ssps/ssp4.html\n\n[3] Kotlikoff, Lawrence. \"Privatizing social security the right way\". Testimony to the Committee on Ways and Means. 3 June 3 1998. http://people.bu.edu/kotlikof/Ways&amp;Means.pdf\n\n[4] Roth, Andrew. \"Privatize Social Security? Hell Yeah!\". Club for Growth.21 September 21 2010. http://www.clubforgrowth.org/perm/ ?\n", "title": "" }, { "docid": "7ce5fa88bb8fe7aa1c5c22d1a89fb944", "text": "economic policy society family house would privatize usas social security schemes Privatising the social security system would harm economic growth\n\nCreating private accounts could have an impact on economic growth, which in turn would hit social security's future finances. Economic growth could be hit as privatizing Social Security will increase federal deficits and as a result debt significantly, while increasing the likelihood that national savings will decline which will happen as baby boomers retire anyway and draw down their savings.\n\nAn analysis by the Centre on Budget and Policy Priorities shows that the proposed privatization by Obama would add $1 trillion in new federal debt in its first decade of implementation, and a further $3.5 trillion in the following decade. [1] Because households change their saving and spending levels in response to economic conditions privatization is actually more likely to reduce than increase national savings. This is because households that consider the new accounts to constitute meaningful increases in their retirement wealth might well reduce their other saving. Diamond and Orszag argue, 'If anything, our impression is that diverting a portion of the current Social Security surplus into individual accounts could reduce national saving.' That, in turn, would further weaken economic growth and our capacity to pay for the retirement of the baby boomers.\" [2]\n\nThe deficit, and as a result national debt, would increase because trillions of dollars which had previously been paying for current retirees would be taken out of the system to be invested privately. 
Those who are already retired will however still need to draw a pension so the government would need to borrow the money to be able to pay for these pensions. [3]\n\nContrary to side proposition’s assertions, privatization also would not increase capital available for investment. Proponents of privatization claim that the flow of dollars into private accounts and then into the equity markets will stimulate the economy. However, as the social security system underwent the transition into private ownership, each dollar invested in a financial instrument via the proprietary freedoms afforded to account holders, would result in the government borrowing a dollar to cover pay outs to those currently drawing from the social security system.\n\nThus, the supposed benefit of a privatised social security system is entirely eliminated by increased government borrowing, as the net impact on the capital available for investment is zero. [4]\n\nWhile four fifths of tax dollars for social security is spent immediately the final fifth purchases Treasury securities through trust funds. Privatization would hasten depletion of these funds. President Bush proposed diverting up to 4 percentage points of payroll tax to create the private accounts but with payroll currently 12.4% this would still be significantly more than the one fifth that is currently left over so depleting reserves. Funds now being set aside to build up the Trust Funds to provide for retiring baby boomers would be being used instead to pay for the privatization accounts. The Trust Funds would be exhausted much sooner than the thirty-eight to forty-eight years projected if nothing is done. In such a short time frame, the investments in the personal accounts will not be nearly large enough to provide an adequate cushion. [5]\n\n[1] Anrig, Greg and Wasow, Bernard. \"Twelve reasons why privatizing social security is a bad idea\". The Century Foundation. 14 February 2005. http://tcf.org/media-center/pdfs/pr46/12badideas.pdf\n\n[2] Anrig, Greg and Wasow, Bernard. \"Twelve reasons why privatizing social security is a bad idea\". The Century Foundation. 14 February 2005. http://tcf.org/media-center/pdfs/pr46/12badideas.pdf\n\n[3] Spitzer, Elliot. \"Can we finally kill this terrible idea?\" Slate. 4 February 2009. http://www.slate.com/articles/news_and_politics/the_best_policy/2009/02/privatize_social_security.html\n\n[4] Spitzer, Elliot. \"Can we finally kill this terrible idea?\" Slate. 4 February 2009. http://www.slate.com/articles/news_and_politics/the_best_policy/2009/02/privatize_social_security.html\n\n[5] Anrig, Greg and Wasow, Bernard. \"Twelve reasons why privatizing social security is a bad idea\". The Century Foundation. 14 February 2005. http://tcf.org/media-center/pdfs/pr46/12badideas.pdf\n", "title": "" }, { "docid": "cb7308b4b5642271b0783acb8e78eb71", "text": "economic policy society family house would privatize usas social security schemes Privatising social security will harm retirees\n\nAs Greg Anrig and Bernard Wasow of the non-partisan think tank the Century Foundation argue: \"Privatization advocates like to stress the appeal of 'individual choice' and 'personal control,' while assuming in their forecasts that everyone’s accounts will match the overall performance of the stock market. But… research by Princeton University economist Burton G. 
Malkiel found that even professional money managers over time significantly underperformed indexes of the entire market.” [1] Most people don’t have the knowledge to manage their own investments. A Securities and Exchange Commission report showed the extent of financial illiteracy for example half of adults don’t know what a stock market is, half don’t understand the purpose of diversifying investments and 45% believe it provides “a guarantee that [their] portfolio won’t suffer if the stock market falls” [2] Including all the management costs it is safe to say that growth from individual accounts will be lower than the market average.\n\nThe private sector is therefore in no better a position to make investment decisions than the state. Privatised accounts would bring their own problems. They are vulnerable to market downturns. Despite crashes the long term return from shares has always been positive. But this does not help those that hit retirement age during a period when the stock market is down. With private pensions people would be relying on luck that they retire at the right time or happened to pick winning stocks. [3]\n\nThe economist Paul Krugman has pointed out, privatizers make incredible assumptions about the likely performance of the market in order to be able to justify their claim that private accounts would outdo the current system. The price-earnings ratio would need to be around 70 to 1 by 2050. This is unrealistic and would be an immense bubble as a P/E ratio of 20 to 1 is considered more normal today. [4]\n\nIf returns are low then there the added worry that privatized social security may not beat inflation. This would mean that retiree’s pensions become worth less and less. At the moment Social Security payouts are indexed to wages, which historically have exceeded inflation so providing protection. Privatizing social security would have a big impact on those who want to remain in the system through falling tax revenues. Implementing private accounts will take 4 per-cent of the 12.4 per-cent taken from each worker’s annual pay out of the collective fund. Thus, almost a 3rd of the revenue generated by social security taxes will be removed. Drastic benefit cuts or increased taxes will have to occur even sooner, which is a recipe for disaster. [5]\n\nIt is for reasons such as these that privatization of similar social security systems has disappointed elsewhere, as Anrig and Wasow argue: \"Advocates of privatization often cite other countries, such as Chile and the United Kingdom, where the governments pushed workers into personal investment accounts to reduce the long-term obligations of their Social Security systems, as models for the United States to emulate. But the sobering experiences in those countries actually provide strong arguments against privatization. A report last year from the World Bank, once an enthusiastic privatization proponent, expressed disappointment that in Chile, and in most other Latin American countries that followed in its footsteps, “more than half of all workers [are excluded] from even a semblance of a safety net during their old age.”” [6] Therefore privatizing Social Security would actually harm retirees and undermine the entire system, and so Social Security should not be privatized.\n\n[1] Anrig, Greg and Wasow, Bernard. \"Twelve reasons why privatizing social security is a bad idea\". The Century Foundation. 14 February 2005. 
http://tcf.org/media-center/pdfs/pr46/12badideas.pdf\n\n[2] Office of Investor Education and Assistance Securities and Exchange Commission, ‘The Facts on Saving and Investing’, April 1999, http://www.sec.gov/pdf/report99.pdf pp.16-19\n\n[3] Spitzer, Elliot. \"Can we finally kill this terrible idea?\" Slate. 4 February 2009. http://www.slate.com/articles/news_and_politics/the_best_policy/2009/02/privatize_social_security.html\n\n[4] Spitzer, Elliot. \"Can we finally kill this terrible idea?\" Slate. 4 February 2009. http://www.slate.com/articles/news_and_politics/the_best_policy/2009/02/privatize_social_security.html\n\n[5] Anrig, Greg and Wasow, Bernard. \"Twelve reasons why privatizing social security is a bad idea\". The Century Foundation. 14 February 2005. http://tcf.org/media-center/pdfs/pr46/12badideas.pdf\n\n[6] Anrig, Greg and Wasow, Bernard. \"Twelve reasons why privatizing social security is a bad idea\". The Century Foundation. 14 February 2005. http://tcf.org/media-center/pdfs/pr46/12badideas.pdf\n", "title": "" }, { "docid": "d5902944c6af806efa2c5a456a40b26d", "text": "economic policy society family house would privatize usas social security schemes The problems with the social security are systemic, not inherent\n\nSocial security is currently solvent and will be into the future due to its dedicated income stream that consistently generates a surplus, which today is $2.5 trillion. This surplus will even grow to approximately $4.3 trillion in 2023, It is only after 2037 when there will begin to be a deficit.(11)\n\nSide opposition will concede that there is a long-run financing problem, but it is a problem of modest size. There would only need to be revenues equal to 0.54% of GDP to extend the life of the social security trust fund into the 22nd century, with no change in benefits. This is only about one-quarter of the revenue lost each year because of President Bush's tax cuts. [1]\n\nBudget shortfalls- of the sort that side proposition’s case is based on- Nobel Laureate economist Paul Krugman argues: \" has much more to do with tax cuts - cuts that Mr. Bush nonetheless insists on making permanent - than it does with Social Security. But since the politics of privatization depend on convincing the public that there is a Social Security crisis, the privatizers have done their best to invent one.\" [2]\n\nKrugman goes on to argue against the twisted logic of privatization: “My favorite example of their three-card-monte logic goes like this: first, they insist that the Social Security system's current surplus and the trust fund it has been accumulating with that surplus are meaningless. Social Security, they say, isn't really an independent entity - it's just part of the federal government… the same people who claim that Social Security isn't an independent entity when it runs surpluses also insist that late next decade, when the benefit payments start to exceed the payroll tax receipts, this will represent a crisis - you see, Social Security has its own dedicated financing, and therefore must stand on its own. There's no honest way anyone can hold both these positions, but very little about the privatizers' position is honest. They come to bury Social Security, not to save it. They aren't sincerely concerned about the possibility that the system will someday fail; they're disturbed by the system's historic success.” [3]\n\nThere are many other ways to improve and reform Social Security without privatizing it. Robert L. 
Clark, an economist at North Carolina State University who specializes in aging issues, formerly served as a chairman of a national panel on Social Security's financial status; he has said that future options for Social Security are clear: \"You either raise taxes or you cut benefits. There are lots of ways to do both.\" These alternatives are also backed by the American people. The American people, despite voting for Republicans, have said over and over in polls that they would pay more in taxes to save entitlements such as Social Security. [4] Therefore Social Security is not fundamentally unsound, and alternative reforms should be made without privatizations.\n\n[1] Paul Krugman. \"Inventing a crisis.\" New York Times. 7 December 2004. http://www.nytimes.com/2004/12/07/opinion/07krugman.html?_r=2&amp;scp=539&amp;sq...\n\n[2] Paul Krugman. \"Inventing a crisis.\" New York Times. 7 December 2004. http://www.nytimes.com/2004/12/07/opinion/07krugman.html?_r=2&amp;scp=539&amp;sq...\n\n[3] Paul Krugman. \"Inventing a crisis.\" New York Times. 7 December 2004. http://www.nytimes.com/2004/12/07/opinion/07krugman.html?_r=2&amp;scp=539&amp;sq...\n\n[4] Dick, Stephen. \"Op-Ed: Yes, leave Social Security alone.\" CNHI News Service. 19 November 2010. http://record-eagle.com/opinion/x877132458/Op-Ed-Yes-leave-Social-Securi...\n", "title": "" } ]
arguana
c08c7bf7967b3a6f4401b8851a827853
Sovereign Wealth funds are not transparent Sovereign wealth funds suffer from an almost total lack of transparency. Most countries maintain secrecy about the size of their funds and the extent of their holdings, their accountability to government, their investment strategies and their approach to risk management. Without knowing these things, it is impossible to gauge whether political or economic objectives will dominate the SWFs’ behaviour, or indeed whether they will make safe and responsible shareholders in any business – secrecy breeds corruption. For these reasons, Jeffrey Garten of Yale has argued that SWFs should be obliged to publish independently audited accounts twice a year. He has also pointed out that many countries operating SWFs protect their domestic economy from foreign competition and investment. We should demand reciprocity, so that countries seeking investments abroad must open up their own economies fully before they are allowed to hold significant assets elsewhere. [1] [1] Garten, Jeffrey, ‘We need rules for sovereign funds, 2007. http://www.ft.com/cms/s/0/0b5e0808-454a-11dc-82f5-0000779fd2ac.html#axzz...
[ { "docid": "e1ae0626c6910f4dedf8f7862acc4156", "text": "finance economy general house would act regulate activities sovereign wealth Transparency is a good thing, but it would be unfair to single out sovereign wealth funds for special punishment over this issue. Hedge funds and private equity groups are even less transparent than SWFs, and their influence in the global economy is much greater. [1] Some countries (e.g. Norway) already operate very transparent investment strategies. Many have agreed to the Santiago Principles which encourage transparency and disclosure of financial information. [2] It is likely that other countries will come over time to follow their lead voluntarily, as it is in the interest of their own citizens to see that the state is managing their money in an efficiently and honestly.\n\n[1] Avendaño, Rolando, and Santiso, Javier, ‘Are Sovereign Wealth Funds’ Investments Politically Biased? A Comparison with Mutual Funds’, 2009, p.9. http://www.oecd.org/dataoecd/43/0/44301172.pdf\n\n[2] Ibid\n", "title": "" } ]
[ { "docid": "0488a719a8237d73d47e9c9256684f8c", "text": "finance economy general house would act regulate activities sovereign wealth Fears about national security are greatly overblown, and are often simply an attempt to justify protectionist measures. Very few companies pose a national security risk, and those that do are covered by existing regulations – so that, for example, the USA could veto Dubai Port World’s bid to take over American ports. Most SWFs do not seek full control of companies they invest in, so they are not in a position to manipulate their assets for political gain, even if they wished to. [1] In reality, countries set up SWFs for economic reasons and they represent a major national investment, the value of which would be expensively destroyed if they once tried to abuse their position. Nor are there any actual examples of a country trying to exert political influence through its sovereign wealth fund. Overall, tying a wide variety of states into the international economic and financial system is beneficial, as it gives them a stake in the peace which the global economy needs for prosperity and so makes them less likely to pursue aggressive foreign policies. Conversely, alienating the governments of other states by designating them as dangerous predators who cannot be allowed to invest in our companies is a sure way to create enemies.\n\n[1] Rose, Paul, ‘Sovereign Wealth Funds: Active or Passive Investors?’, 2008. http://thepocketpart.org/2008/11/24/rose.html\n", "title": "" }, { "docid": "b3e96a4230e2a39a64efb4c914f77063", "text": "finance economy general house would act regulate activities sovereign wealth Regulations already exist to prevent foreign investments that might compromise national security. [1] Other than this it would be unfair to discriminate against certain classes of investors. Wealth-creating capitalism relies upon investors seeking to maximise the value of their investments. Without voting rights or the possibility of exercising majority control of a company, SWFs would be unable to ensure that managers were working hard on their behalf, allocating resources efficiently and being held accountable for their decisions.\n\n[1] Gibson, Ronald J., and Milhaupt, Curtis J., ‘Sovereign Wealth Funds and Corporate Governance: A Minimal Solution to the New Mercantilism’, 2009. http://legalworkshop.org/2009/07/19/sovereign-wealth-funds-and-corporate...\n", "title": "" }, { "docid": "cf76c15b1984da5565496d9f7fbf46fe", "text": "finance economy general house would act regulate activities sovereign wealth While it may be true that the state is often a bad manager of assets and businesses in this case the state is not usually involved in the management of the assets. This is being done through the wealth fund which is often in large part run by people whose background is in finance rather than in government. This use of external independent asset managers in itself should be enough to ease worries over state control. [1] Because SWFs don’t seek to have control over the majority of the businesses they invest in discredited government economic planning is not an issue. [2] Indeed SWFs are operating much more like private companies than state owned enterprises.\n\n[1] Mezzacapo, Simone, ‘The so-called “Sovereign Wealth Funds”: regulatory issues, financial stability and prudential supervision’, 2009, p.46. http://ec.europa.eu/economy_finance/publications/publication15064_en.pdf\n\n[2] Rose, Paul, ‘Sovereign Wealth Funds: Active or Passive Investors?’, 2008. 
http://thepocketpart.org/2008/11/24/rose.html\n", "title": "" }, { "docid": "c6b6f286ac80d33ef8c0ebee47f356f8", "text": "finance economy general house would act regulate activities sovereign wealth Sovereign wealth funds are not new and they are still only a tiny part of the global financial system. They represent only about 2% of global traded securities, and are dwarfed by other financial actors such as mutual funds, or private equity groups and hedge funds. [1] What is more, in comparison with these other players in the global financial system, SWFs are long-term investors looking many years, even decades into the future. This means that they are likely to bring calm, rather than irrational volatility to markets, as they will not be rushed into dumping assets based on a few months of bad data.\n\n[1] The Economist, ‘Sovereign Wealth Funds Asset-backed insecurity’, 2008, http://www.economist.com/node/10533428\n", "title": "" }, { "docid": "430640af234dae7518a347cb058d4010", "text": "finance economy general house would act regulate activities sovereign wealth In many cases sovereign wealth funds are not even good for the states that own them. Almost all are emerging economies with limited financial expertise available to them, and they are not equipped to invest the money wisely. This has led to SWFs paying inflated prices for dodgy western companies, whose share price has subsequently collapsed, resulting in the loss of billions of dollars of national wealth for example China Investment Corporation lost $500million on Blackstone, Qatar Investment Authority may have lost as much as $2billion in its attempt to buy Sainsburys. [1] Surely it would be better to invest the money at home, or even return it to their people in the form of lower taxes.\n\n[1] The Economist, ‘The rise of state capitalism’, 2008. http://www.economist.com/node/12080735\n", "title": "" }, { "docid": "35f453517c3a2546743d3a80c0e1fd93", "text": "finance economy general house would act regulate activities sovereign wealth The amounts sovereign wealth funds invest in the poorest countries is tiny compared to their overall portfolio. In 2008 the head of the World Bank Robert Zollick was attempting to persuade sovereign wealth funds to invest just 1% of their assets in Africa. [1] Investment by SWFs in Africa is not all good. Sovereign wealth funds are guilty of bad behaviour in the developing world. Some government-backed firms from China and the Arab world (not all of the SWFs) have provided capital to maintain some of Africa’s worst rulers in power, in exchange for the opportunity to gain access to the natural resources of their misruled states. Sudan for example has sold 400,000 hectares to the United Arab Emirates. [2] This has allowed dictators to ignore the conditions (e.g. for political freedoms and economic reforms) attached to funding offered by western aid donors and international institutions such as the World Bank. It also contrasts sharply with the behaviour of western companies, who are led to act more responsibly by pressures from their own governments, investors and media.\n\n[1] Stilwell Amy, and Chopra, Geetanjali S., ‘Sovereign Wealth Funds Should Invest in Africa, Zoellick Says’, 2008. http://web.worldbank.org/WBSITE/EXTERNAL/NEWS/0,,contentMDK:21711325~pag...\n\n[2] The Economist, ‘Buying farmland abroad, Outsourcing’s third wave’, 2009. 
http://www.economist.com/node/13692889\n", "title": "" }, { "docid": "4cb04f7c692a5e8755764b9894acf5e9", "text": "finance economy general house would act regulate activities sovereign wealth Fears about the unrestrained influence of sovereign wealth funds will likely stimulate wider protectionism anyway if effective regulation is not introduced. Protectionist politicians may exploit fears of foreigners to restrict any kind of foreign investment, and seek to build up national champions as a defensive measure. This risks losing all the economic benefits of globalisation, such as opportunities to unwind financial imbalances and to spread expertise, while directing capital to areas where it can have the greatest impact. Better to regulate SWFs now for fear of a greater backlash later.\n", "title": "" }, { "docid": "e5097649298ef2ceea0095a824edd295", "text": "finance economy general house would act regulate activities sovereign wealth Sovereign Wealth Funds could potentially help the financial system but they will only do so if it is in the national interest of their country to do so. It is this political dimension that is the reason for more regulation. Moreover regulation of SWFs will not prevent these funds from helping the global financial system. They will still be free to invest. Moreover it does not reduce the incentives for them to do so either, regulation will make no difference to a state’s motivations in a time of crisis – the national interest will remain key.\n", "title": "" }, { "docid": "43a96ceefb177dcd1f984e37766d8ec0", "text": "finance economy general house would act regulate activities sovereign wealth SWFs can harm national security\n\nSovereign wealth funds raise worrying issues about national security. Unlike mutual funds or private equity groups, which seek only to maximise their investors’ returns, SWFs must be regarded as political entities. Rather than passively holding their assets, they may seek to use their purchases to gain access to natural resources, advanced technologies, including those crucial to our defence, or other strategic sectors. [1] For example Gulf states are using their SWFs to invest in food and natural resources from Latin America. [2] They may engage in economic nationalism, shutting factories in western countries to give an unfair advantage to their own industries [3] . While it has not yet happened they may even attempt economic blackmail, threatening to turn off the lights through their control of energy companies and utilities if governments do not fall in with their foreign policy aims. Allowing countries such as China, Russia and various Gulf states to buy up western companies at will is potentially very dangerous. 
Even if we regard these states as friendly at the moment, there is no guarantee that they will stay that way, especially as none of them share our political values.\n\n[1] Lyons, Gerard, ‘State Capitalism: The rise of sovereign wealth funds’, 2007, p.14 http://banking.senate.gov/public/_files/111407_Lyons.pdf\n\n[2] Pearson, Samantha, ‘Sovereign wealth funds: Foreign cash has its drawbacks’, 2011, http://www.ft.com/cms/s/0/e5e4f274-6ef5-11e0-a13b-00144feabdc0.html#axzz...\n\n[3] Balin, Bryan J., ‘Sovereign Wealth Funds: A Critical Analysis’, 2008, p.4, http://www.policyarchive.org/handle/10207/bitstreams/11501.pdf\n", "title": "" }, { "docid": "4f79a4a7e6f07252ac4c8b996f3e36d2", "text": "finance economy general house would act regulate activities sovereign wealth Sovereign wealth funds must be regulated\n\nA number of possible models of regulation have been suggested for sovereign wealth funds. Some, such as Gilson and Milhaupt, have argued that state-owned investment vehicles that buy shares abroad should not be allowed voting rights in that stock. [1] Others would put a cap on SWF investments, so that they cannot take a stake of more than, say 20% in any business without government approval within the country the SWF is investing in [2] – meaning that they can only be passive investors. Both these proposals would ensure that they are unable to abuse a dominant position while still allowing countries to benefit from cross-border investment in a globalised economy. At the same time such rules would prevent any broader protectionist backlash so the Sovereign Wealth Funds themselves could welcome the regulation.\n\n[1] Gibson, Ronald J., and Milhaupt, Curtis J., ‘Sovereign Wealth Funds and Corporate Governance: A Minimal Solution to the New Mercantilism’, 2009. http://legalworkshop.org/2009/07/19/sovereign-wealth-funds-and-corporate...\n\n[2] Garten, Jeffrey, ‘We need rules for sovereign funds, 2007, http://www.ft.com/cms/s/0/0b5e0808-454a-11dc-82f5-0000779fd2ac.html#axzz...\n", "title": "" }, { "docid": "1b81af9c8eea9de3524c847b968be03a", "text": "finance economy general house would act regulate activities sovereign wealth Sovereign wealth funds can undermine economic independence\n\nSovereign wealth funds (SWFs) have become very important players in the global economy. The already exceed the assets controlled by hedge funds and will surpass the stock of global foreign exchange reserves. [1] They are now so big that their activities can shift markets, such as Norway’s Government Pension Fund did when short selling Iceland’s banks, leading to panic and instability when they sell assets suddenly. [2] Their purchases can mean that companies owned by other states can end up dominating the economies of smaller countries, undermining their own sovereignty and economic independence. 
It is also worrying that many SWFs are controlled by undemocratic states which have a questionable commitment to capitalism; should we allow such states to exercise so much power over our economies?\n\n[1] Lipsky, John, ‘Sovereign Wealth Funds: Their Role and Significance’, 2008, http://www.imf.org/external/np/speeches/2008/090308.htm\n\n[2] The Economist, ‘Sovereign Wealth Funds Asset-backed insecurity’, 2008, http://www.economist.com/node/10533428\n", "title": "" }, { "docid": "b8565738b0a6bd98f3f0f85547e16845", "text": "finance economy general house would act regulate activities sovereign wealth State ownership is not a good way of controlling funds\n\nThe ownership of important businesses by sovereign wealth funds runs counter to the economic policy pursued by almost every government over the past 25 years. In the 1970s many states owned nationalised industries as part of an attempt at socialist economic planning that has now been discredited. State ownership distorted incentives, interfered with management and produced decades of underinvestment, poor service to consumers, and national economic failure with the most extreme example being the Soviet Union itself. Since the 1980s countries everywhere have followed the example of Thatcher’s Britain and privatised their industries, freeing them to compete efficiently and to generate more wealth and jobs than they had ever done in state hands. Going back to state ownership of business is a dangerous backward step, especially as it is now foreign governments that are doing the nationalising.\n", "title": "" }, { "docid": "769b31e82ab7595744e377c73136c64f", "text": "finance economy general house would act regulate activities sovereign wealth SWFs can help the financial system in times of trouble\n\nSovereign wealth funds should be credited with coming to the rescue of the global financial system during the turmoil of 2008. With their long-term horizons for a return on their investments they have been willing to provide billions of dollars in new capital to distressed companies, at a time when other sources of funding have headed for the door. [1] Their money has allowed firms to continue trading and so safeguarded jobs at a time of great uncertainty. It has also helped prevent complete collapse of global equities prices, on which many people, through their pension funds, depend for a secure future. Moreover unlike some other types of funds such as hedge funds SWFs have an interest in keeping the global economy stable and reducing the impact of any downturns as their own country is bound to be affected by global economic conditions so responsible investment practices are encouraged. SWFs therefore “can play a shock-absorbing role in global financial markets”. [2]\n\n[1] Beck, Roland, and Fidora, Michael, ‘Sovereign Wealth Funds – Before and Since the Crisis’, 2009, p.363. http://journals.cambridge.org/action/displayFulltext?type=1&amp;fid=6245144&amp;...\n\n[2] Lipsky, John, ‘Sovereign Wealth Funds: Their Role and Significance’, 2008. http://www.imf.org/external/np/speeches/2008/090308.htm\n", "title": "" }, { "docid": "f9ec31777fdd626033b31bba5565d0c0", "text": "finance economy general house would act regulate activities sovereign wealth SWFs should be welcomed for the benefits they bring rather than ostracized for doing what others do.\n\nDeveloped countries are guilty of a great deal of hypocrisy in their attitude to the sovereign wealth funds of emerging economies. 
In the past their own companies were used as instruments of state power, for example BP’s origins lie in Britain’s attempt to dominate Iran’s (at the time known as Persia) oil wealth. [1] The developed world is always willing to buy assets on the cheap, as shown by American banks buying up Asian banks during the Asian Financial crisis at the end of the 1990s. [2] Recently SWFs have proved willing to channel a great deal of investment into poorer states, particularly in Africa, their investments have already surpassed the IMF and World bank’s, [3] boosting their economies and assisting their long-term development through the provision of infrastructure such as roads and ports. This is a much more equal relationship than that promoted by the west, with its manipulation of aid and loans to maintain political influence in former colonies.\n\n[1] BP, ‘Our history’. http://www.bp.com/extendedsectiongenericarticle.do?categoryId=10&amp;content...\n\n[2] The Economist, ‘The rise of state capitalism’, 2008. http://www.economist.com/node/12080735\n\n[3] Cilliers, Jakkie, ‘Africa and the future’. http://www.regjeringen.no/nb/dep/ud/kampanjer/refleks/innspill/afrika/ci...\n", "title": "" }, { "docid": "3a1ade19bdb449d74f4f075bdaba14d7", "text": "finance economy general house would act regulate activities sovereign wealth SWFs represent good economic management by countries with surpluses\n\nSovereign wealth funds are highly beneficial for states with large financial surpluses. Traditionally they have been run by resource-rich countries which wish to diversify their assets to smooth out the impact of fluctuations in commodity prices on their economies and revenues. The fund can then be drawn down then prices are low. [1] Indeed 30 of 38 SWFs in 2008 were established for such a stabilization role. [2] By holding investments abroad, oil-rich countries such as Qatar and Norway have also built up valuable national reserves against the day when their fossil fuels eventually run out. Kiribati, a pacific island country, put aside wealth from mining guano from fertilizer. Now the guano is all mined but the $400million fund boosts the island’s GDP by a sixth. [3] In any case, allowing all the income from natural resources into your domestic economy is well known to lead to wasteful investments and higher inflation – better to manage the revenues responsibly by using them to create wealth for the future. More recently many Asian countries with big current account surpluses and massive government reserves have sought higher returns than they could get through more traditional investment in US Treasury bonds. Again, this is a responsible strategy pursued by states seeking to do their best for their citizens.\n\n[1] Ziemba, Rachel, ‘Where are the sovereign wealth funds?’, 2008, http://qn.som.yale.edu/content/where-are-sovereign-wealth-funds\n\n[2] Lipsky, John, ‘Sovereign Wealth Funds: Their Role and Significance’, 2008. http://www.imf.org/external/np/speeches/2008/090308.htm\n\n[3] The Economist, ‘Sovereign Wealth Funds Asset-backed insecurity’, 2008. http://www.economist.com/node/10533428\n", "title": "" }, { "docid": "421a1456d3843b4c0400b4145666f804", "text": "finance economy general house would act regulate activities sovereign wealth Restricting SWFs is protectionism\n\nRestricting the activities of sovereign wealth funds is a form of protectionism, which is itself likely to stimulate further demands for barriers against globalisation. 
Western countries oppose protectionism when it is from other countries preventing western companies investing so it would be hypocritical to want protectionism against those same countries buying the firms that want so much to invest in emerging markets. [1] It should be remembered that almost 40% of SWF assets are controlled by SWFs from advanced industrialised states. [2] As a result SWF investments abroad contribute to greater economic openness around the world. By exposing emerging economies and authoritarian states to developed world standards of transparency, meritocracy and corporate social responsibility, they will help to spread liberal values and raise standards. They will also give many more nations a stake in international prosperity through trade, encouraging cooperation rather than confrontation in foreign policy, and giving a boost to liberalising trade deals at the WTO. Finally as with all protectionism there is the risk that the SWFs will pull out their wealth and not invest as a result of protectionism resulting in lost jobs or jobs that would otherwise be created going somewhere more hospitable to SWFs. [3]\n\n[1] The Economist, ‘The rise of state capitalism’, 2008. http://www.economist.com/node/12080735\n\n[2] Drezner, Daniel W., ‘BRIC by BRIC: The emergent regime for sovereign wealth funds’, 2008, p.5. http://danieldrezner.com/research/swf1.pdf\n\n[3] Ibid, p10\n", "title": "" } ]
arguana
463133dbf645183af65cdd1bdf98c576
Restricting SWFs is protectionism Restricting the activities of sovereign wealth funds is a form of protectionism, which is itself likely to stimulate further demands for barriers against globalisation. Western countries oppose protectionism when other countries prevent western companies from investing, so it would be hypocritical to demand protectionism against those same countries when they buy the firms that are so eager to invest in emerging markets. [1] It should be remembered that almost 40% of SWF assets are controlled by SWFs from advanced industrialised states. [2] As a result, SWF investments abroad contribute to greater economic openness around the world. By exposing emerging economies and authoritarian states to developed world standards of transparency, meritocracy and corporate social responsibility, they will help to spread liberal values and raise standards. They will also give many more nations a stake in international prosperity through trade, encouraging cooperation rather than confrontation in foreign policy, and giving a boost to liberalising trade deals at the WTO. Finally, as with all protectionism, there is the risk that SWFs will pull out their wealth and stop investing, resulting in lost jobs, or in jobs that would otherwise have been created going somewhere more hospitable to SWFs. [3] [1] The Economist, ‘The rise of state capitalism’, 2008. http://www.economist.com/node/12080735 [2] Drezner, Daniel W., ‘BRIC by BRIC: The emergent regime for sovereign wealth funds’, 2008, p.5. http://danieldrezner.com/research/swf1.pdf [3] Ibid, p10
[ { "docid": "4cb04f7c692a5e8755764b9894acf5e9", "text": "finance economy general house would act regulate activities sovereign wealth Fears about the unrestrained influence of sovereign wealth funds will likely stimulate wider protectionism anyway if effective regulation is not introduced. Protectionist politicians may exploit fears of foreigners to restrict any kind of foreign investment, and seek to build up national champions as a defensive measure. This risks losing all the economic benefits of globalisation, such as opportunities to unwind financial imbalances and to spread expertise, while directing capital to areas where it can have the greatest impact. Better to regulate SWFs now for fear of a greater backlash later.\n", "title": "" } ]
[ { "docid": "430640af234dae7518a347cb058d4010", "text": "finance economy general house would act regulate activities sovereign wealth In many cases sovereign wealth funds are not even good for the states that own them. Almost all are emerging economies with limited financial expertise available to them, and they are not equipped to invest the money wisely. This has led to SWFs paying inflated prices for dodgy western companies, whose share price has subsequently collapsed, resulting in the loss of billions of dollars of national wealth for example China Investment Corporation lost $500million on Blackstone, Qatar Investment Authority may have lost as much as $2billion in its attempt to buy Sainsburys. [1] Surely it would be better to invest the money at home, or even return it to their people in the form of lower taxes.\n\n[1] The Economist, ‘The rise of state capitalism’, 2008. http://www.economist.com/node/12080735\n", "title": "" }, { "docid": "35f453517c3a2546743d3a80c0e1fd93", "text": "finance economy general house would act regulate activities sovereign wealth The amounts sovereign wealth funds invest in the poorest countries is tiny compared to their overall portfolio. In 2008 the head of the World Bank Robert Zollick was attempting to persuade sovereign wealth funds to invest just 1% of their assets in Africa. [1] Investment by SWFs in Africa is not all good. Sovereign wealth funds are guilty of bad behaviour in the developing world. Some government-backed firms from China and the Arab world (not all of the SWFs) have provided capital to maintain some of Africa’s worst rulers in power, in exchange for the opportunity to gain access to the natural resources of their misruled states. Sudan for example has sold 400,000 hectares to the United Arab Emirates. [2] This has allowed dictators to ignore the conditions (e.g. for political freedoms and economic reforms) attached to funding offered by western aid donors and international institutions such as the World Bank. It also contrasts sharply with the behaviour of western companies, who are led to act more responsibly by pressures from their own governments, investors and media.\n\n[1] Stilwell Amy, and Chopra, Geetanjali S., ‘Sovereign Wealth Funds Should Invest in Africa, Zoellick Says’, 2008. http://web.worldbank.org/WBSITE/EXTERNAL/NEWS/0,,contentMDK:21711325~pag...\n\n[2] The Economist, ‘Buying farmland abroad, Outsourcing’s third wave’, 2009. http://www.economist.com/node/13692889\n", "title": "" }, { "docid": "e5097649298ef2ceea0095a824edd295", "text": "finance economy general house would act regulate activities sovereign wealth Sovereign Wealth Funds could potentially help the financial system but they will only do so if it is in the national interest of their country to do so. It is this political dimension that is the reason for more regulation. Moreover regulation of SWFs will not prevent these funds from helping the global financial system. They will still be free to invest. Moreover it does not reduce the incentives for them to do so either, regulation will make no difference to a state’s motivations in a time of crisis – the national interest will remain key.\n", "title": "" }, { "docid": "0488a719a8237d73d47e9c9256684f8c", "text": "finance economy general house would act regulate activities sovereign wealth Fears about national security are greatly overblown, and are often simply an attempt to justify protectionist measures. 
Very few companies pose a national security risk, and those that do are covered by existing regulations – so that, for example, the USA could veto Dubai Port World’s bid to take over American ports. Most SWFs do not seek full control of companies they invest in, so they are not in a position to manipulate their assets for political gain, even if they wished to. [1] In reality, countries set up SWFs for economic reasons and they represent a major national investment, the value of which would be expensively destroyed if they once tried to abuse their position. Nor are there any actual examples of a country trying to exert political influence through its sovereign wealth fund. Overall, tying a wide variety of states into the international economic and financial system is beneficial, as it gives them a stake in the peace which the global economy needs for prosperity and so makes them less likely to pursue aggressive foreign policies. Conversely, alienating the governments of other states by designating them as dangerous predators who cannot be allowed to invest in our companies is a sure way to create enemies.\n\n[1] Rose, Paul, ‘Sovereign Wealth Funds: Active or Passive Investors?’, 2008. http://thepocketpart.org/2008/11/24/rose.html\n", "title": "" }, { "docid": "b3e96a4230e2a39a64efb4c914f77063", "text": "finance economy general house would act regulate activities sovereign wealth Regulations already exist to prevent foreign investments that might compromise national security. [1] Other than this it would be unfair to discriminate against certain classes of investors. Wealth-creating capitalism relies upon investors seeking to maximise the value of their investments. Without voting rights or the possibility of exercising majority control of a company, SWFs would be unable to ensure that managers were working hard on their behalf, allocating resources efficiently and being held accountable for their decisions.\n\n[1] Gibson, Ronald J., and Milhaupt, Curtis J., ‘Sovereign Wealth Funds and Corporate Governance: A Minimal Solution to the New Mercantilism’, 2009. http://legalworkshop.org/2009/07/19/sovereign-wealth-funds-and-corporate...\n", "title": "" }, { "docid": "cf76c15b1984da5565496d9f7fbf46fe", "text": "finance economy general house would act regulate activities sovereign wealth While it may be true that the state is often a bad manager of assets and businesses in this case the state is not usually involved in the management of the assets. This is being done through the wealth fund which is often in large part run by people whose background is in finance rather than in government. This use of external independent asset managers in itself should be enough to ease worries over state control. [1] Because SWFs don’t seek to have control over the majority of the businesses they invest in discredited government economic planning is not an issue. [2] Indeed SWFs are operating much more like private companies than state owned enterprises.\n\n[1] Mezzacapo, Simone, ‘The so-called “Sovereign Wealth Funds”: regulatory issues, financial stability and prudential supervision’, 2009, p.46. http://ec.europa.eu/economy_finance/publications/publication15064_en.pdf\n\n[2] Rose, Paul, ‘Sovereign Wealth Funds: Active or Passive Investors?’, 2008. 
http://thepocketpart.org/2008/11/24/rose.html\n", "title": "" }, { "docid": "e1ae0626c6910f4dedf8f7862acc4156", "text": "finance economy general house would act regulate activities sovereign wealth Transparency is a good thing, but it would be unfair to single out sovereign wealth funds for special punishment over this issue. Hedge funds and private equity groups are even less transparent than SWFs, and their influence in the global economy is much greater. [1] Some countries (e.g. Norway) already operate very transparent investment strategies. Many have agreed to the Santiago Principles which encourage transparency and disclosure of financial information. [2] It is likely that other countries will come over time to follow their lead voluntarily, as it is in the interest of their own citizens to see that the state is managing their money in an efficiently and honestly.\n\n[1] Avendaño, Rolando, and Santiso, Javier, ‘Are Sovereign Wealth Funds’ Investments Politically Biased? A Comparison with Mutual Funds’, 2009, p.9. http://www.oecd.org/dataoecd/43/0/44301172.pdf\n\n[2] Ibid\n", "title": "" }, { "docid": "c6b6f286ac80d33ef8c0ebee47f356f8", "text": "finance economy general house would act regulate activities sovereign wealth Sovereign wealth funds are not new and they are still only a tiny part of the global financial system. They represent only about 2% of global traded securities, and are dwarfed by other financial actors such as mutual funds, or private equity groups and hedge funds. [1] What is more, in comparison with these other players in the global financial system, SWFs are long-term investors looking many years, even decades into the future. This means that they are likely to bring calm, rather than irrational volatility to markets, as they will not be rushed into dumping assets based on a few months of bad data.\n\n[1] The Economist, ‘Sovereign Wealth Funds Asset-backed insecurity’, 2008, http://www.economist.com/node/10533428\n", "title": "" }, { "docid": "769b31e82ab7595744e377c73136c64f", "text": "finance economy general house would act regulate activities sovereign wealth SWFs can help the financial system in times of trouble\n\nSovereign wealth funds should be credited with coming to the rescue of the global financial system during the turmoil of 2008. With their long-term horizons for a return on their investments they have been willing to provide billions of dollars in new capital to distressed companies, at a time when other sources of funding have headed for the door. [1] Their money has allowed firms to continue trading and so safeguarded jobs at a time of great uncertainty. It has also helped prevent complete collapse of global equities prices, on which many people, through their pension funds, depend for a secure future. Moreover unlike some other types of funds such as hedge funds SWFs have an interest in keeping the global economy stable and reducing the impact of any downturns as their own country is bound to be affected by global economic conditions so responsible investment practices are encouraged. SWFs therefore “can play a shock-absorbing role in global financial markets”. [2]\n\n[1] Beck, Roland, and Fidora, Michael, ‘Sovereign Wealth Funds – Before and Since the Crisis’, 2009, p.363. http://journals.cambridge.org/action/displayFulltext?type=1&amp;fid=6245144&amp;...\n\n[2] Lipsky, John, ‘Sovereign Wealth Funds: Their Role and Significance’, 2008. 
http://www.imf.org/external/np/speeches/2008/090308.htm\n", "title": "" }, { "docid": "f9ec31777fdd626033b31bba5565d0c0", "text": "finance economy general house would act regulate activities sovereign wealth SWFs should be welcomed for the benefits they bring rather than ostracized for doing what others do.\n\nDeveloped countries are guilty of a great deal of hypocrisy in their attitude to the sovereign wealth funds of emerging economies. In the past their own companies were used as instruments of state power, for example BP’s origins lie in Britain’s attempt to dominate Iran’s (at the time known as Persia) oil wealth. [1] The developed world is always willing to buy assets on the cheap, as shown by American banks buying up Asian banks during the Asian Financial crisis at the end of the 1990s. [2] Recently SWFs have proved willing to channel a great deal of investment into poorer states, particularly in Africa, their investments have already surpassed the IMF and World bank’s, [3] boosting their economies and assisting their long-term development through the provision of infrastructure such as roads and ports. This is a much more equal relationship than that promoted by the west, with its manipulation of aid and loans to maintain political influence in former colonies.\n\n[1] BP, ‘Our history’. http://www.bp.com/extendedsectiongenericarticle.do?categoryId=10&amp;content...\n\n[2] The Economist, ‘The rise of state capitalism’, 2008. http://www.economist.com/node/12080735\n\n[3] Cilliers, Jakkie, ‘Africa and the future’. http://www.regjeringen.no/nb/dep/ud/kampanjer/refleks/innspill/afrika/ci...\n", "title": "" }, { "docid": "3a1ade19bdb449d74f4f075bdaba14d7", "text": "finance economy general house would act regulate activities sovereign wealth SWFs represent good economic management by countries with surpluses\n\nSovereign wealth funds are highly beneficial for states with large financial surpluses. Traditionally they have been run by resource-rich countries which wish to diversify their assets to smooth out the impact of fluctuations in commodity prices on their economies and revenues. The fund can then be drawn down then prices are low. [1] Indeed 30 of 38 SWFs in 2008 were established for such a stabilization role. [2] By holding investments abroad, oil-rich countries such as Qatar and Norway have also built up valuable national reserves against the day when their fossil fuels eventually run out. Kiribati, a pacific island country, put aside wealth from mining guano from fertilizer. Now the guano is all mined but the $400million fund boosts the island’s GDP by a sixth. [3] In any case, allowing all the income from natural resources into your domestic economy is well known to lead to wasteful investments and higher inflation – better to manage the revenues responsibly by using them to create wealth for the future. More recently many Asian countries with big current account surpluses and massive government reserves have sought higher returns than they could get through more traditional investment in US Treasury bonds. Again, this is a responsible strategy pursued by states seeking to do their best for their citizens.\n\n[1] Ziemba, Rachel, ‘Where are the sovereign wealth funds?’, 2008, http://qn.som.yale.edu/content/where-are-sovereign-wealth-funds\n\n[2] Lipsky, John, ‘Sovereign Wealth Funds: Their Role and Significance’, 2008. http://www.imf.org/external/np/speeches/2008/090308.htm\n\n[3] The Economist, ‘Sovereign Wealth Funds Asset-backed insecurity’, 2008. 
http://www.economist.com/node/10533428\n", "title": "" }, { "docid": "43a96ceefb177dcd1f984e37766d8ec0", "text": "finance economy general house would act regulate activities sovereign wealth SWFs can harm national security\n\nSovereign wealth funds raise worrying issues about national security. Unlike mutual funds or private equity groups, which seek only to maximise their investors’ returns, SWFs must be regarded as political entities. Rather than passively holding their assets, they may seek to use their purchases to gain access to natural resources, advanced technologies, including those crucial to our defence, or other strategic sectors. [1] For example Gulf states are using their SWFs to invest in food and natural resources from Latin America. [2] They may engage in economic nationalism, shutting factories in western countries to give an unfair advantage to their own industries [3] . While it has not yet happened they may even attempt economic blackmail, threatening to turn off the lights through their control of energy companies and utilities if governments do not fall in with their foreign policy aims. Allowing countries such as China, Russia and various Gulf states to buy up western companies at will is potentially very dangerous. Even if we regard these states as friendly at the moment, there is no guarantee that they will stay that way, especially as none of them share our political values.\n\n[1] Lyons, Gerard, ‘State Capitalism: The rise of sovereign wealth funds’, 2007, p.14 http://banking.senate.gov/public/_files/111407_Lyons.pdf\n\n[2] Pearson, Samantha, ‘Sovereign wealth funds: Foreign cash has its drawbacks’, 2011, http://www.ft.com/cms/s/0/e5e4f274-6ef5-11e0-a13b-00144feabdc0.html#axzz...\n\n[3] Balin, Bryan J., ‘Sovereign Wealth Funds: A Critical Analysis’, 2008, p.4, http://www.policyarchive.org/handle/10207/bitstreams/11501.pdf\n", "title": "" }, { "docid": "4f79a4a7e6f07252ac4c8b996f3e36d2", "text": "finance economy general house would act regulate activities sovereign wealth Sovereign wealth funds must be regulated\n\nA number of possible models of regulation have been suggested for sovereign wealth funds. Some, such as Gilson and Milhaupt, have argued that state-owned investment vehicles that buy shares abroad should not be allowed voting rights in that stock. [1] Others would put a cap on SWF investments, so that they cannot take a stake of more than, say 20% in any business without government approval within the country the SWF is investing in [2] – meaning that they can only be passive investors. Both these proposals would ensure that they are unable to abuse a dominant position while still allowing countries to benefit from cross-border investment in a globalised economy. At the same time such rules would prevent any broader protectionist backlash so the Sovereign Wealth Funds themselves could welcome the regulation.\n\n[1] Gibson, Ronald J., and Milhaupt, Curtis J., ‘Sovereign Wealth Funds and Corporate Governance: A Minimal Solution to the New Mercantilism’, 2009. http://legalworkshop.org/2009/07/19/sovereign-wealth-funds-and-corporate...\n\n[2] Garten, Jeffrey, ‘We need rules for sovereign funds, 2007, http://www.ft.com/cms/s/0/0b5e0808-454a-11dc-82f5-0000779fd2ac.html#axzz...\n", "title": "" }, { "docid": "dbdaa1f47874cb0610e08c020ba40f26", "text": "finance economy general house would act regulate activities sovereign wealth Sovereign Wealth funds are not transparent\n\nSovereign wealth funds suffer from an almost total lack of transparency. 
Most countries maintain secrecy about the size of their funds and the extent of their holdings, their accountability to government, their investment strategies and their approach to risk management. Without knowing these things, it is impossible to gauge whether political or economic objectives will dominate the SWFs’ behaviour, or indeed whether they will make safe and responsible shareholders in any business – secrecy breeds corruption. For these reasons, Jeffrey Garten of Yale has argued that SWFs should be obliged to publish independently audited accounts twice a year. He has also pointed out that many countries operating SWFs protect their domestic economy from foreign competition and investment. We should demand reciprocity, so that countries seeking investments abroad must open up their own economies fully before they are allowed to hold significant assets elsewhere. [1]\n\n[1] Garten, Jeffrey, ‘We need rules for sovereign funds, 2007. http://www.ft.com/cms/s/0/0b5e0808-454a-11dc-82f5-0000779fd2ac.html#axzz...\n", "title": "" }, { "docid": "1b81af9c8eea9de3524c847b968be03a", "text": "finance economy general house would act regulate activities sovereign wealth Sovereign wealth funds can undermine economic independence\n\nSovereign wealth funds (SWFs) have become very important players in the global economy. The already exceed the assets controlled by hedge funds and will surpass the stock of global foreign exchange reserves. [1] They are now so big that their activities can shift markets, such as Norway’s Government Pension Fund did when short selling Iceland’s banks, leading to panic and instability when they sell assets suddenly. [2] Their purchases can mean that companies owned by other states can end up dominating the economies of smaller countries, undermining their own sovereignty and economic independence. It is also worrying that many SWFs are controlled by undemocratic states which have a questionable commitment to capitalism; should we allow such states to exercise so much power over our economies?\n\n[1] Lipsky, John, ‘Sovereign Wealth Funds: Their Role and Significance’, 2008, http://www.imf.org/external/np/speeches/2008/090308.htm\n\n[2] The Economist, ‘Sovereign Wealth Funds Asset-backed insecurity’, 2008, http://www.economist.com/node/10533428\n", "title": "" }, { "docid": "b8565738b0a6bd98f3f0f85547e16845", "text": "finance economy general house would act regulate activities sovereign wealth State ownership is not a good way of controlling funds\n\nThe ownership of important businesses by sovereign wealth funds runs counter to the economic policy pursued by almost every government over the past 25 years. In the 1970s many states owned nationalised industries as part of an attempt at socialist economic planning that has now been discredited. State ownership distorted incentives, interfered with management and produced decades of underinvestment, poor service to consumers, and national economic failure with the most extreme example being the Soviet Union itself. Since the 1980s countries everywhere have followed the example of Thatcher’s Britain and privatised their industries, freeing them to compete efficiently and to generate more wealth and jobs than they had ever done in state hands. Going back to state ownership of business is a dangerous backward step, especially as it is now foreign governments that are doing the nationalising.\n", "title": "" } ]
arguana
7f6299183384e4652388751026927f76
Technology has enabled Africa’s cultural industries to grow. Technology has enabled the development of entrepreneurial ideas for business, but also within Africa’s cultural industry. Access to video recording mobile phones, the internet, and televised publications has created a new culture of expression for African youths. Cultural industries are raising critical questions for politics, and empowering youth to tell their stories. Journalism has been mobilised by youths - as seen in initiatives such as African Slum Voices, which encourage youths to proactively raise their opinions and voices on issues occurring within their communities. Furthermore, the music and film industry in Africa has arisen as a result of access to new technologies at a lower cost. Two key components responsible for the growth of Nollywood (Nigeria’s Film Industry) include access to digital technology and entrepreneurship. Youths have become vital within Nollywood, as actors, producers and editors. Today Nollywood’s low-budget films have inspired the growth of regional film industries across Africa and contributed to its status as the third largest film industry. Nollywood’s revenue stands at around $200mn a year [1] . [1] See further readings: ABN, 2013.
[ { "docid": "51d8f3963d4ac51d78fca0311d54cbed", "text": "business international africa computers phones house believes new technologies Cultural industries don’t always provide a positive role. If entrepreneurial youths today are using technology to create films on witchcraft in the public sphere, what effect will this have on future generations? Growth cant just rely on creative industries as there needs to be money created to drive demand for these films, and any money that might be made by the creative industries are undermined by piracy. Without a solution small time films are hardly the most secure of jobs.\n", "title": "" } ]
[ { "docid": "16d25984d21c5ffdb9b2437268a5d0e6", "text": "business international africa computers phones house believes new technologies Currently 3 in 4 youths work informally or within vulnerable employment - working without a formal written contract (Work4Youth, 2013). Although technology may create new markets it will not change the type of employment youths engage in. The use of technology will mean a majority of youths will continue to work informally - without access to social security, a valuable pension scheme, and social protection in the event of a crisis. Self-employment and having the flexibility to connect to different markets provides a temporary fix and income. Stability and security is not provided for youths.\n", "title": "" }, { "docid": "fc554952adb8b6785391be52c59a00e3", "text": "business international africa computers phones house believes new technologies Despite programs distributing technology into schools does the availability of technology provides future benefits? Having a tablet does not ensure teachers are well-trained to assist and guide the children. Without proper oversight it might prove more of a distraction. Technology in schools might also mean students having technology substituted for teachers. With programs still being implemented, and results variable, the causality between technology, education, and the rise of well educated, motivated, youths remains precarious.\n", "title": "" }, { "docid": "4fafb682c30e4e041beb3c8f45b1ddbf", "text": "business international africa computers phones house believes new technologies Such platforms are known, and accessible, by a minority within Africa - limiting who benefits from the technology available. Rising entrepreneurs across Africa typically are able to access resources required and network their ideas, whilst a majority of youths remain out of the innovation loop.\n\nAs inequality disparities continue to increase in Africa, a similar trend is identifiable to youth technology and entrepreneurialism. Entrepreneurs rising in Africa show the future of a ‘young millionaire’s club’. They hold the right connections, access to credit and electricity, and time to apply to their business model. The millionaire entrepreneurs continue to create new technologies - not vice-versa.\n", "title": "" }, { "docid": "5be915f66646debec550c2463db42410", "text": "business international africa computers phones house believes new technologies Recent evidence by the World Bank indicates unemployment is not only due to the limited availability of jobs. A high proportion of youths have been identified as ‘idle’ - not in school, training, or work, and not actively seeking employment. Although variations are found, in 2009 only ~2% of male youths, aged 15-24, and ~1% of female youths, who were not in school or employment in Tanzania, were actively looking for work [1] . Without motivation technology will not make a difference.\n\n[1] WDR, 2013.\n", "title": "" }, { "docid": "35e1de149bc5cc95fdffd8847f81d3ac", "text": "business international africa computers phones house believes new technologies Credit is now becoming more accessible through technology. Mobile-banking schemes such as MPESA across East Africa and ZAAB in Somalia, use mobile phones to transfer money and payments. 
The mobile banking scheme is increasing the efficiency of borrowing money from social circles, enabling quick transactions to be carried out, and introducing users to a wealth of market opportunities.\n\nTechnology is integral to entrepreneurship.\n", "title": "" }, { "docid": "bfc20c71593c41edfab1f125f322b16e", "text": "business international africa computers phones house believes new technologies Several examples may be found on established partnerships between multinational technology firms and civil-society groups. Microsoft has become a key investor in South Africa to tackle youth unemployment.\n\nMicrosoft has established a Students to Business initiative in South Africa, aiming to build human capital and provide professional skills to students, thus assisting job opportunities. Multinational companies are investing in youths as they recognise the burden of high unemployment and the potential talents youth have. By providing young students with key skills and sharing knowledge, a new generation of technology developers, leaders, and entrepreneurs will arise.\n", "title": "" }, { "docid": "e4c0b8299bdf17cdb6415228f5d9430e", "text": "business international africa computers phones house believes new technologies Technology is enhancing security, not threatening it. Measures are being implemented to ensure cyber-security and further technology is creating new, local, initiatives for security on the ground. Ushahidi Crowdmapping - an interactive, collective, mapping tool - was used to expose, and remember, political violence that occurred in Kenya’s 2007 presidential election [1] .\n\n[1] See further readings: Ushahidi, 2013.\n", "title": "" }, { "docid": "bc6193fdf492c72ebb197ceddf893ea4", "text": "business international africa computers phones house believes new technologies The technological revolution across Africa is broad, ranging from mobile technology to internet connectivity. The availability of mobiles has broadened who can use technology - being more inclusive to multiple socio-economic groups.\n\nInternet.org [1] has been established to resolve issues, making connectivity affordable. The initiative, which involves a collaborative partnership between Facebook and technological organisations, has a vision of ensuring access to the internet for the two-thirds who remain unconnected. Connectivity is a fundamental necessity to living in our ‘knowledge economy’. Their mission has centred on three aspects: affordability, improving efficiency, and innovative partnerships to expand the number of people connected. Intervention has therefore focused on removing barriers to accessing information by connecting people.\n\nFurthermore in Kenya, mobile phones have been made accessible to a wider audience through the removal of the general sales tax in 2009.\n\n[1] See further readings: Internet.org, 2013.\n", "title": "" }, { "docid": "e6a5434f4fc0afd77827cbef0eb8bb9c", "text": "business international africa computers phones house believes new technologies Technology has driven youths to identify new markets\n\nA key technology for youths are mobile phones and devices. Across West and East Africa the possession of mobile phones has enabled citizens to network and form solutions to social problems. By 2015, there are expected to be 1 billion mobile cellular subscriptions in Sub-Saharan Africa (Sambira, 2013). This is the first African generation directly accessing high-technology, although uncertainty remains in the amount of youths having access to technology. 
Through mobile phones new business opportunities, and flows of money, are being created. Furthermore, mobile phones are providing innovative solutions to health care treatment, ensuring better health for future entrepreneurs and youths.\n\nSlimTrader is a positive example [1] . SlimTrader uses mobile phones to provide a range of vital services - from airplane and bus tickets to medicine. The innovative e-commerce provides a space to advertise skills, products, and opportunities - to, on the one hand, identify new consumer demands; and on another hand, create notices to exchange goods.\n\nMobile technology is making it faster, quicker, and simpler to tap into new markets [2] .\n\n[1] See further readings: SlimTrader, 2013; Ummeli, 2013.\n\n[2] See further readings: Nsehe, 2013. Inspite of challenges Patrick Ngowi has earned millions through the construction of Helvetic Solar Contractors.\n", "title": "" }, { "docid": "afc3687df781559d5d0e414a315388f9", "text": "business international africa computers phones house believes new technologies Technology is building a platform for sharing ideas.\n\nEntrepreneurialism can be encouraged through an awareness, and sharing, of new ideas. The technological revolution has provided a platform for personal expression, delivery of up-to-date news, and the vital sharing of local ideas and thoughts. In Nigeria the Co-Creation Hub has emerged, encouraging an entrepreneurial spirit. Further, Umuntu and Mimiboards’ are connecting individual communities to the web by encouraging local content creation [1] .\n\nSuch platforms are enabling the transfer of knowledge and innovative ideas. Innovative solutions are being introduced to routine problems, such as ‘Mafuta Go’ an app to find the best price for petrol (Christine Ampaire).\n\n[1] See further readings: Co-Creation Hub Nigeria, 2013\n", "title": "" }, { "docid": "7bffd2f7829e44e991dc181b5826ec02", "text": "business international africa computers phones house believes new technologies Technology will lead job growth for youths.\n\nThe rate of unemployment in Sub-Saharan Africa remains above the global average, at 7.55% in 2011, with 77% of the population in vulnerable employment [1] . Economic growth has not been inclusive and jobs are scarce. In particular, rates of youth unemployment, and underemployment, remain a concern [2] . On average, the underutilisation of youths in the labour market across Sub-Saharan Africa stood at 67% in 2012 (Work4Youth, 2013). Therefore 67% of youths are either unemployed, inactive, or in irregular employment. The rate of unemployment varies geographically and across gender [3] .\n\nThere remains a high percentage of youths within informal employment. Technology can introduce a new dynamic within the job market and access to safer employment.\n\nSecure, high quality jobs, and more jobs, are essential for youths. Access to technology is the only way to meet such demands. Technology will enable youths to create new employment opportunities and markets; but also employment through managing, and selling, the technology available.\n\n[1] ILO, 2013.\n\n[2] Definitions: Unemployment is defined as the amount of people who are out of work despite being available, and seeking, work. Underemployment defines a situation whereby the productive capacity of an employed person is underutilised. 
Informal employment defines individuals working in waged and/or self employment informally (see further readings).\n\n[3] Work4Youth (2013) show, on average, Madagascar has the lowest rate of unemployment (2.2%) while Tanzania has the highest (42%); and the average rate of female unemployment stands higher at 25.3%, in contrast to men (20.2%).\n", "title": "" }, { "docid": "042df77b109de4fc2b67df3b4c7d802a", "text": "business international africa computers phones house believes new technologies Changing education systems and democracy.\n\nTechnology has enabled access to e-books and resources for students and teachers [1] . Such changes have enabled improved efficiency in teaching, with the availability of up-to-date resources and awareness of relevant theories. Furthermore, the ease by which students are able to access multiple resources and buy books online is expanding their intellectual curiosity and library.\n\nIn addition to raising new students, technology can be seen as a tool for democracy. Technology provides a tool for government accountability, transparency in information, and for good governance. Organisations, such as Ushahidi (Crowdmapping) following Kenya’s 2007 post-election violence; and mySociety which updates citizens on parliamentary proceedings in South Africa, show how technology is feeding democratisation for youths [2] .\n\n[1] See further readings: Turcano, 2013.\n\n[2] See further readings: Treisman, 2013; Usahidi, 2013.\n", "title": "" }, { "docid": "6ed98b6d1a2f7ccf3c8fade6fa1eb8b2", "text": "business international africa computers phones house believes new technologies The technological revolution has been hyped.\n\nDebates may be raised as to whether the technological revolution is actually a reality across Africa [1] . Have expectations been too high; the benefits exclusive; and the reality over-exaggerated?\n\nOn the one hand, the type of technology raises significant questions. Although the population with access to a mobile phone has risen, the quality of the phones indicates a hyped-reality. Although technology has become easily accessible, the quality of such technologies puts constraints on what it can be used for. A vast majority of mobile phones are imported from China - at low-cost but also poor quality. Quality testing on imports, and locally produced products, is needed to approve market devices.\n\nOn another hand, the reality of internet connectivity is not high-speed, and therefore of limited use. Better connectivity emerges in certain geographical locations, to those who can afford higher prices, and within temporary fluxes.\n\n[1] See further readings: BBC World Service, 2013.\n", "title": "" }, { "docid": "cd7e666ee16269caed5809c0c36129f5", "text": "business international africa computers phones house believes new technologies Technology has only benefited private companies.\n\nUltimately, technology, its provision, distribution, and function, is based on a business model. Profits are sought and losers emerge. The technology hype has attracted global technology giants, ranging from IBM to Google – a key issue as to whether entrepreneurialism can emerge amongst youths and technology used sustainably. The monopolisation of technology markets by multinational companies puts constraints on the ability for small businesses to break through. 
Any profits created are not recirculated in their locality, or Africa, but return to the country of origin.\n\nFor entrepreneurialism to be gained, and youth jobs emerge, the technological giants investing in Africa’s rising future need to partner with communities and small businesses.\n", "title": "" }, { "docid": "095fe449a954ab3db540c757252f18f8", "text": "business international africa computers phones house believes new technologies Technology will not result in entrepreneurialism without providing a foundational basis.\n\nThe key constraint for entrepreneurship is the lack of access to finance, credit, and basic infrastructure - whether a computer or technical skills on how to use different systems. Limited accessibility acts as an obstacle to entrepreneurialism.\n\nIn order to encourage an inclusive capability for youths to get involved in entrepreneurial ideas, technology training and equal start-up credit is required. Furthermore, dangers arise where credit has become easily accessible - putting individuals at risk of debt where a lack of protection and payment planning is provided.\n\nKenya’s Uwezo Fund provides a positive example, whereby action has been taken to provide youths with safe credit. The government collaboration is calling for youths to apply for grants and loans in a bid to encourage entrepreneurial activity for all. Loans are interest-free.\n", "title": "" }, { "docid": "63f46ed807650bba7e976445907d27d7", "text": "business international africa computers phones house believes new technologies Technology remains insecure and a security risk.\n\nThe internet remains at risk. Cybersecurity is a key concern, and the prevalence of hacking events across Africa identifies the need to promote security for the new digital users. Cyber-crime costs the Kenyan government around Ksh.2 billion (Mutegi, 2013); and affects around 70% of South Africans. In order to encourage more users in technology their safety, against fraud, hacking, and identity theft, needs to be prioritised. Without security technology can’t help entrepreneurs as customer details, business plans etc can’t be kept private.\n", "title": "" } ]
arguana
13595b6a17ba33be68e4346587cb2e6f
A dam would damage the environment Dams are usually seen as environmentally friendly because they generate renewable electricity, but such mega projects are rarely without consequences. The Grand Inga would lower the oxygen content of the lower course of the river, which would mean a loss of species. This would not only affect the river, as the Congo’s delta is a submerged area of 300,000km2 extending far out into the Atlantic. This system is not yet understood, but the plume transmits sediment and organic matter into the Atlantic Ocean, encouraging plankton offshore and contributing to the Atlantic’s ability to act as a carbon sink. [1] [1] Showers, Kate, ‘Will Africa’s Mega Dam Have Mega Impacts?’, International Rivers, 5 March 2012, http://www.internationalrivers.org/resources/grand-inga-will-africa%E2%80%99s-mega-dam-have-mega-impacts-1631
[ { "docid": "01894b08f48c8360b8addc0be44d222e", "text": "economic policy environment climate energy water international africa house would Hydroelectric power is clean so would be beneficial in the fight against global warming. Providing such power would reduce the need to other forms of electricity and would help end the problem of cooking fires which not only damage the environment but cause 1.9million lives to be lost globally every year as a result of smoke inhalation. [1] Because the dam will be ‘run of the river’ there won’t be many of the usual problems associated with dams; fish will still be able to move up and down the river and much of the sediment will still be transported over the rapids.\n\n[1] Bunting, Madeleine, ‘How Hillary Clinton’s clean stoves will help African women’, theguardian.com, 21 September 2010, http://www.theguardian.com/commentisfree/cifamerica/2010/sep/21/hillary-clinton-clean-stove-initiative-africa\n", "title": "" } ]
[ { "docid": "65a6b3dcaf3b57f64f0c192ba8af7277", "text": "economic policy environment climate energy water international africa house would The World Bank would be taking a lead role in the project and it proclaims “The World Bank has a zero-tolerance policy on corruption, and we have some of the toughest fiduciary standards of any development agency, including a 24/7 fraud and corruption hotline with appropriate whistle-blower protection.” All documentation would be in the public domain and online so ensuring complete transparency. [1]\n\n[1] Maake, Moyagabo, ‘Concern over SA’s billions in DRC Inga project’, Business Day Live, 24 March 2013, http://www.bdlive.co.za/business/energy/2013/03/24/concern-over-sas-billions-in-drc-inga-project\n", "title": "" }, { "docid": "4f10c5b9b32259715f05e6ca41328317", "text": "economic policy environment climate energy water international africa house would The difficulty of constructing something should not be considered a good argument not to do it. As one of the poorest countries in the world construction will surely have significant support from developed donors and international institutions. Moreover with the energy cooperation treaty between DRC and South Africa there is a guaranteed partner to help in financing and eventually buying the electricity.\n", "title": "" }, { "docid": "b50496b3032931071c15e29a65d4a48f", "text": "economic policy environment climate energy water international africa house would Yes they are. Big international donors like the World Bank who are supporting the project will ensure that there is compensation for those displaced and that they get good accommodation. In a budget of up to $80billion the cost of compensation and relocation is tiny.\n", "title": "" }, { "docid": "82f9b868ec8ac80d2a911bdf93e491f9", "text": "economic policy environment climate energy water international africa house would While it is clear that such an immense project will have an impact we have little idea what that impact might be. Will the builders be local? Will the suppliers be local? It is likely that the benefit will go elsewhere just as the electricity will go to South Africa rather than providing electricity to the poverty stricken Congolese. [1]\n\n[1] Palitza, Kristin, ‘$80bn Grand Inga hydropower dam to lock out Africa’s poor’, Africa Review, 16 November 2011, www.africareview.com/Business---Finance/80-billion-dollar-Grand-Inga-dam-to-lock-out-Africa-poor/-/979184/1274126/-/kkicv7/-/index.html\n", "title": "" }, { "docid": "52806ddf0c998a8db33aa889f2c8fcf1", "text": "economic policy environment climate energy water international africa house would There is currently not enough traffic to justify such a large addition to the project. If it were worthwhile then it could be done without the need for building an immense dam.\n", "title": "" }, { "docid": "e06316d11d65e603ff65f44673b56a09", "text": "economic policy environment climate energy water international africa house would In the short to medium term during the decades the dam is being built investment will surely be concentrated in one place in this vast country; in the west where the dam is, not the east where the conflicts are. Later there is little guarantee that the government will spend the proceeds wisely to develop the country rather than it disappearing through corruption. And this assumes the money flows in from the export of electricity. 
To enable such exports 3000km of high voltage cable will need to be laid which would be vulnerable to being cut by rebel groups seeking to hurt the government through its wallet. [1]\n\n[1] ‘Explained: The $80 billion Grand Inga Hydropower Project’, ujuh, 21 November 2013, http://www.ujuh.co.za/explained-the-80-billion-grand-inga-hydropower-project/\n", "title": "" }, { "docid": "b536f3c65529f385375788226320e1c1", "text": "economic policy environment climate energy water international africa house would It is not the best solution to Africa’s energy crisis. According to a report by the International Energy Agency as an immense dam requires a power grid. Such a grid does not exist and building such a grid is “not proving to be cost effective in more remote rural areas”. In such low density areas local sources of power are best. [1] DRC is only 34% urban and has a population density of only 30 people per km2 [2] so the best option would be local renewable power.\n\n[1] International Energy Agency, ‘Energy for All Financing access for the poor’, World Energy Outlook, 2011, http://www.worldenergyoutlook.org/media/weowebsite/energydevelopment/weo2011_energy_for_all.pdf p.21\n\n[2] Central Intelligence Agency, ‘Congo, Democratic Republic of the’, The World Factbook, 12 November 2013, https://www.cia.gov/library/publications/the-world-factbook/geos/cg.html\n", "title": "" }, { "docid": "1951f38ff1197d3feb12479bc8109939", "text": "economic policy environment climate energy water international africa house would The cost is too high\n\nThe Grand Inga is ‘pie in the sky’ as the cost is too immense. At more than $50-100 billion it is more than twice the GDP of the whole country. [1] Even the much smaller Inga III project has been plagued by funding problems with Westcor pulling out of the project in 2009. [2] This much smaller project still does not have all the financial backing it needs having failed to get firm commitments of investment from anyone except the South Africans. [3] If private companies won’t take the risk on a much smaller project they won’t on the Grand Inga.\n\n[1] Central Intelligence Agency, ‘Congo, Democratic Republic of the’, The World Factbook, 12 November 2013, https://www.cia.gov/library/publications/the-world-factbook/geos/cg.html\n\n[2] ‘Westcor Drops Grand Inga III Project’, Alternative Energy Africa, 14 August 2009, http://ae-africa.com/read_article.php?NID=1246\n\n[3] ‘DRC still looking for Inga III funding’, ESI-Africa.com, 13 September 2013, http://www.esi-africa.com/drc-still-looking-for-inga-iii-funding/\n", "title": "" }, { "docid": "291e93cb51c9cba10b25386df67a71e7", "text": "economic policy environment climate energy water international africa house would Such a big project is beyond DRC’s capacity\n\nThe Grand Inga dam project is huge while it means huge potential benefits it just makes it more difficult for the country to manage. Transparency international ranks DRC as 160th out of 176 in terms of corruption [1] so it is no surprise that projects in the country are plagued by it. [2] Such a big project would inevitably mean billions siphoned off. Even if it is built will the DRC be able to maintain it? This seems unlikely. The Inga I and II dams only operate at half their potential due to silting up and a lack of maintenance. 
[3]\n\n[1] ‘Corruption Perceptions Index 2012’, Transparency International, 2012, http://cpi.transparency.org/cpi2012/results/\n\n[2] Bosshard, Peter, ‘Grand Inga -- The World Bank's Latest Silver Bullet for Africa’, Huffington Post, 21 April 2013, http://www.huffingtonpost.com/peter-bosshard/grand-inga-the-world-bank_b_3308223.html\n\n[3] Vasagar, Jeevan, ‘Could a $50bn plan to tame this mighty river bring electricity to all of Africa?’, The Guardian, 25 February 2005, http://www.theguardian.com/world/2005/feb/25/congo.jeevanvasagar\n", "title": "" }, { "docid": "a30be34bb3548a8df3b92828ee73820f", "text": "economic policy environment climate energy water international africa house would Dams displace communities\n\nDams result in the filling of a large reservoir behind the dam because it has raised the level of the water in the case of the Grand Inga it would create a reservoir 15km long. This is not particularly big but the construction would also displace communities. The previous Inga dams also displaced people. Inga I and II were built 30 and 40 years ago, yet the displaced are still in a shabby prefabricated town called Camp Kinshasa awaiting compensation. [1] Are they likely to do better this time around?\n\n[1] Sanyanga, Ruto, ‘Will Congo Benefit from Grand Inga Dam’, International Policy Digest, 29 June 2013, http://www.internationalpolicydigest.org/2013/06/29/will-congo-benefit-from-grand-inga-dam/\n", "title": "" }, { "docid": "9543254f0a02b35552049e3cda32f1b8", "text": "economic policy environment climate energy water international africa house would An immense boost to DRC’s economy\n\nThe Grand Inga dam would be an immense boost to the DRC’s economy. It would mean a huge amount of investment coming into the country as almost all the $80 billion construction cost would be coming from outside the country which would mean thousands of workers employed and spending money in the DRC as well as boosting local suppliers. Once the project is complete the dam will provide cheap electricity so making industry more competitive and providing electricity to homes. Even the initial stages through Inga III are expected to provide electricity for 25,000 households in Kinshasa. [1]\n\n[1] ‘Movement on the Grand Inga Hydropower Project’, ujuh, 20 November 2013, http://www.ujuh.co.za/movement-on-the-grand-inga-hydropower-project/\n", "title": "" }, { "docid": "ccb931ec4f85436abb42791e19b3a274", "text": "economic policy environment climate energy water international africa house would Will enable the rebuilding of DRC\n\nDR Congo has been one of the most war ravaged countries in the world over the last two decades. The Grand Inga provides a project that can potentially benefit everyone in the country by providing cheap electricity and an economic boost. It will also provide large export earnings; to take an comparatively local example Ethiopia earns $1.5million per month exporting 60MW to Djibouti at 7 cents per KwH [1] comparable to prices in South Africa [2] so if Congo were to be exporting 500 times that (at 30,000 MW only 3/4ths of the capacity) it would be earning $9billion per year. This then will provide more money to invest and to ameliorate problems. 
The project can therefore be a project for the nation to rally around helping create and keep stability after the surrender of the rebel group M23 in October 2013.\n\n[1] Woldegebriel, E.G., ‘Ethiopia plans to power East Africa with hydro’, trust.org, 29 January 2013, http://www.trust.org/item/?map=ethiopia-seeks-to-power-east-africa-with-hydro\n\n[2] Burkhardt, Paul, ‘Eskom to Raise S. Africa Power Price 8% Annually for 5 Years’, Bloomberg, 28 February 2013, http://www.bloomberg.com/news/2013-02-28/south-africa-s-eskom-to-raise-power-prices-8-a-year-for-5-years.html\n", "title": "" }, { "docid": "c5a6d19e3b9760d98807ea4f1150450e", "text": "economic policy environment climate energy water international africa house would The dam would power Africa\n\nOnly 29% of Sub Saharan Africa’s population has access to electricity. [1] This has immense consequences not just for the economy as production and investment is constrained but also on society. The world bank says lack of electricity affects human rights “People cannot access modern hospital services without electricity, or feel relief from sweltering heat. Food cannot be refrigerated and businesses cannot function. Children cannot go to school… The list of deprivation goes on.” [2] Conveniently it is suggested that the “Grand Inga will thus provide more than half of the continent with renewable energy at a low price,” [3] providing electricity to half a billion people so eliminating much of this electricity gap. [4]\n\n[1] World Bank Energy, ‘Addressing the Electricity Access Gap’, World Bank, June 2010, http://siteresources.worldbank.org/EXTESC/Resources/Addressing_the_Electricity_Access_Gap.pdf p.89\n\n[2] The World Bank, ‘Energy – The Facts’, worldbank.org, 2013, http://go.worldbank.org/6ITD8WA1A0\n\n[3] SAinfo reporter, ‘SA-DRC pact paves way for Grand Inga’, SouthAfrica.info, 20 May 2013, http://www.southafrica.info/africa/grandinga-200513.htm#.UqGkNOImZI0\n\n[4] Pearce, Fred, ‘Will Huge New Hydro Projects Bring Power to Africa’s People?’, Yale Environment 360, 30 May 2013, http://e360.yale.edu/feature/will_huge_new_hydro_projects_bring_power_to_africas_people/2656/\n", "title": "" }, { "docid": "260e3f13a30f289b44a3ce422c450e81", "text": "economic policy environment climate energy water international africa house would A dam could make the Congo more usable\n\nWhile the Congo is mostly navigable it is only usable internally. The rapids cut the middle Congo off from the sea. The building of the dams could be combined with canalisation and locks to enable international goods to be easily transported to and from the interior. This would help integrate central Africa economically into the global economy making the region much more attractive for investment.\n", "title": "" } ]
arguana
4b68988916f16dd5c17bc730a34fbce8
Dams displace communities Dams result in the filling of a large reservoir behind the dam because they raise the level of the water; in the case of the Grand Inga it would create a reservoir 15km long. This is not particularly big, but the construction would also displace communities. The previous Inga dams also displaced people. Inga I and II were built 30 and 40 years ago, yet the displaced are still in a shabby prefabricated town called Camp Kinshasa awaiting compensation. [1] Are they likely to do better this time around? [1] Sanyanga, Ruto, ‘Will Congo Benefit from Grand Inga Dam’, International Policy Digest, 29 June 2013, http://www.internationalpolicydigest.org/2013/06/29/will-congo-benefit-from-grand-inga-dam/
[ { "docid": "b50496b3032931071c15e29a65d4a48f", "text": "economic policy environment climate energy water international africa house would Yes they are. Big international donors like the World Bank who are supporting the project will ensure that there is compensation for those displaced and that they get good accommodation. In a budget of up to $80billion the cost of compensation and relocation is tiny.\n", "title": "" } ]
[ { "docid": "65a6b3dcaf3b57f64f0c192ba8af7277", "text": "economic policy environment climate energy water international africa house would The World Bank would be taking a lead role in the project and it proclaims “The World Bank has a zero-tolerance policy on corruption, and we have some of the toughest fiduciary standards of any development agency, including a 24/7 fraud and corruption hotline with appropriate whistle-blower protection.” All documentation would be in the public domain and online so ensuring complete transparency. [1]\n\n[1] Maake, Moyagabo, ‘Concern over SA’s billions in DRC Inga project’, Business Day Live, 24 March 2013, http://www.bdlive.co.za/business/energy/2013/03/24/concern-over-sas-billions-in-drc-inga-project\n", "title": "" }, { "docid": "4f10c5b9b32259715f05e6ca41328317", "text": "economic policy environment climate energy water international africa house would The difficulty of constructing something should not be considered a good argument not to do it. As one of the poorest countries in the world construction will surely have significant support from developed donors and international institutions. Moreover with the energy cooperation treaty between DRC and South Africa there is a guaranteed partner to help in financing and eventually buying the electricity.\n", "title": "" }, { "docid": "01894b08f48c8360b8addc0be44d222e", "text": "economic policy environment climate energy water international africa house would Hydroelectric power is clean so would be beneficial in the fight against global warming. Providing such power would reduce the need to other forms of electricity and would help end the problem of cooking fires which not only damage the environment but cause 1.9million lives to be lost globally every year as a result of smoke inhalation. [1] Because the dam will be ‘run of the river’ there won’t be many of the usual problems associated with dams; fish will still be able to move up and down the river and much of the sediment will still be transported over the rapids.\n\n[1] Bunting, Madeleine, ‘How Hillary Clinton’s clean stoves will help African women’, theguardian.com, 21 September 2010, http://www.theguardian.com/commentisfree/cifamerica/2010/sep/21/hillary-clinton-clean-stove-initiative-africa\n", "title": "" }, { "docid": "82f9b868ec8ac80d2a911bdf93e491f9", "text": "economic policy environment climate energy water international africa house would While it is clear that such an immense project will have an impact we have little idea what that impact might be. Will the builders be local? Will the suppliers be local? It is likely that the benefit will go elsewhere just as the electricity will go to South Africa rather than providing electricity to the poverty stricken Congolese. [1]\n\n[1] Palitza, Kristin, ‘$80bn Grand Inga hydropower dam to lock out Africa’s poor’, Africa Review, 16 November 2011, www.africareview.com/Business---Finance/80-billion-dollar-Grand-Inga-dam-to-lock-out-Africa-poor/-/979184/1274126/-/kkicv7/-/index.html\n", "title": "" }, { "docid": "52806ddf0c998a8db33aa889f2c8fcf1", "text": "economic policy environment climate energy water international africa house would There is currently not enough traffic to justify such a large addition to the project. 
If it were worthwhile then it could be done without the need for building an immense dam.\n", "title": "" }, { "docid": "e06316d11d65e603ff65f44673b56a09", "text": "economic policy environment climate energy water international africa house would In the short to medium term during the decades the dam is being built investment will surely be concentrated in one place in this vast country; in the west where the dam is, not the east where the conflicts are. Later there is little guarantee that the government will spend the proceeds wisely to develop the country rather than it disappearing through corruption. And this assumes the money flows in from the export of electricity. To enable such exports 3000km of high voltage cable will need to be laid which would be vulnerable to being cut by rebel groups seeking to hurt the government through its wallet. [1]\n\n[1] ‘Explained: The $80 billion Grand Inga Hydropower Project’, ujuh, 21 November 2013, http://www.ujuh.co.za/explained-the-80-billion-grand-inga-hydropower-project/\n", "title": "" }, { "docid": "b536f3c65529f385375788226320e1c1", "text": "economic policy environment climate energy water international africa house would It is not the best solution to Africa’s energy crisis. According to a report by the International Energy Agency as an immense dam requires a power grid. Such a grid does not exist and building such a grid is “not proving to be cost effective in more remote rural areas”. In such low density areas local sources of power are best. [1] DRC is only 34% urban and has a population density of only 30 people per km2 [2] so the best option would be local renewable power.\n\n[1] International Energy Agency, ‘Energy for All Financing access for the poor’, World Energy Outlook, 2011, http://www.worldenergyoutlook.org/media/weowebsite/energydevelopment/weo2011_energy_for_all.pdf p.21\n\n[2] Central Intelligence Agency, ‘Congo, Democratic Republic of the’, The World Factbook, 12 November 2013, https://www.cia.gov/library/publications/the-world-factbook/geos/cg.html\n", "title": "" }, { "docid": "bf10251c62fec9989cc2d0ed4529abb5", "text": "economic policy environment climate energy water international africa house would A dam would damage the environment\n\nDams due to their generation of renewable electricity are usually seen as environmentally friendly but such mega projects are rarely without consequences. The Grand Inga would lower the oxygen content of the lower course of the river which would mean a loss of species. This would not only affect the river as the Congo’s delta is a submerged area of 300,000km2 far out into the Atlantic. This system is not yet understood but the plume transmits sediment and organic matter into the Atlantic ocean encouraging plankton offshore contributing to the Atlantic’s ability to be a carbon sink. [1]\n\n[1] Showers, Kate, ‘Will Africa’s Mega Dam Have Mega Impacts?’, International Rivers, 5 March 2012, http://www.internationalrivers.org/resources/grand-inga-will-africa%E2%80%99s-mega-dam-have-mega-impacts-1631\n", "title": "" }, { "docid": "1951f38ff1197d3feb12479bc8109939", "text": "economic policy environment climate energy water international africa house would The cost is too high\n\nThe Grand Inga is ‘pie in the sky’ as the cost is too immense. At more than $50-100 billion it is more than twice the GDP of the whole country. [1] Even the much smaller Inga III project has been plagued by funding problems with Westcor pulling out of the project in 2009. 
[2] This much smaller project still does not have all the financial backing it needs having failed to get firm commitments of investment from anyone except the South Africans. [3] If private companies won’t take the risk on a much smaller project they won’t on the Grand Inga.\n\n[1] Central Intelligence Agency, ‘Congo, Democratic Republic of the’, The World Factbook, 12 November 2013, https://www.cia.gov/library/publications/the-world-factbook/geos/cg.html\n\n[2] ‘Westcor Drops Grand Inga III Project’, Alternative Energy Africa, 14 August 2009, http://ae-africa.com/read_article.php?NID=1246\n\n[3] ‘DRC still looking for Inga III funding’, ESI-Africa.com, 13 September 2013, http://www.esi-africa.com/drc-still-looking-for-inga-iii-funding/\n", "title": "" }, { "docid": "291e93cb51c9cba10b25386df67a71e7", "text": "economic policy environment climate energy water international africa house would Such a big project is beyond DRC’s capacity\n\nThe Grand Inga dam project is huge while it means huge potential benefits it just makes it more difficult for the country to manage. Transparency international ranks DRC as 160th out of 176 in terms of corruption [1] so it is no surprise that projects in the country are plagued by it. [2] Such a big project would inevitably mean billions siphoned off. Even if it is built will the DRC be able to maintain it? This seems unlikely. The Inga I and II dams only operate at half their potential due to silting up and a lack of maintenance. [3]\n\n[1] ‘Corruption Perceptions Index 2012’, Transparency International, 2012, http://cpi.transparency.org/cpi2012/results/\n\n[2] Bosshard, Peter, ‘Grand Inga -- The World Bank's Latest Silver Bullet for Africa’, Huffington Post, 21 April 2013, http://www.huffingtonpost.com/peter-bosshard/grand-inga-the-world-bank_b_3308223.html\n\n[3] Vasagar, Jeevan, ‘Could a $50bn plan to tame this mighty river bring electricity to all of Africa?’, The Guardian, 25 February 2005, http://www.theguardian.com/world/2005/feb/25/congo.jeevanvasagar\n", "title": "" }, { "docid": "9543254f0a02b35552049e3cda32f1b8", "text": "economic policy environment climate energy water international africa house would An immense boost to DRC’s economy\n\nThe Grand Inga dam would be an immense boost to the DRC’s economy. It would mean a huge amount of investment coming into the country as almost all the $80 billion construction cost would be coming from outside the country which would mean thousands of workers employed and spending money in the DRC as well as boosting local suppliers. Once the project is complete the dam will provide cheap electricity so making industry more competitive and providing electricity to homes. Even the initial stages through Inga III are expected to provide electricity for 25,000 households in Kinshasa. [1]\n\n[1] ‘Movement on the Grand Inga Hydropower Project’, ujuh, 20 November 2013, http://www.ujuh.co.za/movement-on-the-grand-inga-hydropower-project/\n", "title": "" }, { "docid": "ccb931ec4f85436abb42791e19b3a274", "text": "economic policy environment climate energy water international africa house would Will enable the rebuilding of DRC\n\nDR Congo has been one of the most war ravaged countries in the world over the last two decades. The Grand Inga provides a project that can potentially benefit everyone in the country by providing cheap electricity and an economic boost. 
It will also provide large export earnings; to take an comparatively local example Ethiopia earns $1.5million per month exporting 60MW to Djibouti at 7 cents per KwH [1] comparable to prices in South Africa [2] so if Congo were to be exporting 500 times that (at 30,000 MW only 3/4ths of the capacity) it would be earning $9billion per year. This then will provide more money to invest and to ameliorate problems. The project can therefore be a project for the nation to rally around helping create and keep stability after the surrender of the rebel group M23 in October 2013.\n\n[1] Woldegebriel, E.G., ‘Ethiopia plans to power East Africa with hydro’, trust.org, 29 January 2013, http://www.trust.org/item/?map=ethiopia-seeks-to-power-east-africa-with-hydro\n\n[2] Burkhardt, Paul, ‘Eskom to Raise S. Africa Power Price 8% Annually for 5 Years’, Bloomberg, 28 February 2013, http://www.bloomberg.com/news/2013-02-28/south-africa-s-eskom-to-raise-power-prices-8-a-year-for-5-years.html\n", "title": "" }, { "docid": "c5a6d19e3b9760d98807ea4f1150450e", "text": "economic policy environment climate energy water international africa house would The dam would power Africa\n\nOnly 29% of Sub Saharan Africa’s population has access to electricity. [1] This has immense consequences not just for the economy as production and investment is constrained but also on society. The world bank says lack of electricity affects human rights “People cannot access modern hospital services without electricity, or feel relief from sweltering heat. Food cannot be refrigerated and businesses cannot function. Children cannot go to school… The list of deprivation goes on.” [2] Conveniently it is suggested that the “Grand Inga will thus provide more than half of the continent with renewable energy at a low price,” [3] providing electricity to half a billion people so eliminating much of this electricity gap. [4]\n\n[1] World Bank Energy, ‘Addressing the Electricity Access Gap’, World Bank, June 2010, http://siteresources.worldbank.org/EXTESC/Resources/Addressing_the_Electricity_Access_Gap.pdf p.89\n\n[2] The World Bank, ‘Energy – The Facts’, worldbank.org, 2013, http://go.worldbank.org/6ITD8WA1A0\n\n[3] SAinfo reporter, ‘SA-DRC pact paves way for Grand Inga’, SouthAfrica.info, 20 May 2013, http://www.southafrica.info/africa/grandinga-200513.htm#.UqGkNOImZI0\n\n[4] Pearce, Fred, ‘Will Huge New Hydro Projects Bring Power to Africa’s People?’, Yale Environment 360, 30 May 2013, http://e360.yale.edu/feature/will_huge_new_hydro_projects_bring_power_to_africas_people/2656/\n", "title": "" }, { "docid": "260e3f13a30f289b44a3ce422c450e81", "text": "economic policy environment climate energy water international africa house would A dam could make the Congo more usable\n\nWhile the Congo is mostly navigable it is only usable internally. The rapids cut the middle Congo off from the sea. The building of the dams could be combined with canalisation and locks to enable international goods to be easily transported to and from the interior. This would help integrate central Africa economically into the global economy making the region much more attractive for investment.\n", "title": "" } ]
arguana
c68af482a8fd3c6ec35f88291331b478
Universal healthcare stifles innovation Profits drive innovation. That’s the long and short of it. Medical care is no exception, although the situation is a bit more complicated in this case. The US’s current system has a marketplace of different private insurers capable of making individual and often different decisions on how and which procedures they’ll choose to cover. Their decisions are something that helps shape and drive new and different practices in hospitals. A simple example is one of virtual colonoscopies. Without getting into the nitty gritty, they often require follow-up procedures, yet are very popular with patients. Some insurers value the first, some the other, but none have the power to force the health care providers to choose one or the other. They’re free to decide for themselves, innovate with guidelines, even new procedures. Those are then communicated back to insurers, influencing them in turn and completing the cycle. What introducing single-payer universal health coverage would do is introduce a single overwhelming player into this field – the government. Since we have seen how the insurer can often shape the care, what such a monopoly does is open up the possibility of top-down mandates as to what this care should be. With talk of “comparative effectiveness research”, tasked with finding optimal cost-effective methods of treatment, the process has already begun. [1] [1] Wall Street Journal, How Washington Rations, published 5/19/2009, http://online.wsj.com/article/SB124268737705832167.html#mod=djemEditorialPage , accessed 9/18/2011
[ { "docid": "4c981ca98f87e5caeb2334214ae9e993", "text": "finance health healthcare politics house would introduce system universal Profits do drive innovation. But there is nothing out there that would make us believes that the profits stemming from the health care industry are going to taper off or even decrease in a universal coverage system. In short in a single-payer system, it’s just the government that’ll be picking up the tab and not the private companies. But the money will still be there.\n\nAn expert on the issue from the Brigham and Women’s Hospital opined that this lack of innovation crops up every time there is talk of a health care reform, usually from the pharmaceutical industry, and usually for reasons completely unrelated to the policy proposed. [1]\n\nWhereas the opposition fears new research into efficiency of medical practice and procedures, we, on the other hand, feel that’s exactly what the doctor ordered – and doctors do too. [2]\n\n[1] Klein, E., Will Health-Care Reform Save Medical Innovation?, published 8/3/2009, http://voices.washingtonpost.com/ezra-klein/2009/08/will_health-care_reform_save_m.html , accessed 9/18/2011\n\n[2] Brown, D., ‘Comparative effectiveness research’ tackles medicine’s unanswered questions, published 8/15/2011, http://www.washingtonpost.com/national/health-science/comparative-effectiveness-research-tackles-medicines-unanswered-questions/2011/08/01/gIQA7RJSHJ_story.html , accessed 9/18/2011\n", "title": "" } ]
[ { "docid": "4770996785d3c01d2a1e43f7627395f1", "text": "finance health healthcare politics house would introduce system universal It is not, in fact, universal health care itself, that’s inefficient, but specific adaptations of it. Often, even those shortcomings are so blown out of proportion that it’s very difficult to get the whole story.\n\nUniversal health care can come in many shapes and sizes, meant to fit all kinds of countries and societies. When judging them it’s often useful to turn to those societies for critiques of their coverage systems.\n\nDespite the horror stories about the British NHS, it costs 60% less per person than the current US system. Despite the haunting depictions of decades long waiting lists, Canadians with chronic conditions are much more satisfied with the treatment received than their US counterparts. [1]\n\nWe should not let hysterical reporting to divert us from the truth – universal health care makes a lot of economic, and, more importantly, moral sense.\n\n[1] Krugman, P., The Swiss Menace, published 8/16/2009, http://www.nytimes.com/2009/08/17/opinion/17krugman.html , accessed 9/18/2011\n", "title": "" }, { "docid": "055bc399dcd9007f3e9168ff16d987a3", "text": "finance health healthcare politics house would introduce system universal We need to analyze this issue from a couple of different perspectives.\n\nThe first is this trillion per decade cost. Is this truly a cost to the American economy? We think not, since this money will simply flow back into the economy, back into the hands of health care providers, insurance companies, etc. – back into the hands of taxpayers. So in this sense it is very much affordable.\n\nBut is this a productive enterprise? For the millions of people that at this very moment have absolutely no insurance and therefore very limited access to health care, the answer is very clear.\n\nIn addition, the reform will more or less pay for itself, not in a year, not even a decade – but as it stands now, it’s been designed to have a net worth of zero. [1]\n\nLastly, just because we live in a bad economic climate doesn’t mean we can simply abandon all sense of moral obligation. There are people suffering because of the current situation. No cost can offset that.\n\n[1] Johnson, S., Kwak, J., Can We Afford Health Care Reform? We Can't Afford Not to Do It., published 9/1/2009, http://www.washingtonpost.com/wp-dyn/content/article/2009/09/01/AR2009090101027.html , accessed 9/18/2011\n", "title": "" }, { "docid": "88ec346bd32076ffa7283caafdc9d616", "text": "finance health healthcare politics house would introduce system universal A range of health programs are already available. Many employers offer health insurance and some people deliberately choose to work for such companies for these benefits, even if the pay is a little lower. Other plans can be purchased by individuals with no need to rely on an employer. This means they are free to choose the level of care which is most appropriate to their needs. For other people it can be perfectly reasonable to decide to go without health insurance. Healthy younger adults will on average save money by choosing not to pay high insurance premiums, covering any necessary treatment out of their own pockets from time to time. 
Why should the state take away all these people’s freedom of choice by imposing a one-size-fits-all socialist system of health care?\n\nHuman resources professionals will still be needed to deal with the very many other employment regulations put in place by the federal government. Instead of employees being able to exercise control over their health care choices and work with people in their company, patients will be forced to deal with the nameless, faceless members of the government bureaucracy.\n", "title": "" }, { "docid": "b052376db966a04a4e270c72ad3457fc", "text": "finance health healthcare politics house would introduce system universal There are several reasons why health care should not be considered a universal human right.\n\nThe first issue is one of definition – how do we define the services that need to be rendered in order for them to qualify as adequate health care? Where do we draw the line? Emergency surgery, sure, but how about cosmetic surgery?\n\nThe second is that all human rights have a clear addressee, an entity that needs to protect this right. But who is targeted here? The government? What if we opt for a private yet universal health coverage – is this any less moral? Let’s forget the institutions for a second, should this moral duty of health care fall solely on the doctors perhaps? [1]\n\nIn essence, viewing health care as a right robs us of another, much more essential one – that of the right to one’s own life and one’s livelihood. If it is not considered a service to be rendered, than how could a doctor charge for it? She couldn’t! If it were a right, than each of us would own it, it would have to be inseparable from us. Yet, we don’t and we can’t. [2]\n\nWe can see that considering health care as a basic human right has profound philosophical problems, not the least of them the fact that it infringes on the rights of others.\n\n[1] Barlow, P., Health care is not a human right, published 7/31/1999, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1126951/ , accessed 9/18/2011\n\n[2] Sade, R., The Political Fallacy that Medical Care is a Right, published 12/2/1971, http://www.aapsonline.org/brochures/sademcr.htm , accessed 9/18/2011\n", "title": "" }, { "docid": "76a6be798d32a4d28a3289746bd1234e", "text": "finance health healthcare politics house would introduce system universal The United States government cannot afford to fund universal health care. Other universal social welfare policies such as Social Security and Medicare have run into major problems with funding. Costs are rising at the same time that the baby boomer generation are growing old and retiring. Soon tens of millions of boomers will stop contributing much tax and start demanding much more in benefits than before. In such a situation we cannot afford to burden the nation with another huge government spending program. Nations that provide universal health care coverage spend a substantial amount of their national wealth on the service.\n\nWith government control of all health care, caps will be placed on costs. As a result many doctors would not be rewarded for their long hours and important roles in our lives. The road to becoming a doctor is long and hard; without the present financial rewards many young people will not choose to study medicine. Current doctors may find that they do not want to continue their careers in a government-controlled market. 
The American Medical Association does not back a government-controlled, single-payer universal health care system.\n\nThe current system of offering group insurance through employers covers many Americans with good quality health insurance. The group plan concept enables insurance companies to insure people who are high risk and low risk by mixing them in the same pool. Issues over losing or leaving a job with health benefits are dealt with by federal laws which require companies to continue to offer workers cover for at least 18 months after they leave employment.\n", "title": "" }, { "docid": "dc7458abbdb90745459b3f1b4c3e977e", "text": "finance health healthcare politics house would introduce system universal While the idea that better access to preventative medicine will quickly and drastically lower general medical care costs is an incredible notion, it sadly is just that – a notion.\n\nAs an aside, the same argument – lowered costs – could be made for simply improving the existing tactics of preventative medicine without the need to invest into universal coverage.\n\nReturning to this proposition though, while it might be realistic to expect some reduction in costs from improved prevention, those would very unlikely ever amount to a significant amount – and certainly not an amount that would make introducing universal health coverage a feasible strategy. [1]\n\nUniversal health care will cause people to use the health care system more. If they are covered, they will go to the doctor when they do not really need to, and will become heavy users of the system. We can see in other countries that this heavier use leads to delays in treatment and constant demands for more resources. As a result care is rationed and taxes keep going up.\n\n[1] Leonhardt, D., Free Lunch on Health? Think Again, published 8/8/2007, http://www.nytimes.com/2007/08/08/business/08leonhardt.html , accessed 9/18/2011\n", "title": "" }, { "docid": "782bf3a26c5b8c8939f70a3759a4ad4b", "text": "finance health healthcare politics house would introduce system universal Universal healthcare is not affordable\n\nNo policy is created, debated or implemented in a vacuum. The backdrop of implementing universal health coverage now is, unfortunately, the greatest economic downturn of the last 80 years. Although the National Bureau of Economic Research declared the recession to be over, we are not out of the woods yet. [1] Is it really the time to be considering a costly investment?\n\nWith estimates that the cost of this investment might reach 1.5 trillion dollars in the next decade, the answer is a resounding no. Even the Center on Budget and Policy Priorities – a left leaning think tank – opined that the Congress could not come up with the necessary funding to go ahead with the health reform without introducing some very unpopular policies. [2]\n\nDoes this mean universal health care should be introduced at one time in the future? Not likely. 
Given that there are no realistic policies in place to substantially reduce the “riot inducing” US public debt [3] and the trend of always increasing health care costs [4] the time when introducing universal health care affordably and responsibly will seem ever further away.\n\n[1] New York Times, Recession, published 9/20/2010, http://topics.nytimes.com/top/reference/timestopics/subjects/r/recession_and_depression/index.html , accessed 9/18/2011\n\n[2] New York Times, Paying for Universal Health Coverage, published 6/6/2009, http://www.nytimes.com/2009/06/07/opinion/07sun1.html , accessed 9/18/2011\n\n[3] Taylor, K., Bloomberg, on Radio, Raises Specter of Riots by Jobless, published 9/16/2011, http://www.nytimes.com/2011/09/17/nyregion/mayor-bloomberg-invokes-a-concern-of-riots-on-radio.html?_r=1&amp;scp=1&amp;sq=public%20debt&amp;st=cse , accessed 9/18/2011\n\n[4] Gawande, A., The cost conondrum, published 6/1/2009, http://www.newyorker.com/reporting/2009/06/01/090601fa_fact_gawande , accessed 9/18/2011\n", "title": "" }, { "docid": "2c926ef7c28cc6fe7e47d73a729ddd11", "text": "finance health healthcare politics house would introduce system universal Universal healthcare systems are inefficient\n\nOne of the countries lauded for its universal health care is France. So what has the introduction of universal coverage brought the French? Costs and waiting lists.\n\nFrance’s system of single-payer health coverage goes like this: the taxpayers fund a state insurer called Assurance Maladie, so that even patients who cannot afford treatment can get it. Now although, at face value, France spends less on healthcare and achieves better public health metrics (such as infant mortality), it has a big problem. The state insurer has been deep in debt since 1989, which has now reached 15 billion euros. [1]\n\nAnother major problem with universal health care efficiency is waiting lists. In 2006 in Britain it was reported that almost a million Britons were waiting for admission to hospitals for procedures. In Sweden the lists for heart surgery are 25 weeks long and hip replacements take a year. Very telling is a ruling by the Canadian Supreme Court, another champion of universal health care: “access to a waiting list is not access to health care”. [2]\n\nUniversal health coverage does sound nice in theory, but the dual cancers of costs and waiting lists make it a subpar option when looking for a solution to offer Americans efficient, affordable and accessible health care.\n\n[1] Gauthier-Villars, D., France Fights Universal Care's High Cost, published 8/7/2009, http://online.wsj.com/article/SB124958049241511735.html , accessed 9/17/2011\n\n[2] Tanner, M., Cannon, M., Universal healthcare's dirty little secrets, published 4/5/2007, http://www.latimes.com/news/opinion/commentary/la-oe-tanner5apr05,0,2681638.story , accessed 9/18/2011\n", "title": "" }, { "docid": "565bc576aaa40ea28984b27f2c3c9f4e", "text": "finance health healthcare politics house would introduce system universal Current health care systems are not sustainable\n\nAmerican health insurance payments are very high and rising rapidly. Even employer-subsidised programs are very expensive for many Americans, because they often require co-payments or high deductibles (payment for the first part of any treatment). In any case employee health benefits are being withdrawn by many companies as a way of cutting costs. For those without insurance, a relatively minor illness or injury can be a financial disaster. 
It is unfair that many ordinary hard-working Americans can no longer afford decent medical treatment.\n\nMoving to a system of universal health care would reduce the burden on human resources personnel in companies. At present they must make sure the company is obeying the very many federal laws about the provision of health insurance. With a universal system where the government was the single-payer, these regulations would not apply and the costs of American businesses would be much reduced.\n", "title": "" }, { "docid": "4b15bb95e9dcae20f4516d7bb0435f8a", "text": "finance health healthcare politics house would introduce system universal Health care programmes currently do not offer equality of care\n\nThe United States as a whole spends 14% of GDP (total income) on health care. This includes the amount spent by the federal government, state governments, employers and private citizens. Many studies have found that a single-payer system would cut costs enough to allow everyone in the USA to have access to good health care without the nation as a whole spending more than it does at the moment. Medicare, a government-run health care program, has administrative costs of less than 2% of its total budget.\n\nThe current system of health maintenance organisations (HMOs) has destroyed the doctor-patient relationship and removed patients’ ability to choose between health care providers. Patients find that their doctors are not on their new plan and are forced to leave doctors with whom they have established a trusting relationship. Also, patients must get approval to see specialists and then are allowed to see only selected doctors. Doctors usually can’t spend enough time with patients in the HMO plans. By contrast a universal health system would give patients many more choices.\n\nIn the current system the employee and the employee’s family often depend on the employer for affordable health insurance. If the worker loses their job, the cost of new health insurance can be high and is often unaffordable. Even with current federal laws making insurance more movable, the costs to the employee are too high. With a single-payer, universal health care system, health insurance would no longer be tied to the employer and employees would not have to consider health insurance as a reason to stay with a given employer. This would also be good for the economy as a whole as it would make the labour market more flexible than it has become in recent years.\n", "title": "" }, { "docid": "26b68e72a0f2167574e8a4e1529b1cf0", "text": "finance health healthcare politics house would introduce system universal Health care would substantially reduce overall costs\n\nWith universal health care, people are able to seek preventive treatment. This means having tests and check-ups before they feel ill, so that conditions can be picked up in their early stages when they are easy to treat. For example in a recent study 70% of women with health insurance knew their cholesterol level, while only 50% of uninsured women did. In the end, people who do not get preventive health care will get treatment only when their disease is more advanced. As a result their care will cost more and the outcomes are likely to be much worse. Preventative care, made more accessible, can function the same way, reducing the costs further. [1]\n\nIn addition, a single-payer system reduces the administrative costs. 
A different way of charging for the care, not by individual services but by outcomes, as proposed by Obama’s bill, also changes incentives from as many tests and procedures as possible to as many patients treated and healed as possible. [2]\n\nWe thus see that not only does universal health coverage inherently decrease costs because of preventative care, much of the cost can be avoided if implemented wisely and incentivized properly.\n\n[1] Cutler, D. M., Health System Modernization Will Reduce the Deficit, published 5/11/2009, http://www.americanprogressaction.org/issues/2009/05/health_modernization.html , accessed 9/17/2011\n\n[2] Wirzibicki, A., With health costs rising, Vermont moves toward a single-payer system, published 4/7/2011, http://www.boston.com/bostonglobe/editorial_opinion/blogs/the_angle/2011/04/vermonts_single.html , accessed 9/17/2011\n", "title": "" }, { "docid": "2ad42b13487e2bc12084d781f46e3c90", "text": "finance health healthcare politics house would introduce system universal Healthcare has been recognised as a right\n\nThe two crucial dimensions of the topic of introducing universal health care are morality and the affordability.\n\nParagraph 1 of Article 25 of the Universal Declaration of Human Rights states the following: “Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.” [1]\n\nAnalyzing the text, we see that medical care, in so far, as it provides adequate health and well-being is considered a human right by the international community. In addition, it also states, that this right extends also to periods of unemployment, sickness, disability, and so forth.\n\nDespite this, why should we consider health care a human right? Because health is an essential prerequisite for a functional individual – one that is capable of free expression for instance – and a functional society – one capable of holding elections, not hampered by communicable diseases, to point to just one example.\n\nUniversal health care provided by the state to all its citizens is the only form of health care that can provide what is outlined in the Declaration.\n\nIn the US the only conditions truly universally covered are medical emergencies. [2] But life without the immediate danger of death hardly constitutes an adequate standard of health and well-being. Additionally, programs such as Medicaid and Medicare do the same, yet again, only for certain parts of the population, not really providing the necessary care for the entire society.\n\nFurther, the current system of health care actively removes health insurance from the unemployed, since most (61%) of Americans are insured through their employers – thus not respecting the provision that demands care also in the case of unemployment. [3]\n\nBut does insurance equal health care? In a word: yes. Given the incredible cost of modern and sophisticated medical care – a colonoscopy can cost more than 3000 dollars – in practice, those who are not insured are also not treated. 
[4]\n\n[1] UN General Assembly, Universal Declaration of Human Rights, published 12/10/1948, http://www.amnestyusa.org/research/human-rights-basics/universal-declaration-of-human-rights , accessed 9/17/2011\n\n[2] Barrett, M., The US Universal Health Care System-Emergency Rooms, published 3/2/2009, http://www.huffingtonpost.com/marilyn-barrett/the-us-universal-health-c_b_171010.html , accessed 9/17/2011\n\n[3] Smith, D., U.S. healthcare law seen aiding employer coverage, published 6/21/2011, http://www.reuters.com/article/2011/06/21/usa-healthcare-employers-idUSN1E75J1WP20110621 , accessed 9/17/2011\n\n[4] Mantone, J., Even With Insurance, Hospital Stay Can Cost a Million, published 11/29/2007, http://blogs.wsj.com/health/2007/11/29/even-with-insurance-hospital-stay-can-cost-a-million/ , accessed 9/17/2011\n", "title": "" } ]
arguana
292927b4188834c7284d09234f3c340d
Free trade promotes global efficiency through specialization. Operating at maximum productivity is one of the most important aspects of an efficient economy. The right resources and technology must be combined to produce the right amount of goods to be sold for the right price. Therefore all markets should strive for highest efficiency. In order to maximize efficiency in the international economy, countries need to utilize their comparative advantage. This means producing what you are best at making, compared to other countries. If Mary is the best carpenter and lawyer in the US, but makes more money being a lawyer, she should devote more of her time to law and pay someone for her carpentry needs. Mary has an absolute advantage in law and carpentry, but someone else has a comparative advantage in carpentry1. Comparatively it makes more sense for someone else to do the carpentry, and for Mary to be the lawyer. It is the same in the international economy. Countries can be more efficient and productive if they produce what they are best at based on their domestic resources and populations, and trade for other goods. This promotes efficiency and lower prices. Free trade enhances this. The Doha round that is currently being debated in the World Trade Organization would reduce trade barriers and promote free trade, economies of scale, and efficient production of goods. It is estimated that the Doha round would increase the global GDP by $150 billion alone just by promoting free trade2. Free trade leads to specialization and efficient production, which ultimately would increase the size of the global economy and the individual economies in it. 1 Library of Economics and Liberty, "Comparative Advantage", 2 Meltzer, Joshua (2011), "The Future of Trade", Foreign Policy Magazine,
[ { "docid": "9082c2c83b13442fd588841679c1505c", "text": "finance economy general house believes global free trade For countries that are dependent on their resources and lack developed industries, free trade does not promote efficiency. Free trade makes them overly dependent on their resources, which other countries are coming in and buying. This is because their domestic industries cannot compete with those of the developed world, so they have difficulty fostering sectors besides raw goods. They are forced to rely on supplying materials, rather than being able to build innovative industries. That does not offer efficiency, it just suppresses economies. For example Nigeria is dependent on oil for 95% of foreign exchange earnings and 80% of their budget money1. Trading oil is not making it a more diversified, sophisticated economy. 1 CIA World Fact Book, \"Nigeria\", CIA, http://en.wikipedia.org/wiki/Comparative_advantage\n", "title": "" } ]
[ { "docid": "5f7bbc53be651a4989cdf2303cf781aa", "text": "finance economy general house believes global free trade Although free trade may promote innovation and growth, because of issues like dumping (where rich countries sell their products very cheaply in poorer countries and make it impossible for local industry to compete), or jobs being exported to places where labor is cheaper, free trade has significant costs and does not necessarily foster benefits for all. It is necessary to grow infant industries and create jobs, and free trade hurts both.\n", "title": "" }, { "docid": "0bf17f716ae8bba9a952ea27ae08c74b", "text": "finance economy general house believes global free trade Therefore, there is no empirical evidence that proves that poverty is reduced. If countries removed all agricultural subsidies domestic production would decrease and world food prices would increase. Poor countries that import food will suffer from increased food prices due to trade liberalization. 45 of the least-developed countries on earth imported more food than they exported in 1999, so there are many countries that could be severely harmed by increasing food prices1.\n\n1 Panagariya, Arvind (2003), \"Think Again: International Trade\", Foreign Policy Magazine,\n", "title": "" }, { "docid": "dbccbb1b5aae6d38cd59c5653ccc9861", "text": "finance economy general house believes global free trade Free trade is the economic policy that many liberal countries—who are less likely to go to war with each other—have chosen. It’s not the policy that makes them liberal. These studies show such a strong correlation, because the countries that have chosen free trade are largely a huge block of countries that already get along, particularly the EU countries and the US. These countries already have the productive relationships necessary for peace. And history has shown that those relationships can be fostered without resorting to free trade. For example, for many years after World War II, Japan protected many national industries, but it was a peaceful country with a productive relationship with the West. Therefore, the costs of free trade are not necessary to achieve that benefit since it can be fostered under different conditions.\n\n1 Paul W. Kuznets, “An East Asian Model of Economic Development: Japan, Taiwan, and South Korea,” Economic Development and Cultural Change, vol. 35, no. 3 (April 1988)\n", "title": "" }, { "docid": "c05698e700f8dedbde8d6752ba273457", "text": "finance economy general house believes global free trade Marian Tupy of the Center for Global Liberty and Propensity states, \"In the history of the world, no country has ever suffered military defeat, or capitulated to sanctions, due to the inability to produce a domestically producible product\"1. Globalization also means there are many partners to trade with, so even if a country is at war there are plenty of options of other countries from which to buy necessary products. 1 The Industrial College of the Armed Forces (2008), \"Industry Study\", National Defense University,\n", "title": "" }, { "docid": "b5107a204aee1b4ec20e99cc8044d812", "text": "finance economy general house believes global free trade Opening up in FTAs is the first step towards liberalization in the larger sense and opening up to all free trade, so it should not be considered a failure. Additionally, free trade needs to balance international and domestic goals so coming to an agreement is difficult, but the WTO has been successful in the past. 
The current problems with the Doha round do not spell the end to the WTO or free trade1. 1 Meltzer, Joshua (2011), \"The Future of Trade\", Foreign Policy Magazine,\n", "title": "" }, { "docid": "3d9f7c1b24ce3c4d8b7125fb329073f4", "text": "finance economy general house believes global free trade Even with tariffs the steel industry in losing jobs. Nothing can save steel. It simply does not operate as effectively as other global steel industries. Further, protectionism helps a small group of workers, the rest of American industry that is dependent on steel for their operation is hurt by high prices and inefficient production1. Protectionism puts the good of the few above the rest. Additionally, the WTO was created to ensure that dumping does not happen. The problem with infant industry is it's hard to determine when to start the transition away from protectionism, and often it never develops fully. For example, Brazil protected its computer industry and it never was able to compete even past the infant industry stage2.\n\n1 Lindsey, Brink and Griswold, Daniel T. (1999), \"Steel Quotas Will Harm US\", CATO Institute,\n\n2 Luzio, Eduardo and Greenstein, Shane (1995), \"Measuring the Performance of a Protected Infant Industry: The Case of Brazilian Microcomputers\", Review of Economics and Statistics,\n", "title": "" }, { "docid": "04d53e4f17234f37c988cb3a4cb0ba46", "text": "finance economy general house believes global free trade Sweatshops are unfortunate, but free trade can benefit from cheap labor without relying on exploiting workers. Economically, cheap labor is a step in the right direction for poor countries and their people. Making 60 cents an hour in a factory that exports goods is better than 30 cents an hour working in the field, trying to feed a family in Indonesia1. Paul Krugman explains that sweatshops allow the poor to get jobs, and manufacturing development has a ripple effect on the rest of the economy and its development. Taiwan and South Korea, and even the US, went through this type of industrial development and it is better than the alternative, which is failed farming or dependence on aid1. If workers are being exploited—which is different from being paid low wages that are actually good by the standards of the country—then that should be regulated by governments, but that in no way infringes upon free trade.\n\n1 Krugman , Paul (1997), “In Praise of Cheap Labor”, Slate.com\n", "title": "" }, { "docid": "593669a48339a3547614595713ed69ce", "text": "finance economy general house believes global free trade Free trade reduces poverty.\n\nFree trade reduces poverty for two reasons. First, it creates direct \"pull up\" as Columbia economist, Jagdish Bhagwati calls it because it creates demand for a country's good and industry and thus employs the poor and expands jobs1. Additionally it creates more revenue for government that can be directly targeted towards anti-poverty programs. Independent research Xavier Sala-i-Martin at Columbia University estimates that poverty has been reduced by 50 million people in the developing world during the era of free trade, since 19871. Hong Kong, Singapore, South Korea, and Taiwan have been liberalizing trade for the past 40 years and have not suffered from one-dollar-per-day poverty in the last 20 years1. If agricultural subsidies were removed from developed countries, food would become more expensive as there would be fewer producers, and poor farmers would have a better shot at competing and making a living. 
Free trade promotes the necessary monetary flow and demand for goods to increase jobs and sustainably grow an economy to reduce poverty. Prices are lower, more products are available, and the poor are able to achieve a higher standard of living.\n\n1 Panagariya, Arvind (2003), \"Think Again: International Trade\", Foreign Policy Magazine,\n", "title": "" }, { "docid": "11cc868f4b90f1841923fd76a07281ab", "text": "finance economy general house believes global free trade Free trade creates substantial cooperative relationships between trading partners.\n\nThere has long been a debate as to whether aid or trade is more effective in promoting development and cooperative relationships. Being interlocked through trading relationships decreases the likelihood of war. If you are engaged in a mutually beneficial relationship with other countries, then there is no incentive to jeopardize this relationship through aggression. It leads to more cooperative relationships because trading partners have incentives to consider the concerns of their trade partners since their economic health is at stake. This promotes peace, which is universal good. In 1996, Thomas Friedman famously pointed out that no two countries with a McDonalds—a sign of western liberal economic policies—have ever gone to war together.1 Academic studies have shown that this is specifically a result of free trade. In 2006 Solomon Polachek of SUNY Binghamton and Carlos Seiglie of Rutgers found that the last 30 years have shown that economic freedom is 50 times more likely to reduce violence between countries than democracy2. Erik Gartzke of Columbia University rated countries’ economic freedom on a scale of 1 (least free) to 10 (most free). He analyzed military conflicts between 1816 and 2000 and found that countries with a 2 or less on the economic freedom scale were 14 times more likely to be involved in armed conflicts than those with an 8 or higher2. Aside from war, free trade also solidifies countries’ alliances. For example, the US wants to begin a free trade relationship with South Korea to create a concrete partnership that will ultimately lead to greater cooperation3. Free trade promotes global connections and peace and therefore is a beneficial force.\n\n1 Thomas Friedman, “Foreign Affairs Big Mac,” New York Times, December 8, 1996\n\n2 http://www.csmonitor.com/2006/1120/p09s02-coop.html/ (page)/2\n\n3 http://www.nytimes.com/2010/11/11/world/asia/11prexy.html\n", "title": "" }, { "docid": "2569d00740b8817995a82377e04fc680", "text": "finance economy general house believes global free trade Free trade promotes growth in all countries.\n\nThrough global competition, specialization, and access to technology, free trade and openness allow countries to grow faster—India and China started in the 1980s with restrictive trade policies, but as they have liberalized they have also improved their growth enormously1. The International Trade Commission estimates that a free trade agreement between just Colombia and the US would increase the US GDP by $2.5 billion2. When industries have to compete with competition around the world, they are pushed towards innovation and efficiency. Entrepreneurs are more productive if they have to compete. Free trade increases access to technology which also increases overall development. Because of free trade, prices are lower for everyone. 
Trade offers benefits to both developed and developing nations by encouraging competition, efficiency, lower prices, and opening up new markets to tap into.\n\n1 Panagariya , Arvind (2003), “Think Again: International Trade”, Foreign Policy Magazine\n\n2 White House (2010), “Benefits of US-Colombia Trade Promotion Agreement”\n", "title": "" }, { "docid": "249c8c7d186f08b2b88e3639716e58cc", "text": "finance economy general house believes global free trade Free trade hurts the world's poor\n\nFree trade creates demand for extremely cheap products produced by poor people in terrible conditions in third world countries. In Indonesia, there are people working in sweatshops for 60 cents an hour1. It is estimated that there are 158 million child workers around the world2. Free trade creates demand for the products produced by this modern day form of child and adult slavery. The governments of the countries where this takes place do nothing to improve the working conditions. Sweatshops are produced by free trade and demand for cheap goods, and the way that workers are treated is inherently wrong. Therefore free trade is not a force for global betterment, but instead hurts the cause of the poor and their standard of living.\n\n1 Krugman, Paul (1997), \"In Praise of Cheap Labor\", Slate.com,\n\n2 UNICEF, \"Child Labor\",\n", "title": "" }, { "docid": "e009d74581849e248ca450104c393078", "text": "finance economy general house believes global free trade Implementing true free trade is unfeasible because it is unreasonable\n\nAn increasing number of countries are looking to bilateral Free Trade Agreements that will help them specifically. They are not directly open to free trade with all countries. These FTAs are undermining the position of the World Trade Organization which is meant to push countries towards economic liberalization1. Countries have no reason to start trading freely with everyone, if they already have FTAs with the most beneficial trading partners. The Doha round seeks to reduce trade barriers in industry and agriculture has been going on for ten years, but there is still no agreement. Disputes are becoming more common when it comes to trade. In 2009, there was a dispute over the US putting tariffs on Chinese tires that has created tension in the trade relationship between those two countries2. Considering that the WTO countries have been debating the Doha round for ten years, it is unreasonable to think that countries are going to adopt free trade policies with the whole world. It is much more likely they will concede to bilateral free trade agreements that specifically help themselves. Since it is unlikely for free trade to become a universal policy it is not beneficial for all countries.\n\n1 Meltzer, Joshua (2011), \"The Future of Trade\", Foreign Policy Magazine,\n\n2 Bradsher, Keith (2009), \"China-US Trade Dispute Has Broad Implications\", .\n", "title": "" }, { "docid": "98ea5a39302e50190df2b6ff5c8f16d8", "text": "finance economy general house believes global free trade Free trade jeopardizes countries' security.\n\nIf a country goes to war with one of its trading partners, it needs to have the capacity to produce all of the necessary tools for war domestically, and not depend on other countries for supplies and parts. Additionally there is fear that disease-causing agents and bioterrorism can enter countries through the trade of poorly inspected food1. 
For reasons of national security it makes sense to retain the capacity to produce what is necessary to win a war and to protect the domestic population. This is one of the reasons why countries—such as the US1—like to protect their agricultural industry. Free trade is a threat to global security. For countries to stay safe, they need to retain some protectionism in their international trade policy.\n\n1 George W. Bush, “Homeland Security Presidential Directive 9: Defense of United States Agriculture and Food,” U.S. Department of Homeland Security, accessed July 15, 2011\n", "title": "" }, { "docid": "0169183429d75df83112747ed33a97dc", "text": "finance economy general house believes global free trade It is just to protect industry and jobs.\n\nWhen countries dump their products in other markets without barriers, they undercut the ability for local industries to compete. If those local industries try to compete, large foreign or multinational companies can use extremely low predatory pricing to make it impossible for the smaller industries to break into the market. The fully developed industries in rich countries are almost impossible for poorer, still developing economies to compete with. If they are not given the chance and have to compete with large international industries from the beginning, domestic industry in poor countries will have a hard time. The overall economic development of the country will thus be inhibited1. Additionally, competition can cost jobs, as industries become less profitable and labor is outsourced, so there is reason to retain protectionism as countries put their economic health first. For example, America has long protected its steel industry, as in 2002 when it adopted a controversial 40% tariff, because it was thought that competition put 600,000 jobs at risk2. Since 1977, 350,000 steel jobs have been shed, so these tariffs are justified3. Countries should put their economies and jobs first and therefore protectionism is warranted.\n\n1 Suranovic, Steven, \"The Infant Industry Argument and Dynamic Comparative Advantage\", International Economics,\n\n2 http://www.commondreams.org/views02/0307-05.htm \"&gt; Flanders, Lauren (2002), \"Unfair Trade\", CommonDreams.org,\n\n3 Wypijewski, JoAnn (2002), \"Whose Steel?\", The Nation,\n", "title": "" } ]
arguana
d6255861aa5c4ca9c8f33440e4c22fce
There should be rewards for success in school, versus punishment for failure to attend. This problem could be addressed by subsidizing school supplies or rewarding good attendance records with additional cash. Cutting benefits will only hurt the children we are trying to help, with their families deprived of the resources to feed them or care for them. Free breakfast programs in the US feed 10.1 million children every day1. Providing meals, mentors, and programs that support and help students are all ways to help them do better in school. There are already 14 million children in the US that go hungry, and 600 million children worldwide that are living on less than a dollar a day2. Why punish those families that have trouble putting their kids in school, which only hurts those children more? There should be rewards for good grades, reductions in the cost of schooling, and above all programs so that children don't have to sit in school hungry and confused. 1 United States Department of Agriculture, "The School Breakfast Program", [Accessed July 21, 2011]. 2 Feeding America (2010), "Hunger in America: Key Facts", [Accessed July 21, 2011]. and UNICEF, "Goal: Eradicate extreme poverty and hunger", [Accessed July 21, 2011].
[ { "docid": "4e3d28240b9bc087e4294d1ffa72e506", "text": "economic policy education education general house believes payment welfare There is nothing that says the two are mutually exclusive. Linking welfare to school attendance could be instituted next to other reforms that overall would create greater incentives for children to do well in school.\n", "title": "" } ]
[ { "docid": "da921af8f373abd343440c80807abdf1", "text": "economic policy education education general house believes payment welfare It is perfectly just to ask people to adjust behavior in exchange for funds. In fact, if the tax payers' dollars were being poured into an unchanging situation that would be unfair and unproductive. For a long time the US, and countries around the world, have struggled with making welfare a program that can lift people up. Connecting it to schools can help children.\n", "title": "" }, { "docid": "8c5d5f882c8db0a1c8ea77b7169d76b4", "text": "economic policy education education general house believes payment welfare Yet if kids aren't going to school anyway it doesn't matter if the schools are inadequate. Getting kids in schools is the first step to improving the education situation and the dropout rate. As long as we look at the education system in the US and around the world as dismal and overwhelming, nothing will change.\n", "title": "" }, { "docid": "f08b7f8d13c43cd4fec76a570610bc6a", "text": "economic policy education education general house believes payment welfare If families have incentives to send their children to school, and raise their children with a value of education, stressing the need for them to go to school they are more likely to finish high school and lift themselves out of these environments. The reason why some children would rather work then go to school is because they have been raised in an atmosphere that does not stress education and the necessity to finish high school. This type of program would push parents to change their children's values as they grow up. Additionally, a child's sense of duty to their family because of welfare payments being connected to their school attendance would give them further reason not to drop out, even if they do not like or value school.\n", "title": "" }, { "docid": "b72777de145c9cb189a9615ca1ade02b", "text": "economic policy education education general house believes payment welfare If school is so expensive, than shouldn't the government be subsidizing school costs instead of forcing parents to send kids to school when they can't afford the books and clothes? It is also unfair to assume that parents on welfare on neglectful and do not value education. Supporting meal programs in schools and subsidizing other costs are much more likely to draw children than forcing parents to send children to school when the kids are hungry and embarrassed1. 1 United States Department of Agriculture, \"The School Breakfast Program\",[Accessed July 21, 2011].\n", "title": "" }, { "docid": "10a01ce627b53d2dca2a7309adbb5546", "text": "economic policy education education general house believes payment welfare Just because students attend school does not mean that they are going to receive a quality education. The best educated children are those whose parents are involved heavily in their school, helping them with their homework, and pushing them to excel1. Without involved parents, students can become just as easily discouraged. There really need to be programs to involve parents more in school, and provide good mentors and role models for students who don't have them. Schools also need to be improved. Just sending kids to school doesn't mean that they are going to learn and be determined to better themselves. Additionally particularly in the third world if children don't have good schools and qualified teachers, then what is the point of going to school? 
1 Chavkin, Nancy, and Williams, David (1989), \"Low-Income Parents' Attitudes toward Parent Involvemet in Education\", Social Welfare, [Accessed July 21, 2011].\n", "title": "" }, { "docid": "1c82ae6debe1e587c1fa84b2a40a65ba", "text": "economic policy education education general house believes payment welfare The purpose of welfare is not to better society per se; it is to support those who have fallen into bad times and need extra help. Expecting people to render a service in exchange for help is demeaning and it undermines the purpose of welfare which is to help people get back on their feet versus tell them what they have to do to be considered beneficial to society.\n", "title": "" }, { "docid": "b8f3a273ac6918e7ddaff3869d326c9d", "text": "economic policy education education general house believes payment welfare But the program in Brazil is biased towards rural communities versus cities. In the two largest cities in only 10% of families are enrolled versus 41% in the rural areas of Brazil [1] . To consider the program effective it needs to work equally with all members of the poor, which it does not.\n\n[1] 'How to get children out of jobs and into school', The Economist​, 29 July 2010, http://www.economist.com/node/16690887\n", "title": "" }, { "docid": "48542b6b0e455114266cd13a26c2ea58", "text": "economic policy education education general house believes payment welfare Connecting welfare to failure of parents is unfair.\n\nThis policy requires that parents be held accountable and punished for the actions of their children. It suggests that their failure in instilling good values is because they care less than middle-class, educated parents. That is a broad and stereotypical assumption. Such parents, many of whom are single mothers, find it harder to instill good values in their children because they live in corrupt environments, surrounded by negative influences[1]. They should be aided and supported, not punished for an alleged failure. Just encouraging putting children in schools does not recognize the larger problems. Some families cannot control their children, who would rather make money than go to school. And caps on the number of children these programs can apply to, as is the case in Brazil, creates problems as well for the families[2]. People are doing their best, but the environment is difficult. Providing safer and more low income housing could be a solution versus punishing people for what is sometimes out of their control. 1 Cawthorne, Alexandra (2008), \"The Straight Facts on Women in Poverty\", Center for American Progress, [Accessed July 21, 2011]. 2\n", "title": "" }, { "docid": "592666c310364c9c494b76e36e106b67", "text": "economic policy education education general house believes payment welfare It is unjust to make welfare conditional\n\nWelfare should not be used as a tool of social engineering. These are people who cannot provide even basic necessities for their families. Asking them to take on obligations by threatening to take away their food is not requiring them to be responsible, it's extortion. It is not treating them as stakeholders and equal partners in a discussion about benefits and responsibilities, but trying to condition them into doing what the rest of society thinks is good for them and their families. There is a difference between an incentive and coercion. An incentive functions on the premise that the person targeted is able to refuse it. These people have no meaningful choice between 'the incentive' or going hungry. 
This policy does not respect people's basic dignity. There is no condition attached to healthcare and Medicaid that says people have to eat healthily or stop smoking, so why should welfare be conditional? Allowing them and their children to go without food if they refuse is callous. Making welfare conditional is taking advantage of people's situation and telling them what they need to do to be considered valuable to society; it is inherently wrong. It impedes on people's rights to free choice and demeans them as worthless.\n", "title": "" }, { "docid": "bd9ea503531a40c6689f0d6a8ea3cb43", "text": "economic policy education education general house believes payment welfare School does not an education make\n\nSchool attendance is not a positive outcome in and of itself. It should be encouraged only if it is conducive to learning and acquiring the meaningful education needed to break out of the poverty trap. Blaming the poverty cycle on kids failing to attend school ignores the fact that schools are failing children. Public schools are often overcrowded, with poor facilities and lacking the resources necessary to teach children with challenging backgrounds. In 2011, 80% of America's schools could be considered failing according to Arne Duncan who is the secretary of education1. Schools in developing countries often lack qualified teachers, and can suffer from very high staff absenteeism rates2. A more effective school system would result in fewer kids dropping out, not the other way around. Additionally, involved parents are integral to effective education3. Simply blackmailing them with money to do the right thing will not work. In fact, you might actually experience backlash from parents and kids, who'll see school as a burdensome requirement that is met just so you can keep the electricity on. Throwing kids into school where they do not have confidence, support, and the necessary facilities is not productive. 1 Dillon , Sam (2011), \"Most Public Schools May Miss Targets, Education Secretary Says\", New York Times, [Accessed July 21, 2011]. 2 World Bank, \"Facts about Primary Education\",[Accessed July 21, 2011]. 3 Chavkin , Nancy, and Williams, David (1989), \"Low-Income Parents' Attitudes toward Parent Involvemet in Education\", Social Welfare, [Accessed July 21, 2011].\n", "title": "" }, { "docid": "0be0f7b406795d33e78421fd7abe8a07", "text": "economic policy education education general house believes payment welfare Parents on welfare are more likely to need the incentives to take on the costs of sending children to school.\n\nParents on welfare benefits are the most likely to need the extra inducements. They generally tend to be less educated and oftentimes be less appreciative of the long-term value of education. In the late 90's, 42% of people on welfare had less than a high school education, and another 42% had finished high school, but had not attended college in the US. Therefore they need the additional and more tangible, financial reasons to send their children to school. Children living in poverty in the US are 6.8 times more likely to have experienced child abuse and neglect1. While attendance might not be a sufficient condition for academic success, it is certainly a necessary one, and the very first step toward it. Some parents might be tempted to look at the short-term costs and benefits. 
Sending a child to school might be an opportunity cost for the parents, as lost labor inside or outside the household (especially in the third world), or as an actual cost, as paying for things like supplies, uniforms or transportation can be expensive. Around the world there are an estimated 158 million working children, who often need to work to contribute to their family's livelihood2. In the UK it is estimated that sending a child to public school costs up to 1,200 pounds a year. If they lose money by not sending children to school, this would tilt the cost-benefits balance in favor of school attendance. 1 Duncan, Greg and Brooks-Gunn, Jeanne (2000), \"Family Poverty, Welfare Reform, and Child Development\", Child Development, [Accessed July 21, 2011] 2 http://www.unicef.org/protection/index_childlabour.html [Accessed July 13, 2011].\n", "title": "" }, { "docid": "7844cd4c50f7216026dbb9b896f4f9f2", "text": "economic policy education education general house believes payment welfare It is morally acceptable to make welfare conditional.\n\nWhen society has to step in and provide for those who've proved themselves unable to provide for themselves, that should reasonably create certain expectations on the part of those being helped. In almost every aspect of life, money is given in return for a product, service or behavior. It is the same with welfare payments; money in exchange for children being put in school. We expect parents to do a good job in their role as parents. Ensuring that their children attend school is a crucial part of parental responsibility. Children on welfare in the US are 2 times more likely to drop out of school; however, studies have shown that children who are part of early childhood education are more likely to finish school and remain independent of welfare1. Thus, when a parent is a welfare recipient, it is entirely reasonable to make it conditional on sending their kids to school. If tax payers' dollars are being spent on those who cannot provide for themselves, there needs to be a societal return. One of the greatest complaints about welfare is that people work hard for the money that they earn, which is then handed to others with no direct benefit to society. If children of people on welfare are in school it increases the likelihood that they will finish high school, maybe get a scholarship and go to college, and have the necessary tools to contribute to the work force and better society.\n\n1 Heckman, James (2000), \"Invest in the Very Young\", Ounce of Prevention and the University of Chicago, [Accessed July 25, 2011]. and Duncan, Greg and Brooks-Gunn, Jeanne (2000), \"Family Poverty, Welfare Reform, and Child Development\", Child Development, [Accessed July 21, 2011]\n", "title": "" }, { "docid": "90d823131bef74ae1d583261d4741704", "text": "economic policy education education general house believes payment welfare The policy has been effective in the past\n\nThe main goal of this program is increasing school enrollment overall. If it was too much to expect from families, then the program would have failed in the cases that it was instituted. However, the opposite has been the case. 12.4 million families in Brazil are enrolled in a program called Bolsa Familia where children’s attendance in school is rewarded with $12 a month per child. The number of Brazilians with incomes below $440 a month has decreased by 8% a year since 2003, and 1/6 of the poverty reduction in the country is attributed to this program [1] . 
Additionally it is much less expensive than other programs, costing only about .5% of the country’s GDP [2] . Considering that this program has been affordable and successful in both reducing poverty and increasing school enrollment it is worth using as an incentive in more programs around the world.\n\n[1] 'How to get children out of jobs and into school', The Economist, 29 July 2010, http://www.economist.com/node/16690887\n\n[2] 'How to get children out of jobs and into school', The Economist​, 29 July 2010, http://www.economist.com/node/16690887\n", "title": "" }, { "docid": "e6c333c74b39f28420183fc884d5e817", "text": "economic policy education education general house believes payment welfare Requiring school attendance allows welfare to be the hand-up that it is meant to be, and keep children out of crime.\n\nIn the US, girls who grow up in families receiving welfare handouts are 3 times more likely to receive welfare themselves within three years of having their first child than girls who's families were never on welfare1. Children living in poverty were 2 times more likely to have grade repetition and drop out of high school and 3.1 times more likely to have children out of wedlock as teenagers2. They are 2.2 times more likely to experience violent crimes. Children of welfare recipients are more likely to end up on welfare themselves. Welfare should be a hand up, not a handout that leads to dependency on the state. It is the latter if we are only leading people to fall into the same trap as their parents. Education is the way to break the vicious cycle. Through education, children will acquire the skills and qualifications they need in order to obtain gainful employment once they reach adulthood, and overcome their condition. In the developing world, primary education has proven to reduce AIDS incidences, improve health, increase productivity and contribute to economic growth3. School can empower children, and give them guidance and hope that they may not receive at home. Getting kids in school is the first step to equipping them with the skills to better their situations, and if encouraged by their parents they might consider scholarships to college or vocational school. The program does not guarantee this for all, but it is likely more effective than the leaving parents with no incentive to push their children. Benefits are supposed to promote the welfare of both parents and children. One of the best ways to ensure that welfare payments are actually benefiting children is to make sure they're going to school. This is simply providing parents with an extra incentive to do the right thing for their children and become more vested in their kids' education. 1 Family Facts, \"A Closer Look at Welfare\", [Accessed July 21, 2011]. 2 Duncan , Greg and Brooks-Gunn, Jeanne (2000), \"Family Poverty, Welfare Reform, and Child Development\", Child Development, [Accessed July 21, 2011] 3http World Bank, \"Facts about Primary Education\",[Accessed July 21, 2011].\n", "title": "" } ]
arguana
a556dfdc85f7b30db89e867da6336bea
Workfare does not help people get jobs Workfare schemes are of little use if there are no jobs out there for people to do. The evidence suggests that ‘the vast majority of unemployment – over 9-10ths – has nothing to do with people not wanting work, and everything to do with a lack of demand for labour’1. As such, with few jobs on offer, it is of little use to demand welfare recipients come in for work, rather than search harder and deeper for the few jobs that are available. Regardless, often the skills which employers are really demanding are specialised and at a high level, which menial make-work tasks are unlikely to provide the unemployed with. It would be far better to invest in proper education and training schemes instead. In 2003, 60 per cent of New York’s welfare recipients did not have high school diplomas; if they want this majority to find jobs, they should be paying for them to go back to school, not clean streets2. 1 Dillow , C. (2010, November 8). Small Truths, Big Errors. Retrieved July 19, 2011, from Stumbling and Mumbling 2 New York Times. (2003, April 15). The Mayor's Mistake on Workfare. Retrieved July 19, 2011, from The New York Times
[ { "docid": "9b168a4aaebca3970e4af44e16f232f0", "text": "employment house believes unemployed should be made work their welfare money Workfare does help people to get jobs by increasing the perception amongst employers that the unemployed nevertheless have the potential to be productive citizens – they’re willing and able to work, and have gained skills from being in a working environment. This counters one of the key barriers to employment, which is the prioritisation of younger generations who have not been tarred with the brush of having had to claim benefits. Furthermore, many schemes allow welfare recipients to satisfy work requirements by counting class rime, work-study jobs and internships – therefore, if education is what is felt to be missing, Workfare does not discourage participants from going back to school1.\n\n1 New York Times. (2003, April 15). The Mayor's Mistake on Workfare. Retrieved July 19, 2011, from The New York Times\n", "title": "" } ]
[ { "docid": "62de662bba68ec16aafcd7066906701b", "text": "employment house believes unemployed should be made work their welfare money Workfare schemes are an investment in people. Spending money on workfare schemes is an investment in people, who gain the opportunity to lift themselves out of poverty, and the economy, which benefits from a better supply of labour. Although such schemes might cost more per person than just handing out dole money for doing nothing, their ability to deter fraudulent claimants makes them cheaper overall. Their success in moving the unemployed into real jobs also benefits the government and the wider economy, through taxation and increased consumer spending.\n", "title": "" }, { "docid": "54fc8b944e0d2948b87bcc9168e52d79", "text": "employment house believes unemployed should be made work their welfare money Workfare allows people to demonstrate both to themselves and others that a day at work will not always result in failure. This greatly benefits the self-esteem of many, who have become trapped in unemployment because their past experiences (perhaps beginning with unsuccessful schooldays) have lead them to believe that they cannot be useful and successful when doing a day at work. Workfare demonstrates that to be false by allowing them to work in a job where they can see the results of their labour, and not lose out (indeed, gain benefits) as a result.\n", "title": "" }, { "docid": "f41438b3e2035ce0fc783f7a703e4058", "text": "employment house believes unemployed should be made work their welfare money Workfare projects can be designed so as not to displace low-paid jobs: Often workfare schemes are limited to non-profit organisations deliberately in order to avoid a negative impact upon the local job market. In any case, many workers on very low pay only do such work for a relatively short time before finding better jobs elsewhere, so this is not a rigid sector of the labour force, liable to be destroyed by workfare.\n", "title": "" }, { "docid": "4878ffc41cd546ce3a1df62df3e7c15a", "text": "employment house believes unemployed should be made work their welfare money The number of people defrauding the system is very small (only 0.007% of the total cost of the benefit system). The majority of people on benefits are seeking work. They will be hindered in so doing, because instead of applying for work, attending interviews and developing relevant skills they will be forced to attend their workfare scheme. Thus, people will remain on benefits for longer, costing the government more in the long term.\n", "title": "" }, { "docid": "9f2bf4f1013a95492069d0a4c9171073", "text": "employment house believes unemployed should be made work their welfare money Workfare does not break the dependency culture. People do not seek unemployment and dependency on the state. No one voluntarily seeks to live on the very low income provided by state benefits, instead people become unemployed through no fault of their own; workfare stigmatises them as lazy and needing to be forced into work by state coercion. The schemes ignore the talents and ambitions of those involved, typically using them for menial tasks and manual labour that teach them no useful skills\n", "title": "" }, { "docid": "8ece7e60ea444d0e53bcab05ee9db996", "text": "employment house believes unemployed should be made work their welfare money Workfares have low standards that produce poor and potentially unsafe products. 
Individuals forced into workfare schemes lack incentives to work to a high standard, and may be actively disaffected. The work they do is therefore unlikely to benefit anyone much and raises a number of safety issues: would you drive across a bridge built by workfare labour? Would you trust your aged parent or pre-school child to a workfare carer? Would you trust them with any job that required the handling of money? Given these constraints, it is clear that the government may be unable to find enough worthwhile things for their forced labourers to do.\n", "title": "" }, { "docid": "1ea3d2f8de13b2841ffbee00cbd1a0e9", "text": "employment house believes unemployed should be made work their welfare money Workfare schemes are of little use if there are no jobs out there for people to do– something which is an issue of wider economic management. Often the skills which employers are really demanding are literacy, numeracy and familiarity with modern information technology, which menial make-work tasks are unlikely to provide the unemployed with. Far better to invest in proper education and training schemes instead. Even if such skills might be developed through workfare schemes, will forcing people into such work really mean they get the benefits? Most of the long-term unemployed are older, made redundant from declining industries; they do not lack skills but suffer instead from ageist prejudices among employers. Finally, if the ‘workfare’ jobs that unemployed people are being forced into are real jobs that need doing, then they should simply be employed to do them in the normal way (either by the state or by private companies)\n", "title": "" }, { "docid": "4518d431387113f9a192200dac62a526", "text": "employment house believes unemployed should be made work their welfare money Workfare schemes limit the opportunities to look for work\n\nPutting the unemployed into workfare schemes actually limits their opportunities to look for work, by making them show up for make-work schemes when they could be job hunting. Even if the numbers of those claiming unemployment benefit are reduced by the threat of such a scheme, that does not necessarily remove them from welfare rolls – they may, for example, be pushed into claiming other benefits, such as disability allowances. Others may prefer to turn to crime for income rather than be forced into workfare projects that don’t pay enough to be an attractive option. The evidence of the Workfare program in Argentina suggests that the policy has little positive effect on finding jobs for participants; ‘for a large fraction of participants, the program generated dependency and did not increase their human capital’1.\n\n1 Ronconi, L., Sanguinetti, J., Fachelli, S., Casazza, V., &amp; Franceschelli, I. (2006, June).Poverty and Employability Effects of Workfare Programs in Argentina. Retrieved July 19, 2011, from PEP\n", "title": "" }, { "docid": "6e61e3ca531d4f9d34e2dc837a744a06", "text": "employment house believes unemployed should be made work their welfare money Workfare is more expensive than traditional benefits\n\nWorkfare is actually a more expensive option than traditional unemployment benefit. The jobless are ultimately given at least the same amount of taxpayers' money but the state also has to pay the costs of setting up the schemes, paying for materials, the wages of supervisors, transport and childcare costs, etc. 
In a recession, when the numbers of the unemployed rise substantially, the costs of workfare schemes could be prohibitive and lead to the collapse of the policy. Furthermore, even if the state wanted to, they couldn't enrol everyone– ‘given that most people who lose a job find another within six months, there’s no point dragging people into these schemes who will find work anyway given a little more time’1.\n\n1 Saunders , P. (2011, July 1). Those who can work must not be paid to sit at home.Retrieved July 19, 2011, from The Australian\n", "title": "" }, { "docid": "83469ba94f77f1d5a447a08f9fe37248", "text": "employment house believes unemployed should be made work their welfare money Workfare will damage the existing labour market\n\nWorkfare harms those already in employment but on very low pay, because their menial jobs are the kind of labour that workfare projects will provide. Why should a local authority pay people to pick up litter or lay paving, if workfare teams can be made to do it for much less? If low-paid jobs are displaced, the ultimate result may be higher unemployment. In New York, public employee unions actively opposed Workfare specifically because they feared it would put public employees out of work1. Even if workfare projects are limited to labour for charities and non-profit groups, they discourage active citizenship and volunteerism as the state is assuming responsibility for these initiatives.\n\n1 Kaus, M. (2000, April 16). Now She's Done It. Retrieved July 19, 2011, from Slate\n", "title": "" }, { "docid": "4f48b21e396b22834e45a0bc36e31079", "text": "employment house believes unemployed should be made work their welfare money Workfare will eliminate scroungers, who are a financial drain on the system\n\nMaking the unemployed work for their welfare benefits calls the bluff of those claiming benefit but not really looking for jobs. Such scroungers include the incurably lazy, those who are defrauding the taxpayer by claiming welfare while holding down a paying job, and those who are working in the black economy. Furthermore, workfare schemes require applicants also search for work whilst completing the scheme1. Moving from a traditional something-for-nothing welfare scheme to a workfare system stops all these individuals from being a burden on the state, cutting welfare rolls very rapidly and allowing the government to concentrate upon assisting the truly needy.\n\n1: Kaus, M. (2000, April 16). Now She's Done It. Retrieved July 19, 2011, from Slate\n", "title": "" }, { "docid": "32144e7eaae3f5c054681f048df62c0a", "text": "employment house believes unemployed should be made work their welfare money Workfare schemes benefit society\n\nSociety also benefits from the work done by those on workfare schemes: These might include environmental improvement in local communities, service to assist the elderly and disabled, and work for charities or local authorities. In many cases the labour they provide would not have been available in any other way, so the addition they make to everyone's quality of life is a welcome bonus to the scheme. Furthermore, a 2011 study in Denmark found a 'strong and significant crime reducing effect of the workfare policy.'1\n\n1: Fallesen, P., Geerdsen, L., Imai, S., &amp; Tranaes, T. (2011, March 1). The Effect of Workfare Policy on Crime. 
Retrieved July 19, 2011\n", "title": "" }, { "docid": "634f989b08f40717dceed2f7fbabea87", "text": "employment house believes unemployed should be made work their welfare money Workfare provides skills to allow the unemployed to work their way out of poverty\n\nWorkfares offer the unemployed opportunities to develop skills to work their way out of poverty. Productive work raises the expectations of those involved by increasing their self-respect and provides them with more confidence in their abilities. It also develops skills associated with work, such as time keeping, taking and giving instructions, working in a team, accepting responsibility and prioritising. Such skills may seem mundane but they are very valuable to employers and their absence among the long-term unemployed is a key reason why they find it so hard to gain jobs. Individuals who are currently working are also more attractive to potential employers than those who are unemployed, especially the long-term unemployed. The evidence suggests Workfare is a success; studies of Workfare in Maryland found that 75 per cent of those who left welfare had earnings within 2.5 years1 .1: Kaus, M. (2000, April 16). Now She's Done It. Retrieved July 19, 2011, from Slate\n", "title": "" }, { "docid": "e538da9d6128a978b22c63c41abe1507", "text": "employment house believes unemployed should be made work their welfare money Workfare breaks the dependency culture\n\nMaking the unemployed work for their welfare money positively breaks the dependency culture. Receiving unemployment benefit for doing nothing makes individuals too reliant on the state and encourages apathy and laziness; this is particularly true of the long-term unemployed and of those who have never had a paying job since leaving school. As President Clinton said regarding welfare reform, 'the goal is to break the culture of poverty and dependence'. Tying welfare money to productive work challenges these something-for-nothing assumptions and shows that the state has a right to ask for something in return for the generosity of its taxpayers. In New York, workfare pays slightly less than the minimum wage, preserving the incentive for the unemployed to use workfare as a stepping stone into a better-paid, long-term job1.\n\n1: Kaus, M. (2000, April 16). Now She's Done It. Retrieved July 19, 2011, from Slate\n", "title": "" } ]
arguana
c62cc6d3a7c541c029e4bbdbed8bd686
'Systemic aid' is detrimental to African society While aid threatens the economy, it also poses hazards for society in Africa. As Moyo contends, it merely fosters civil war as people fight over scarce resources that cannot feasibly be equally distributed. According to Dr Napoleoni, $1.6bn of $1.8bn in aid received by Ethiopia in 1982 – 1985 was invested in military equipment1. As a result aid is often limited; some donors refuse to make payments unless a proportion is devoted to a specified cause or if some act is done in return. Moyo refers to George Bush’s demand that two thirds of his $15bn donation towards AIDS must go to pro-abstinence schemes. Such requirements further impede Africa’s ability to create a domestic policy and think for itself. Aid is solely to blame for its dependent state. 1 Herrick, L. (2008, May 14). Money raised for Africa 'goes to civil wars'. Retrieved July 20, 2011, from New Zealand Herald
[ { "docid": "adce1e1c216e5c55f43b934eee5899e7", "text": "finance economy general international politics politics general house prefers Resources will only be scarcer without aid; further chaos and corruption will ensue. There would be no need for fighting should resources be shared out equally. If aid is transferred to governments there is surely a centralized method of doing so; aid itself is not the problem. Africa could escape the issue of receiving payments according to donors’ vested interests by administering a list of causes for which it desires support, accepting contributions where demands fall exclusively within its categories. Again, aid is not detrimental but its careless distribution and allocation is.\n", "title": "" } ]
[ { "docid": "20e956e4f9660a9f8c5b7b1627eda73f", "text": "finance economy general international politics politics general house prefers The opportunities for trade are severely limited because of barriers imposed by the international system. The arguments made by pro-trade proponents are often couched in the rhetoric of market economics. Yet the international trade arena represents anything but a free market. Instead, tariffs, taxes, subsidies, regulations and other restrictions operate to disadvantage some countries. Because of their weaker bargaining and economic power, it is typically developing not developed countries that are on the losing end of this equation. The agricultural protectionism of the EU and USA, in particular, means that developing countries are unable to compete fairly.\n\nFurthermore, even if we were to accept that trade is more important, they should not be seen as alternatives; they can readily be complements. Trade is not inevitably magic and aid is not inevitably damaging. They depend on complementary policies. For example, aid-for-infrastructure programs that encourage trade could enable African exporters to compete with their Asian competitors 1.\n\n1. UNIDO, Industrial Development Report, 2009.\n", "title": "" }, { "docid": "fa0ebb538114afc796b9ee0096c2b9e9", "text": "finance economy general international politics politics general house prefers Trade can be as short term as aid is; demand is very cyclical so if a country specializes in providing that good or service it can soon find that the product they are providing is no longer desired by consumers, or that there is a new product that makes what they provide obsolete. Even if there is a long term partnership between two trading partners it may simply mean tying the poor country into a different kind of dependency. Instead of the poor country being dependent upon handouts it is dependent upon the richer country buying its products or not trying to undercut it.\n", "title": "" }, { "docid": "850aae458becc73e9749629ca5c39957", "text": "finance economy general international politics politics general house prefers While aid appears unsuccessful for Africa, the approach itself should not be criticized on the basis of results in one continent. Western countries have simply provided African countries with generous payments allowing them to stabilize their economy. It many aspects of life, emphasis is not often attributed to what resources are available but how they are used. Though more guidance on how to invest the money may have been useful, Africa itself must take responsibility for how it has spent the money. The evil behind aid is allegedly overreliance: a country becomes dependent on receiving more and more aid. However, a focused approach to budget and organization of capital could certainly put aid to good use.\n", "title": "" }, { "docid": "2d82dbf155d1d4eb94a5a003b0800a5a", "text": "finance economy general international politics politics general house prefers All countries have something to trade. Many of the world’s poorest countries have a lot of natural resources so they can take part in trade. Even if a country does not have sufficient natural resources it still has people. In order to be able to take part in the globalized manufacturing industry it need only be willing to accept lower wages than its rivals. Alternatively if it is landlocked and has not opportunity to trade in manufactures it can invest in education in order to become a services hub. 
All states have a comparative advantage somewhere; they just need to find it.\n", "title": "" }, { "docid": "60935020a5d1c61e168e7a07abd0a9f3", "text": "finance economy general international politics politics general house prefers This argument borders on the absurd. Trade is much more likely to yield benefits for the ordinary men and women of Africa than aid ever hoped to be. Aid and its unregulated flow are precisely what kept numerous dictators in power (Zimbabwe’s Mugabe, to name but one) allowing them to starve their people while taking weekend trips to the Ivory Coast in private jets. Trade, on the contrary, creates jobs, and those jobs create demand for other jobs - which is what matters to the ordinary person.\n", "title": "" }, { "docid": "39b37c937c08ba30798b440ec186fe5a", "text": "finance economy general international politics politics general house prefers Aid money is often misspent, even when handled honestly. By imposing solutions from outside, it favors big projects, \"grand gestures\" and centralization - all of which may be inappropriate, only benefit a small number of people, and suffer from unintended consequences. By contrast, the profits of trade trickle down to the whole population, giving people the power to spend additional income as they choose, for example by reinvesting it in worthwhile local industries and enterprises.\n", "title": "" }, { "docid": "88f8430e72b90c3bc89986f79e980d47", "text": "finance economy general international politics politics general house prefers Yes, trade might require infrastructure, but Asian countries required it just as much, maybe more than the African ones do. As Moyo argues in “Dead Aid” all of this is to be achieved not by clinging to aid, but by creating a stable enough atmosphere with favorable terms for FDI. The Chinese have already invested billions of dollars in Africa and are likely to invest much more. That way, the African countries get both trade and infrastructure, without being at the mercy of developed nations.\n", "title": "" }, { "docid": "3bb8eea633fbd5a6b8c330f2850228ce", "text": "finance economy general international politics politics general house prefers Even if that were true, people naturally want to trade with each other, seeking to turn their particular resources or skills to their advantage. All too often trade is limited not because government action is needed, but because the government actually gets in the way with restrictive rules and statist controls. For example, regardless of their terms of trade with developed nations, developing countries could all become more prosperous if they removed the barriers they have erected to trade with each other. Putting the emphasis on trade rather than aid redirects attention from what developed states should or could be doing for the developing world, to what developing countries can and should do for themselves.\n", "title": "" }, { "docid": "d4264520df44348da3f7c804744d7060", "text": "finance economy general international politics politics general house prefers Trade provides developing countries with an important basis for their own improvement.\n\nTo gear up to be successful trading partners, developing countries often need to go through a number of key changes. As well as developing their own economy and their manufacturing or service sectors, they may need to build trade infrastructure in other ways. For example, increased trade would focus their attention on such things as good governance, the benefits of a broadly stable currency and internal security. 
Although such developments may come about as a facilitator for trade, in the best case scenario they may be seen as structural changes which will have a trickle-down benefit for the broader society in the underdeveloped country. China for example has reformed its agriculture, created a large manufacturing sector and is increasingly moving into high tech sectors as a result of trading with, particularly exporting to, the rich world and as a result has lifted more than 600 million people out of poverty between 1981 and 2004 1.\n\n1 The World Bank, 'Results Profile: China Poverty Reduction', 19 March 2010, Retrieved 2 September 2011 from worldbank.org:\n", "title": "" }, { "docid": "d3adc65a21bcaab1a8b7b9b627fc2d5f", "text": "finance economy general international politics politics general house prefers Trade is a long-term basis for international co-operation.\n\nWhereas aid is mostly short term, particularly for individual projects or limited to the donors priorities, the other partner in a trading relationship is likely to represent an ongoing market for goods or services. So when a developing country has the capacity to engage in trade with another country, there is a strong likelihood that that trade will blossom into an ongoing trading partnership. This will allow a firm basis for a flow of cash or goods into the developing country, largely independently of whether the developed country is doing well or badly economically at a given moment. This can be contrasted to the flow of aid. It tends to be less predictable, both because it is manipulated for political reasons and also because it can be quite ephemeral and so, if the developed country goes through a bad economic time, the aid budget makes an easy target for a reduction in spending as is shown by the arguments in the United States where the USAID Administrator Shah \"We estimate, and I believe these are very conservative estimates, that H.R. 1[bill passed by republicans in the house cutting foreign spending] would lead to 70,000 kids dying,\"1.European trade with Africa may have decreased, but China’s demand for oil and raw materials is blossoming, and Africa is becoming a major supplier 2.\n\n1 Rogin, Josh, 'Shah: GOP budget would kill 70,000 children', foreignpolicy.com, 31 March 2011, Retrieved 1 September 2011 from Foreign Policy\n\n2 Moyo, D. (2009, March 21). Why Foreign Aid is Hurting Africa. Retrieved July 21, 2011, from The Wall Street Journal:\n", "title": "" }, { "docid": "718f40c973af8e758b18e22301f0f7a1", "text": "finance economy general international politics politics general house prefers Financial contributions from the West have proved detrimental for Africa.\n\nBetween 1970 and 1998 when aid was at its peak, poverty rose alarmingly from 11% to 66%. This statistic alone suggests aid is damaging to African welfare. Africa began borrowing money in the 1970s when interest rates were low, but a rising rates in 1979 caused 11 African countries to default. Even after restructuring, they fell deeper into debt. While the Marshall Plan had been a success, the same approach would not favor Africa; as Dambisa Moyo contends, it lacks the required institutions to utilize capital efficiently. Debt servicing meant money was passing from the poor to the rich, leaving Africa in a precarious global position. Furthermore, countries which have rejected aid as an approach to combat poverty have prospered, indicating an additional correlation between aid and a ruined economy 1.\n\n1 Edemariam, A. (2009, February 19). 
'Everybody knows it doesn't work'. Retrieved July 20, 2011, from The Guardian:\n", "title": "" }, { "docid": "241eda2ff9d8d537d3356608000de4c3", "text": "finance economy general international politics politics general house prefers The global economy is not welcoming to African players\n\nThe international trade arena represents anything but a free market. Instead, tariffs, taxes, subsidies, regulations and other restrictions operate to disadvantage some countries. Because of their weaker bargaining and economic power, it is typically developing not developed countries that are on the losing end of this equation. The agricultural protectionism of the EU and USA, in particular, means that developing countries are unable to compete fairly. In the EU, for example, each cow gets over 12 USD every day, which is many times more than what the average Sub-Saharan person lives on 1. Furthermore, Africa has yet to break into the global market for manufactured exports: this is very difficult precisely because of the success of low-income Asia.\n\n1 BBC News. (2008, November 20). Q&amp;A: Common Agricultural Policy. Retrieved July 21, 2011, from BBC News:\n", "title": "" }, { "docid": "c6edf73f3d86035ff9e5ac8345f7f0f6", "text": "finance economy general international politics politics general house prefers Trade requires infrastructure\n\nTrade does not exist in a vacuum. It needs a wider infrastructure to support it, e.g. roads, railways, ports, education to produce capable civil servants to administer trading rules, etc. For example Malawi as a landlocked country needs roads and railways to link it to ports in neighboring Angola and Mozambique. Without foreign aid, developing countries are not able to develop this kind of support, and so cannot participate effectively in international trade.\n\nThis is even more the case when it comes to creating the necessary legal infrastructure and effective civil service. Aid is not always in the form of money - it may also be given through expert advisors who help countries prepare for the challenges of globalization. Such were the efforts in the 1960s by the developing world, but they were dropped in favor of poverty relief. If restarted and restructured, they would yield much better results, without the fear of commodity prices dropping, enabling African countries to eventually stand on their own two feet. Corruption is a potentially huge problem as recognized by Sudan People’s Liberation Movement Secretary General Pagan Amum \"We will have a new government with no experience at governing. Our institutions are weak or absent. There will be high expectations. Hundreds of millions of dollars of oil money will be coming our way, as well as inflows of foreign aid. It's a recipe for corruption.1\" As a result it is not physical infrastructure that is needed but rather mechanisms for preventing corruption. Something that aid will always be much better at achieving than trade.\n\n1 Klitgaard, Robert, 'Making a Country', ForeignPolicy.com, 7 January 2011, Retrieved 2 September 2011 from ForeignPolicy.com\n", "title": "" }, { "docid": "2691b5ab0ac2193a357ae848e4de70a7", "text": "finance economy general international politics politics general house prefers Trade does not allocate resources effectively\n\nAid allows for money in a given country to be allocated well against need. At the micro- level as well as the macro, trade is an inefficient distributor of resources in a developing country. 
Under it, most if not all of the benefit of the trade will stay with a small elite of people who are often amongst the richest in the country in the first place. They may then move the money offshore again. Alternatively, if it remains within the developing country, it may well simply be used to buttress their own position in a way which further entrenches their social and economic position. So, the benefits of trade flow to few people and often they are the least needy. Aid, by contrast, may be targeted against specifically identified groups or areas on the basis of need, often being given through local groups, such as churches, mosques, health clinics, etc. If one looks at the Gini index (income and wealth equality) ranking, it is plain that the top (most inequality) is occupied by Sub-Saharan countries, fortifying the point 1.\n\n1 Mongabay. (2010, January 25). Distribution of Family Income. Retrieved July 21, 2011, from Mongabay:\n", "title": "" }, { "docid": "19d6a9b9e2d618671d0c6661ab9cb986", "text": "finance economy general international politics politics general house prefers Trade may not help those most in need.\n\nAid is linked to need. Trade rewards those who are able and willing to engage in trade. This involves a number of elements – as well as having the rights sorts and quantity of goods and services and being willing to sell at the desired price, a country may need to meet certain other criteria of a purchasing country. For example, that country may make demands in terms of corruption, human rights, political support at the United Nations, or any other of a large number of possible preconditions for a trading partnership. This will suit some countries in the developing world. But for others it will act as a bar to trade. They will therefore not receive the redistribution of wealth that is claimed for the global trading web. In this way, trade can distribute its benefits very unevenly. By contrast, aid can in theory be more evenly distributed and can be targeted against identified need rather than against the ability to compete in a trading marketplace. While aid has not always been targeted effectively and has sometimes been wasted there have been efforts to increase accountability and coordinate aid better such as the Paris Declaration on Aid Effectiveness 1.\n\n1 Development Co-operation Directorate, 'Paris Declaration and Accra Agenda for Action', OECD, Retrieved 2 September 2011 from oecd.org:\n", "title": "" }, { "docid": "6135c4ab1f8a8695a4837689c8efa483", "text": "finance economy general international politics politics general house prefers Free trade is dangerous\n\nExposing fragile developing economies to free trade is very risky. There is a short-term danger that a flood of cheap (because of developed world subsidies) imports will wreck local industries that are unable to compete fairly. For example China’s dominance in textile manufacturers has reduced the amount African countries can export to the US and Europe and is causing protests in Zimbabwe and South Africa against cheap imported Chinese clothing. 1\n\nIn the longer term economies are likely to become dangerously dependent upon \"cash crops\" or other commodities produced solely for export (e.g. rubber, coffee, cocoa, copper, zinc), rather than becoming self-sufficient. Such economies are very vulnerable to big swings on the international commodity markets, and can quickly be wrecked by changes in supply and demand. For illustration, one only needs to look at Greenfield’s “Free market-free fall” 2. 
He writes: “Trade liberalization encouraged increased production, leading to overproduction that pushed down prices, driving down farmers’ incomes…” Combined with the protectionism of the West (the CAP in the EU) trade is dangerous for Africa. Aid is more stable and certain, and is better for frail countries.\n\n1Africapractice, 'The Impact of the Chinese Presence in Africa', 26 April 2007, retrieved 1 September 2011 from David and Associates\n\n2Greenfield, G. (n.d.). Free Market Free Fall. Retrieved July 21, 2011, from UNCTAD:\n", "title": "" } ]
arguana
bfa5465590429ec1adc161b49ff6bfde
Keeping funds from government has negative consequences for spending Let us not forget that in most cases when we talk about oil revenues, we are talking about very large sums of money, which can have an immense impact on the budget. In countries where oil already contributes to the budget, any change could be immensely disruptive to the government’s ability to deliver services. If we take Venezuela as an example, oil revenues account for 25% of GDP (1); with government expenditure at 50% of GDP (2), any drop in oil revenues would have an immense impact upon social policies such as education, health and welfare. For those countries where the funding would be new, the state would be foregoing a potentially transformative sum of money that could help to eliminate poverty or provide universal healthcare and education. Such a drop in funds flowing into the government would also have a huge impact on politics; politicians would block the implementation of a proposal that takes away so much revenue. If it did happen, the independent fund would simply get criticism heaped on it as an excuse for why services can’t be improved. (1) Annual Statistical Bulletin 2013, ‘Venezuela facts and figures’, OPEC, 2013, http://www.opec.org/opec_web/en/about_us/171.htm (2) 2013 Index of Economic Freedom, ‘Venezuela’, Heritage Foundation, 2013, http://www.heritage.org/index/country/venezuela
[ { "docid": "cd94fb7677dd9bf9765181fd6b8f7fc7", "text": "economic policy international africa government house would put taxesrevenue oil The change need not be dramatic; it need not apply to all oil revenues at once. For example only revenues from new fields could go into the independent fund while existing revenues to the government are maintained. Services therefore won’t need to undergo contraction.\n\nThe impact on politics would also be minor; people elect those who get things done not those who blame others for their problems. Moreover all of the politicians will have the same constraint of a lack of funds so no single party will have an unfair advantage.\n", "title": "" } ]
[ { "docid": "00f2b8849c6725d065733336c3bc40b2", "text": "economic policy international africa government house would put taxesrevenue oil Politicians only think about themselves and only for the short term looking for re-election. The result will be the money used for populist measures even if it is not sustainable. The example of Greece proves this idea, as there public sector wages rose 50% between 1999 and 2007, despite having a deficit (1). Everyone wants more money, so will vote for such measures. They don’t think about the question of how that money will be acquired in the long run so will go for unsustainable policies that kick the problem to future generations. Only an independent body will be immune to short-termism.\n\n(1) ‘Eurozone crisis explained’, BBC News, 27 November 2012, http://www.bbc.co.uk/news/business-13798000\n", "title": "" }, { "docid": "dae40ae01250775a92e6876b931f4f63", "text": "economic policy international africa government house would put taxesrevenue oil Is it better that money should be wasted immediately or should the return be spread out? Any prudent population would choose the latter. Most populations are wary of untrammelled exploitation of natural resources of the kind being promoted for fear of the devastating environmental impact.\n\nRecent failures of big companies to protect the environment, like Chevron(1), only add to this discontent and lack of trust. The case of Rosia Montana Gold Company which wants to get a permit to mine for gold in Romania is also very illustrative. Following the request of this company to exploit certain mountainous areas in the Carpathian, a series of nation-wide protests have emerged. Thousands of people from across the nation are going out on the streets on a weekly basis to protest against this project.(2)\n\nAn independent fund won’t disincentivise investment; money will still be returned to the nation’s treasury to be used by politicians but because it takes longer to flow into the treasury there is less incentive for reckless investment that disregards the people’s will.\n\n(1) “Chevron's Toxic Legacy in Ecuador”, Rainforest Action Network, http://ran.org/chevrons-toxic-legacy-ecuador\n\n(2) Vlad Ursulean “Stopping Europe's biggest gold mine”, Al Jazeera, 27 Nov 2013 http://www.aljazeera.com/indepth/features/2013/11/stopping-europe-biggest-gold-mine-20131117102859516331.html\n", "title": "" }, { "docid": "4a6371240ae2e1d3d9e2b3bed1af9b8d", "text": "economic policy international africa government house would put taxesrevenue oil This is based on several potentially faulty assumptions first the trust fund may not be aimed at helping to prevent pollution of clean up afterwards; it may simply be given the role of generating the biggest possible return. Second it assumes that politicians see themselves as tied to the people so that they have a reason to prevent pollution, in practice in an autocracy or a faulty democracy this may not be the case. The desire may therefore be to invest as much money as possible in the trust fund and therefore to exploit the resource as fully and cheaply as possible. 
Even if the money is going into a trust fund the self interest is in polluting as we should remember that dictators are likely to believe they will still be around to see the benefits in decades to come.\n", "title": "" }, { "docid": "d7a7611ccd0b8123980f4ef175c4204e", "text": "economic policy international africa government house would put taxesrevenue oil Having oil does not just provide the money to undermine, or prevent democracy taking hold; it also provides an immense source for corruption. Oil revenues provide a revenue stream that is not dependent on the people but simply upon the global market and oil production. In a country with no checks and balances, accountability or transparency the money will inevitably go to the elite. This is how Equatorial Guinea can be rich while having most of the population in poverty. Dictator Obiang himself is worth an estimated $700million or the equivalent of about 4% of GDP.(1)\n\nA trust fund can ensure that money from oil goes to the poorest not the richest. It is managed outside the country and away from political pressure. If the government is corrupt and uses the national budget to its own ends the trust fund can provide the dividends as investment in individual development projects to ensure the money is used where it is most needed. All the time it can be transparent to show when and where the government is trying to influence it or get backhanders.\n\n(1) ‘The Richest World Leaders Are Even Richer Than You Thought’, Huffington Post, 29 November 2013, http://www.huffingtonpost.com/2013/11/29/richest-world-leaders_n_4178514.html?utm_hp_ref=tw\n", "title": "" }, { "docid": "35e05b93c27c2e8cb4792d9beb5649f3", "text": "economic policy international africa government house would put taxesrevenue oil Not all politicians are incapable of investing for the long term. After the economic crisis in which the world saw the perils of “living in the moment”, politicians will be more cautious in the way they spend money. Politicians have in the past been able to build visionary projects such as the EU, or high speed rail, or invest in reducing greenhouse gas emissions; in Europe, domestic greenhouse gas emissions fell by over 15 % between 1990 and 2010, due also to improvements in energy and fuel efficiency, so there is no reason to think they could not do so again.(1)\n\nAs a result, we do not need a separate group for taking these decisions for the politicians, as they would do it by themselves.\n\n(1) European Environment Agency, “Mixed success for European environmental policies”, Spiral, 2012 http://www.spiral-project.eu/content/mixed-success-european-environmental-policies\n", "title": "" }, { "docid": "13f49bda7c08667e4326dad90d5c9668", "text": "economic policy international africa government house would put taxesrevenue oil The biggest problem African countries face is instability whether from rebellions, coups, international conflicts, or terrorist organisation. The inevitable result is violence. What the population needs is safety to enable social benefits like healthcare and education. Money to pay for an army can therefore be a good thing. A good well paid professional force is needed to ensure stability and prevent conflict. 
Nigeria for example would surely have split apart without a large army; violence from terrorist groups like Boko Haram is increasing creating Muslim-Christian tensions.(1) Without stability there can be no democracy; votes can’t be held, so financing for stability is a good thing.\n\nEgypt is a good example that shows a well-trained army can work for the benefit of democracy; it first stood aside while the people overthrew Egyptian dictator Mubarak and then stepped in when it was believed Morsi threatened democracy.\n\n(1) “Nigeria’s troubles ,Getting worse”, The Economist, Jul 14th 2012 http://www.economist.com/node/21558593\n\n(2) Siddique, Haroon, ‘Egypt army was ‘restoring democracy’, claims Kerry’, theguardian.com, 2 August 2013, http://www.theguardian.com/world/2013/aug/02/egypt-army-restoring-democracy-kerry\n", "title": "" }, { "docid": "9db8a2705128574aeb58bb5ad942039e", "text": "economic policy international africa government house would put taxesrevenue oil An independent trust fund discourages investment.\n\nWhen it is politicians who control both the investment and the amount funds being returned from that investment then they have an incentive to encourage more investment. They will want more exploration to find more resources, they will promote technological advances to be able to extract more from the same fields, and they will be willing to grant more production licences.\n\nIf on the other hand the money goes into a trust fund then the government and parliament has little incentive to encourage the market and every incentive to hold it up. The oil only provides a risk; unpopularity due to environmental impacts without any benefit in return.\n\nThe result will be that the costs of drilling will be seen in the environmental damage it causes while communities do not get any of the benefit as the money is being squirreled away ‘for the future’. This is hindering the market and so reducing the economic benefits to the country.\n", "title": "" }, { "docid": "67978c9ae62193349c7ff27b2d8db126", "text": "economic policy international africa government house would put taxesrevenue oil For the people and accountable to the people\n\nA country’s resources should be used democratically. The resources that are found under the soil belong to the nation and therefore they should be used for the benefit of the people. Even where there is private ownership extending to mineral and energy resources it is the responsibility of the owners to use those resources for the good of the nation. The only way for this to happen is if there is a democratically accountable body in charge of the funding; this has to mean a democratic parliament.\n\nPutting the money in an ‘independent fund’ is not very accountable. Even if it is independent there is no saying what the money will be used for, or that the fund is not really designed to funnel money back to a few individuals.\n", "title": "" }, { "docid": "88609b9b6d169c7121a576e963f284e7", "text": "economic policy international africa government house would put taxesrevenue oil A fund would prevent pollution\n\nEnvironmental damage is an example of the ‘tragedy of the commons’ where if a resource is not owned by an individual (or is free to all) then it will be overexploited. This is because it is in everyone’s self-interest to use it as much as possible. 
The result is pollution; politicians and oil companies want to exploit the oil as cheaply as possible so they dump pollution on the local population.\n\nFor example, the $19 billion ruling handed down last year by a court in Lago Agrio, a town near Ecuador’s border with Colombia, held Chevron accountable for health and environmental damages resulting from chemical-laden wastewater dumped from 1964 to 1992(1).\n\nPutting oil wealth into a trust fund can help prevent this kind of abuse. There are two reasons for this. First if politicians are not getting an immediate benefit they will be less inclined to overlook pollution and there won’t be money to buy support for drilling and pollution to continue. The second is that since the fund is meant to provide long term benefits and investments one of the things it can be doing is being devoted to cleaning up any pollution that is created thus protecting the future generations.\n\n(1) Joe Carroll, Rebecca Penty &amp; Katia Dmitrieva ” Chevron’s $19 Billion ‘Disaster’ Gets Hearing”, Bloomberg, 29 November 2012, http://www.bloomberg.com/news/2012-11-29/chevron-s-19-billion-disaster-gets-hearing-corporate-canada.html\n", "title": "" }, { "docid": "f65933a932fcdf62ebfc873136c4e91c", "text": "economic policy international africa government house would put taxesrevenue oil Long term benefits\n\nIt is very tempting to recklessly use an unexpected windfall of money immediately. But the best thing to do is to invest for the long term either to build infrastructure that will pay back its cost in future economic growth, or to invest it in funds that will continue paying dividends long into the future. The example of how Britain and Norway spent their North Sea oil revenues is very revealing: “the British governments spent their North Sea winnings on cutting national borrowing and keeping down taxes. Whatever came in went straight into the day-to-day budget. By contrast, for the past 16 years Norway has squirreled away the government's petroleum revenue in a national oil fund”(1) which now has $810 billion in assets, almost twice the country’s GDP, providing 5% returns.(2)\n\nThe advantage of such investment is that they will continue to bring income even after the oil is gone. The oil will therefore benefit future generations as well as the current one. A panel of experts which are immune to political influence is the most likely body to think about long-term needs of the country and devise a plan which can ultimately bring income for a long period of time.\n\n(1) Simon Gompertz “Has the UK squandered its North Sea riches?” , BBC News , 8 October 2012 http://www.bbc.co.uk/news/business-19871411\n\n(2) Jonas Bergman, “World’s Biggest Wealth Fund Says Record Size Is Posing Hurdles”, Bloomberg, 1 November 2013, http://www.bloomberg.com/news/2013-11-01/world-s-biggest-wealth-fund-says-record-size-is-posing-hurdles.html\n", "title": "" }, { "docid": "c4438974caf8e4a02fafca4b0fbbb940", "text": "economic policy international africa government house would put taxesrevenue oil Oil wealth flowing to politicians discourages democracy\n\nThe wealth from oil, or other natural resources, holds back democratization as a result of the “resources curse” or “paradox of plenty”. Resources provide money, and money is what is needed to run a security state. When money can come from natural resources there is little need to tax the people, instead it becomes a “rentier” economy where the dictator has resources to buy support without recourse to taxation. 
[1] It is essentially the opposite of the well-known idea ‘no taxation without representation’; if the money comes not from taxes but from oil what need is there for democracy?\n\nThis proposal takes away the option of having access to large oil revenues instead providing only a limited amount to the state rather than the pockets of the dictator. This prevents the buying of key groups such as the army and the policy who can be used to repress the population. It is not by chance that the only countries in the Arab Middle East that could be considered democracies before the Arab Spring never had oil; Jordan and Lebanon.\n\n[1] Michel Chatelus and Yves Scehmeil, ‘Towards a New Political Economy of State Industrialisation in the Arab Middle East’, International Journal of Middle East Studies, Vol. 16, No. 2 (May, 1984), pp.251-265, pp.261-262\n", "title": "" }, { "docid": "828c68571ab6b913465f44fa52cc6a83", "text": "economic policy international africa government house would put taxesrevenue oil Preventing Corruption\n\nHaving oil does not just provide the money to undermine, or prevent democracy taking hold; it also provides an immense source for corruption. Oil revenues provide a revenue stream that is not dependent on the people but simply upon the global market and oil production. In a country with no checks and balances, accountability or transparency the money will inevitably go to the elite. This is how Equatorial Guinea can be rich while having most of the population in poverty. Dictator Obiang himself is worth an estimated $700million or the equivalent of about 4% of GDP.(1)\n\nA trust fund can ensure that money from oil goes to the poorest not the richest. It is managed outside the country and away from political pressure. If the government is corrupt and uses the national budget to its own ends the trust fund can provide the dividends as investment in individual development projects to ensure the money is used where it is most needed. All the time it can be transparent to show when and where the government is trying to influence it or get backhanders.\n\n(1) ‘The Richest World Leaders Are Even Richer Than You Thought’, Huffington Post, 29 November 2013, http://www.huffingtonpost.com/2013/11/29/richest-world-leaders_n_4178514.html?utm_hp_ref=tw\n", "title": "" } ]
arguana
8a7cb6082743ab47c270e5a415bbfdc8
Eurobonds would create problems for Germany The situation that is implemented in the status quo, with the European Stability Mechanism trying to save countries in collapse, will no longer be an option after introducing Eurobonds. Previous arguments have explained how interest rates will not be lowered enough to make countries stable again, but another problem is that they will inhibit any chance of a plan B. First of all, Germany has had low interest rates for its government bonds throughout the last few years of the crisis. [1] This allows Germany to take loans cheaply, helping to sustain its manufacturing industry and government spending, and to finance bailouts. If Germany's borrowing costs rose to the Eurozone average, it could cost Berlin an extra €50bn a year in repayments – almost 2% of its GDP. [2] This will clearly impact on Berlin's ability and willingness to contribute to the European Stability Mechanism, with the knock-on effect that if, despite Eurobonds, another bailout is needed it may not be possible to raise the funds to actually carry it out. Secondly, Eurobonds create obvious winners and losers; Germany and other prudent nations such as Austria and Finland, as well as the slightly more profligate France, will have to suffer the consequences of the economic crisis caused by other countries in the union: Greece, Ireland, Spain and Portugal. With higher interest rates they will need to engage in their own austerity campaigns to compensate, which will affect economic growth and create discontent. Why should we punish Germany for the wrongdoing of other states? [1] Bloomberg, ‘Rates &amp; Bonds’, accessed 15 October 2013, http://www.bloomberg.com/markets/rates-bonds/ [2] Inman, Philip, ‘Eurobonds: an essential guide’, theguardian.com, 24 May 2012, http://www.theguardian.com/business/2012/may/24/eurobonds-an-essential-guide
[ { "docid": "dd88c34e907e922d249e9fea09263916", "text": "economic policy eurozone crisis finance house would introduce eurobonds There is a common responsibility in the European Union for helping countries that are hit harder by economic crises than the others. If Eurobonds create winners and losers, the same thing can be said about the economic crisis. Germany was one of the winners and therefore has the duty to help the others. The Eurozone crisis has created a bigger demand for German bonds and lowered the interest rate they have to pay. Germany has such low interest rates because Spain, Italy and Greece are incapable of sustaining their debt, it is therefore a safe haven for people who want to buy government bonds. It is estimated that Germany gained 41 billion euros [1] in ‘profit’ from these lower interest rates as a result of the crisis and therefore has the ability and the moral duty to help countries that are worse-off. More than that, every prudent creditor has a profligate debtor. French and German banks could risk loosing a few hundred millions each if Greece defaults, the creditor accepted the risk when they lent the money. [2] We should remember that the core of the economic success of countries such as Germany has been the Euro helping to increase exports; these exports were what Greeks were buying with the credit they were getting from foreign banks.\n\n[1] SPIEGEL/cro, ‘Profiteering: Crisis Has Saved Germany 40 Billion Euros’, Spiegel Online, 19 August 2013, http://www.spiegel.de/international/europe/germany-profiting-from-euro-crisis-through-low-interest-rates-a-917296.html\n\n[2] Slater, Steve, and Laurent, Lionel, ‘Analysis: Greek debt shadow looms over European banks’, Reuters, 20 April 2011, http://www.reuters.com/article/2011/04/20/us-europe-banks-idUSTRE73J4BZ20110420\n", "title": "" } ]
[ { "docid": "bc5b41b5e69f15c212a2da1bb00a1d3e", "text": "economic policy eurozone crisis finance house would introduce eurobonds Sometimes, a leap of faith is what needs to be taken in order to fix such big problems. First of all the willingness of the union to do more in helping countries that having difficulties will improve its image both in these countries and abroad because it will show the EU sticking to its core principles. Even if we agree that Eurobonds might be a risky idea, something needs to be done to fix the economy. We have clearly seen how bailouts do not work and are not providing a permanent solution. The Eurozone is likely to decide on a third bailout for Greece in November 2013 and little proof that this will make the situation better for the Greeks. [1] Furthermore, the temporary solution of bailouts is taken without the consent of the electorate so the problem of a democratic deficit exists in both cases. Acting now to end the crisis will mean a possible end to such sticking plasters being applied without democratic consent. The EU will then be able to concentrate on demonstrating the advantages of the solution it has taken.\n\n[1] Strupczwski, Jan, ‘Decision on third Greek bailout set for November: officials’, Reuters, 5 September 2013, http://www.reuters.com/article/2013/09/05/us-eurozone-greece-idUSBRE9840NN20130905\n", "title": "" }, { "docid": "5c6a2f593f29db5a423c06c3e5c8883f", "text": "economic policy eurozone crisis finance house would introduce eurobonds Moral hazard is not going to happen in the European Union because alongside the benefits of the Eurobonds comes the control from the European Central Bank or other measures imposed by the rest of the members. This is already happening in the status quo, where countries are forced to impose austerity measures in order to receive bailout founds. [1] Under the model proposed where the ECB can control the lending ability of any country in the union, by allowing the loan or denying it at a certain limit. Countries will most certainly be held accountable if they fail to pay back their loans by not giving them access to further bond issuing. Eurobonds are not a tap governments can use for spending recklessly.\n\n[1] Garofalo, Pat, ‘Greek Austerity, the Sequel’, U.S.News, 9 July 2013, http://www.usnews.com/opinion/blogs/pat-garofalo/2013/07/09/imf-forces-more-austerity-on-greece-in-return-for-bailout-loans\n", "title": "" }, { "docid": "7028fe733d4498adffc9c0ed21c156c1", "text": "economic policy eurozone crisis finance house would introduce eurobonds There are some assumptions made in the construction of this argument. First of all, you can’t hide the risk from the economic community. There is no guarantee that when issuing Eurobonds, the interest rates will drop. This is happening for two main reasons.\n\nFirstly, according to the proposition model, the bonds will still be issued at a national level, showing investors if the money is going to Spain, Italy or Germany, France. While these should in theory have the same interest rates will investors really buy Eurobonds where the money is destined for Greece if not getting much interest? Perception still matters to the markets; will Greece and Germany really suddenly be perceived in the same way.\n\nSecondly, even if the European Union decides to borrow money as a whole, its image is not a good one. 
Everybody knows the major problems that the union is facing right now so it is possible that concerns about the stability of the Euro as a whole will mean Eurobonds drive interest rates up, not down. Greece was still downgraded after its first bailout from CCC to C by the Fitch Financial Service even if the money were backed up by the ECB, being backed by the whole zone did not change the local fundamentals. [1]\n\n[1] AP/AFP, ‘Greek Credit Downgraded Even With Bailout’, Voice of America, 21 February 2012, http://www.voanews.com/content/greek-credit-downgraded-even-with-bailout-139975563/152377.html\n", "title": "" }, { "docid": "90acf17dabf9b7fd869e965d2e942bd9", "text": "economic policy eurozone crisis finance house would introduce eurobonds The problem with long-term regulations is not that they do not exist but rather the fact that they are not imposed. There is no need for further control and regulation when the European Union already has a mechanism that will prevent economic crisis if it is stuck to. The Maastricht Treaty clearly states that countries in the European Union shall not have a government deficit that exceeds 3% of the GDP and the government debt was limited to be no larger than 60% of the GDP. [1] These measures should be enough to prevent any country in the union to collapse. The major problem was that the Maastricht Treaty was not respected by the member states and little or no sanctions were imposed to ensure compliance. Even comparatively stable countries have deficits above 3%, France had a deficit of 4.8% in last year. [2] The simple solution would be keeping the regulation of the already existing treaty and sanction countries that exceed their deficits and not impose new rules.\n\n[1] Euro economics, ‘Maastricht Treaty’, http://www.unc.edu/depts/europe/euroeconomics/Maastricht%20Treaty.php\n\n[2] The World Factbook, ‘Budget surplus (+) or Deficit (-)’, cia.gov, 2013, https://www.cia.gov/library/publications/the-world-factbook/fields/2222.html\n", "title": "" }, { "docid": "1554742488ee5250ed6a06ff0c88b415", "text": "economic policy eurozone crisis finance house would introduce eurobonds Integration cannot happen on the hoof. The euro crisis and the political and social distress in the European Union have created negative sentiments when talking about the Union. The European citizens do not want these kinds of measures and there is a general sentiment of euro skepticism. Countries like Germany are no longer interested in paying for Greek mistakes and Angela Merkel is strongly opposing the idea of Eurobonds, saying that Germany might leave the union. [1] Clearly this is not the time to be forcing through more integration against the will of the people.\n\nMore than that extremist parties are on the rise. An anti-Muslim, anti-immigration and anti-integration party, France’s National Front has come out top in a poll of how French people will vote European Union Parliament elections. [2] In contrary to the false connection between poor economy and extremism, it comes in hand the fact that the National Front reached the runoff in the 2002 French presidential elections. 
[3] In conclusion, people are not willing to invest more in the union but rather wanted to take a step back from integration even before the crisis.\n\n[1] Cgh, ‘The Coming EU Summit Clash: Merkel Vows ‘No Euro Bonds as Long as I Live’, Spiegel Online, 27 June 2012, http://www.spiegel.de/international/europe/chancellor-merkel-vows-no-euro-bonds-as-long-as-she-lives-a-841163.html\n\n[2] Mahony, Honor, ‘France’s National Front tops EU election survey’, euobserver.com, 9 October 2013, http://euobserver.com/political/121724\n\n[3] Oakley, Robin, and Bitterman, Jim, ‘Le Pen upset causes major shock’, CNN World, 21 April 2002, http://edition.cnn.com/2002/WORLD/europe/04/21/france.election/?related\n", "title": "" }, { "docid": "b533c14792c83d41aa65143858a13a19", "text": "economic policy eurozone crisis finance house would introduce eurobonds Eurobonds create moral hazard\n\nThe policy proposed will shift responsibility for bad economic decisions and create moral hazard due to the lack of accountability. If the European Union decides to introduce bonds with the same interest rate for all countries, everyone in the union will have to suffer for the mistakes made by Ireland, Greece, Spain, Italy or Portugal (or any other state that may make them in the future). The burden will be shifted to the whole union in the form of higher interest rates for the prudent and countries that made mistakes in the past will pay no price for their economic instability and poor decision-making. This situation will happen if the Eurobonds indeed function as they are planned to and the interest rates will be kept low by comparison to the current rates for Greece, Italy etc. More than that, this situation will lead to what economist call the moral hazard. Moral hazard appears where a person, institution or national government in this case is not made responsible for past actions and so does not change their ways in response; insulating someone from the consequences of their actions takes the learning out of their actions. If countries in distress are not made responsible for their irrational spending made in the past (not just governments but also having trade deficits, banks too willing to lend etc.), there is no reason why these countries should alter their approach to the economy. Accountability to the market is what will resolve the economic crisis and prevent another. This can only be done without Eurobonds.\n", "title": "" }, { "docid": "ab3a312f5964fd16cc6ccbe78397f7bd", "text": "economic policy eurozone crisis finance house would introduce eurobonds Eurobonds create a long term burden\n\nIntroducing Eurobonds will increase the burden for the European Union as a whole and change the responsibility in the long-term. Right now, countries are willing to help one-another and the best example is the European Stability Mechanism, a program designed to help countries in distress with major economic potential. [1] This is happening because the European Union is not fully responsible for the mistakes of the countries in the Eurozone. Of course, Eurobonds is just taking a step further but it also promotes a bigger burden for the union. Such a long term burden should not be decided and imposed in a time of crisis. If we let the European Union and the ECB decide to back national loans and approve Eurobonds it will effectively be imposed upon the people. The idea is not popular with many national electorates and such a decision will have to be taken without their consent. 
Germany is the clearest example, in a ZDF television poll, 79% said that they are opposing the idea of Eurobonds. [2] The real problem is that this is a one way street, it would be very difficult to reverse course as interest rates would immediately shoot up again thus immediately recreating the crisis if there were such an attempt. Any attempt at imposition without a clear democratic mandate throughout the union could seriously damage the EU by creating a popular backlash.\n\n[1] European Stability Mechanism, ‘About the ESM’, esm.europa.eu, http://www.esm.europa.eu\n\n[2] AP, ‘Poll: Germans strongly against eurobonds’, Bloomberg Businessweek, 25 November 2011, http://www.businessweek.com/ap/financialnews/D9R7R5J81.htm\n", "title": "" }, { "docid": "9b6db8b9a24b37d60c1856d95381c9a9", "text": "economic policy eurozone crisis finance house would introduce eurobonds Eurobonds even up interest rates within the Union\n\nIntroducing Eurobonds will lower interest rates for bonds issued by national governments so making the loans affordable. The most recent example of this problem is the need of recapitalization of banks in Cyprus. Although government debt and interest rates were not the direct problem if the government had been able to borrow at low interest rates to recapitalize its own banks then it would have not needed a bailout from the rest of the Eurozone. [1] In order to avoid these kinds of solutions and put people back to work in countries like Portugal, Italy or Spain, national governments need a bigger demand for their bonds so that interest rates go down.\n\nRight now, sovereign-bonds are not affordable for the government as their interest rates are extremely high. Greece has an interest rate of 9.01%, Portugal 6.23%, and Italy and Spain near 4.30%. [2] If we choose to bundle the bonds together we will obtain a single interest rate that will lower the price of bonds and permit countries to borrow more, the price would be closer to Germany’s than Greece’s as the Eurozone as a whole is not more risky than other big economies. More than that, the markets won’t be worry anymore of the possible default of countries like Greece; as the bonds are backed up by the ECB and indirectly by other countries in the union, the debtors will know that their loans will be repaid because in the last resort more financially solvent countries take on the burden. When the risk of default is eliminated, the demand for government bonds will rise and the interest rates will go down. It is estimated that Italy could save up to 4% of its GDP [3] and Portugal would see annual repayments fall by 15bn euros, or 8% of its GDP. 
[4]\n\n[1] Soros, George, ‘How to save the European Union’, theguardian.com, 9 April 2013, http://www.theguardian.com/business/2013/apr/09/eurozone-crisis-germany-eurobonds\n\n[2] Bloomberg, ‘Rates &amp; Bonds’, accessed 15 October 2013, http://www.bloomberg.com/markets/rates-bonds/\n\n[3] Soros, George, ‘How to save the European Union’, theguardian.com, 9 April 2013, http://www.theguardian.com/business/2013/apr/09/eurozone-crisis-germany-eurobonds\n\n[4] Soros, George, ‘How to save the European Union’, theguardian.com, 9 April 2013, http://www.theguardian.com/business/2013/apr/09/eurozone-crisis-germany-eurobonds\n", "title": "" }, { "docid": "ef6c3d90884ea68fdb1f7e372c9eab60", "text": "economic policy eurozone crisis finance house would introduce eurobonds The long term benefits of Eurobonds\n\nThe European Union should not only focus on the present but also try to find a permanent solution in resolving and preventing economic crisis. The solution that is implemented right now through the European Stability Mechanism is a temporary one and has no power in preventing further crisis. First of all, the failure of the European Union to agree on banks bailout is a good example. [1] As economic affairs commissioner Olli Rehn admitted the bailout negotiations have been \"a long and difficult process\" [2] because of the many institutions and ministers that have a say in making the decision. More than that, it sometimes takes weeks and even months until Germany and other leaders in the union can convince national parliaments to give money in order for us to be able to help those in need.\n\nIssuing bonds as a union of countries will provide more control to the ECB that will be able to approve or deny a loan – one option would be that after a certain limit countries would have to borrow on their own. [3] This will prevent countries from borrowing and spending irrationally like Greece, Portugal, Spain and Italy did in the past. The unsustainable economic approach can be easily seen in the fact that public sector wages in Greece rose 50% between 1999 and 2007 - far faster than in most other Eurozone countries. [4] Clearly Greece could make the choice to go separately to the market to fund this kind of spending but it would be unlikely to do so.\n\n[1] Spiegel, Peter, ‘EU fails to agree on bank bailout rules’, The Financial Times, 22 June 2013, http://www.ft.com/intl/cms/s/0/8bbfdf84-daeb-11e2-a237-00144feab7de.html#axzz2hG9t5YAS\n\n[2] Fox, Benjamin, ‘Ministers finalise €10 billion Cyprus bailout’, euobserver.com, 13 April 2013, http://euobserver.com/economic/119782\n\n[3] Plumer, Brad, ‘Can “Eurobonds” fix Europe?’, The Washington Post, 29 May 2012, http://www.washingtonpost.com/blogs/wonkblog/post/why-eurobonds-wont-be-enough-to-fix-europe/2012/05/29/gJQACjR1yU_blog.html\n\n[4] BBC News, ‘Eurozone crisis explained’, 27 November 2012, http://www.bbc.co.uk/news/business-13798000\n", "title": "" }, { "docid": "a9fb9af651cd46747e7c2d59b9291db8", "text": "economic policy eurozone crisis finance house would introduce eurobonds Eurobonds help European integration\n\nOne of the most important European Union principles is solidarity and mutual respect among European citizens [1] and this can only be achieved by more integration and stronger connections between states. 
The economic crisis has clearly shown that more integration is necessary if Europe is to prevent suffering and economic hardship.\n\nFrom the economic perspective, unemployment rates reached disastrous levels in 2012 with Greece at 24,3% and Spain 25%. [2] There is a lack of leadership and connection between countries in the European Union that is not allowing them to help one-another and solve the economic crisis.\n\nFrom the political point of view the result of this is that extremist parties are on the rise with the best example of Golden Dawn in Greece. [3] While in 1996 and 2009 the party didn’t win any seats in the Greek Parliament, after the crisis hit in June 2012 they won 18 seats. [4] In time of distress, the logical solution is not that every country should fight for itself but rather the willingness to invest and integrate more in the union to provide a solution for all.\n\nEurobonds provides the integration that will help prevent these problems, it will both halt the current crisis of government debts because governments will have lower interest repayments and not have the threat of default, and it will show solidarity between members. This in turn will help any future integration as showing that Europe cares for those in difficulty will make everyone more willing to invest in the project.\n\n[1] Europa, ‘The founding principles of the Union’, Europa.eu, http://europa.eu/scadplus/constitution/objectives_en.htm#OBJECTIVES\n\n[2] Eurostat, ‘Unemployment rate, 2001-2012 (%)’, European Commission, 27 June 2013, http://epp.eurostat.ec.europa.eu/statistics_explained/index.php?title=File:Unemployment_rate,_2001-2012_(%25).png&amp;filetimestamp=20130627102805\n\n[3] ‘Golden Dawn party’, The Guardian, http://www.theguardian.com/world/golden-dawn\n\n[4] Henley, Jon, and Davies, Lizzy, ‘Greece’s far-right Golden Dawn party maintains share of vote’, theguardian.com, 18 June 2012 http://www.theguardian.com/world/2012/jun/18/greece-far-right-golden-dawn\n", "title": "" } ]
arguana
0edde255150f96c020c6f32a1c9b8257
The government has no right to tell business what it should charge for its goods. It should be up to business what it charges for its goods; if it decides to charge less than the cost price, it must have a market-based reason to do so, and it is not the place of government to intervene. It is well-known that consumers focus on the prices of a few staple goods, such as bread, milk, baked beans, etc. So it is rational for retailers with high fixed costs (in wages, rents, power etc.) to set the prices for these key products very low, and even make a loss on selling them, because it will entice more shoppers into their stores. These consumers will also buy other products on which the store does make a profit, and overall sales volumes and profits will rise.
[ { "docid": "ed6587a7370e691dd70f8271697cd18e", "text": "business economy general house would prohibit retailers selling certain items The government should be able to stop large retailers from exploiting consumers and producers. There is no doubt that retailers have a reason for selling items below market value, but they are only able to profit from such an illogical strategy by exploiting consumers and producers. They trick consumers into buying more expensive items and they force producers who have minimal leverage to lower the wholesale price in order to take the loss leader price into account.\n", "title": "" } ]
[ { "docid": "b8752e9ddd044fb2a7eb24f36c34a9f1", "text": "business economy general house would prohibit retailers selling certain items There is a good and a bad side to loss leaders for consumers, but prohibiting the practice will always be worse. The obvious benefit to consumers of loss leaders is that they are inexpensive goods to buy. While it is possible that some people will then buy more expensive products because they have entered the store, every item has a price tag, so the customer is always aware of his decision, which means this is not a predatory practice.\n\nBanning loss leaders, on the other hand, is catastrophic for consumers, as it will always result in prices rising. When announcing the repeal of Ireland's loss leaders prohibition, Irish Minister for Enterprise, Trade &amp; Employment Micheál Martin said, “Very simply, the [law] acted against the interests of consumers for the past 18 years.”1 Loss leaders have positive and negative effects on consumers, but a ban is all bad.\n\n1 Ireland Business News, “Groceries Order abolition.”\n", "title": "" }, { "docid": "2d436d37adb22b3957d21acedc8ce52a", "text": "business economy general house would prohibit retailers selling certain items It is not the government's place to force lifestyles on people. There is plenty of information around on what constitutes a balanced and healthy diet; people should be left to make up their own minds about what they buy with their own money. In any case, loss leaders make very little difference to the overall price comparison between processed and fresh food. Fresh food like fruit, vegetables and raw meat is expensive because it will soon rot and so it incurs higher transport and storage costs than processed food with a long shelf life. If governments want to change the balance in costs, they would be better off putting a tax on the unhealthiest foods rather than interfering arbitrarily in the realm of the marketing.\n", "title": "" }, { "docid": "bb8e07f8a953fabd8e50cd91c37ebd47", "text": "business economy general house would prohibit retailers selling certain items Loss leaders do not help lower-income customers because they are aimed at people who will buy a lot of expensive goods at the store. Patrick DeGraba of the U.S. Federal Trade Commission argues that, when retailers act strategically, loss leaders are aimed at highly profitable customers1. Retailers have no interest in targeting less well-off consumers, because they won't then spend a lot of money in the store. Therefore, they are more likely to offer a high-quality item below its true cost; this will still be too expensive for many people, though. For example, stores will offer discounts on high-quality turkeys at Thanksgiving, because people who buy them are likely to buy a lot of food. Loss leaders may provide discounts for some consumers, but prohibiting the strategy would not hurt lower-income customers. 1: Patrick DeGraba, \"Volume Discounts, Loss Leaders, and Competition for More Profitable Customers,\" Federal Trade Commission Bureau of Economics (Working Paper 260), 2003.\n", "title": "" }, { "docid": "3813e2fa9b4dbb9ebda8dd03f75f867d", "text": "business economy general house would prohibit retailers selling certain items If retailers need to unload an item, it is totally within their rights to do that, as long as they don't use that item to trick consumers into buying more expensive items. 
Selling off goods at a low price, when not planned, would also not harm producers because it would not be a case of \"retail price management (RPM),\" in which producers agree to sell the product for less to the retailer.\n", "title": "" }, { "docid": "52bf7742999ff61657fa7ce8e43d83e3", "text": "business economy general house would prohibit retailers selling certain items The use of loss leaders allows greater competition in the retail sector. It helps to drive the overall level of prices down by allowing much greater variation in pricing than would be possible if all goods had to be offered at cost price plus a small profit margin. Loss leaders also allow new entrants to make an immediate impact upon a mature marketplace dominated by a small number of entrenched incumbents, and so they are a valuable tool in maintaining price competition over the long term.\n", "title": "" }, { "docid": "7f92399f25e3b48996a090c02411bfba", "text": "business economy general house would prohibit retailers selling certain items The use of loss leaders in marketing campaigns can benefit both retailers and producers. Below-cost price offers are typically used at the introduction of new products in order to encourage consumers to try something for the first time. Whether it is a new vegetable or cheese, a different breakfast cereal or an improved type of soap powder, it is in the interest of farmers and manufacturers to build consumer awareness and market share quickly. In the long run, if consumers like the new product, prices will rise and both producers and retailers will profit from it, so it is quite reasonable that producers are asked to share in the costs of launching it at a discount.\n", "title": "" }, { "docid": "bfef386d268e15ee08cc549c0274f45b", "text": "business economy general house would prohibit retailers selling certain items Loss leaders are an inexpensive option available to less well-off customers.\n\nThe use of heavily discounted loss-leaders is good for shoppers, especially low-income consumers, who are most appreciative of a bargain that will help them stretch their limited budget. Customers are not stupid but instead canny consumers who are well able to see through the marketing ploys of the big retailers. Often price-conscious shoppers will stock up on the most heavily discounted items, but then go elsewhere for the rest of their shop. On the other hand, attempts in countries like France to regulate retailers have just resulted in protection for the existing firms that dominate the marketplace, and in a lack of competition, which drives up the cost of the weekly groceries for everyone. The same items can cost 30% more in France, where loss leading is banned, than in Germany where it is not and discount stores flourish1. Prohibiting this strategy will hurt consumers. 1: Economist, \"Purchasing-power disparity: French shoppers want lower prices, but not more competition,\" May 15, 2008.\n", "title": "" }, { "docid": "7b4b67234a1ab64f2c6e1ccc0f1efff3", "text": "business economy general house would prohibit retailers selling certain items Selling at a loss is a practical way of shifting products that have failed to sell.\n\nRetailers find themselves all the time with stock that they need to unload, that nobody is buying. This is especially a concern with items that have a sell-by date after which they may not be sold and so become worthless. 
In such a situation, selling below cost price is economically rational, as it means that the retailer realises some money on their stock rather than none at all. Visit any open-air market at 3.00 p.m. and you will see traders slashing the prices of unsold perishable goods for just this reason. If a retailer is going to sell an item below price level, it might as well use that item as a marketing device. Can you imagine the same market trader slashing his prices, but not shouting them to passersby? Sometimes retailers need to sell items below the price level, and they should be allowed to market them cleverly in order to make up for some of the loss in revenue.\n", "title": "" }, { "docid": "57804785a161f15d41267b53d30c22b7", "text": "business economy general house would prohibit retailers selling certain items Banning loss leaders will interfere in the market, causing a net economic loss for society.\n\nBy requiring retailers to sell items at least at cost level, the government is creating an artificial price floor, which will cause prices to rise and create a net loss for society. Basic economics explains that artificial price floors upset the free market, costing a net economic loss for society, which will eventually be paid by all sectors involved.\n\nThe harm that prohibiting loss leaders causes to prices is well documented. According to a study by the French newspaper La Tribune, a basket of identical items costs 30% more in France than it does in Germany, partly because of the ban on loss leaders1. In fact, this is the very reason why Ireland repealed its loss leaders ban. The Minister for Enterprise, Trade &amp; Employment said at the time, \"The single most important reason for getting rid of the [law] is that it has kept prices of groceries in Ireland at an artificially high level.\" Indeed, a study published in the British Food Journal concluded that the Irish law had caused prices to rise, and a separate study came to the same conclusion regarding France's loss leader prohibition. More generally, a report from the American Anti-Trust Institute shows that throughout history, such price laws have typically raised prices to consumers. 1 Economist . \"Purchasing-power disparity: French shoppers want lower prices, but not more competition.\" May 15, 2008.\n", "title": "" }, { "docid": "3593392c50c8c80a35e9ac4d9f37adf2", "text": "business economy general house would prohibit retailers selling certain items The use of loss leaders can have damaging social effects.\n\nTypically it is less healthy products that are heavily discounted, such as alcohol and fatty, sugary and salty processed food. Heavily processed food should cost more than fresh food, but supermarkets don't use fresh fruit or vegetables as loss leaders. The practice tends to distort the shopping behaviour of many of the poorest in society, pushing them into poor diets that lead to obesity, bad dental health and poor nutrition. Banning the practice would make it easier to encourage healthier diets and lifestyles. Selling alcohol below cost price leads to large social harms caused by alcoholism and binge-drinking. The use of alcohol as a loss leader has already been identified as a problem in some countries. 
In New Zealand, for example, Foodstuffs and Progressive Enterprises—the two companies that own all of the major supermarket chains in the country—agreed not to use alcohol as a loss leader.1 Of course companies in most countries would not agree to such a promise without being prohibited by law, and even New Zealand should go a step further by prohibiting all loss leaders, as alcohol is not the only good that can cause social harm when it is artificially inexpensive.\n\n1 Robert Smith, “Lack of loss-leader sales good news for brand conscious wine industry,”National Business Review (New Zealand), June 19, 2009\n", "title": "" }, { "docid": "b9fbb03efc84409788e45d03905d5f54", "text": "business economy general house would prohibit retailers selling certain items Banning loss leaders protects consumers from predatory marketing tactics.\n\nLoss leader strategies exploit consumers by providing partial, misleading information. Giant retailers are not charities; they do not offer heavily discounted goods in order to help the poor. Instead they have calculated that they can attract price-conscious shoppers in with headline deals on a few loss-leading basics, and then persuade them to pay over the odds on a wider range of goods with big profit margins. In this way, loss leaders are a con trick on consumers who are bewildered by deliberately confusing marketing–an onslaught of advertising and ever-changing promotions to the point that they are unable to compare the prices of rival firms and make a rational choice about where to shop. In their paper, “Loss Leading as an Exploitative Practice,” Zhijun Chen and Patrick Rey show how retailers use loss leaders to trick consumers by giving them incomplete information.1 And in the long term, by driving out smaller retailers and reducing competition in the retail sector, the practice can drive up the overall cost of essentials for everyone.\n\n1 Zhijun Chen and Patrick Rey, “Loss Leading as an Exploitative Practice,” Institut d’Economie Industrielle (IDEI Working Paper #658)\n", "title": "" }, { "docid": "b10371aaa4d09a80f24281a514a08d94", "text": "business economy general house would prohibit retailers selling certain items Banning loss leaders would help suppliers\n\nThe practice of loss leaders is bad for suppliers. Farmers and manufacturers are often forced by the dominant retail giants to participate in discount schemes, sharing the losses at the dictate of the retailer. If they refuse they will be dropped by the retailer and cut off from the marketplace. The American Antitrust Institute has concluded that these \"Resale price maintenance (RPM)\" agreements—which are agreed upon because retailers have all of the leverage—are usually illegal.1 Prohibiting loss leaders will prevent this abuse of market dominance by the big retail companies and ensure a fair deal for our farmers.\n\n1 John B. Kirkwood, Albert Foer, and Richard Burnell, “The American Antitrust Institute On the European Commission’s Proposed Block Exemption Regulation and Guidelines on Vertical Restraints,” American Antitrust Institute, September 27, 2009, page 5-6.\n", "title": "" } ]
arguana
748269ab0575bc921265bb277b363078
National “feel-good factor” Hosting very large sporting events is a great way to advertise a nation, and create a national feel-good factor. When London hosted the games in 2012, a successful event with a successful home team, there was a significant national “feel good factor” [1]. This can have the benefit of bringing a nation together; particularly important for multi-ethnic countries such as South Africa, it will bring all ethnicities together in a shared experience, helping to justify the label of ‘rainbow nation’. As Sports Minister Fikile Mbalula argues, “Sport is said to be a national religion in South Africa. In recent years it transcends race, class, language and geographical location.” [2] [1] Hart, Simon, ‘Feelgood factor at London’s Anniversary Games next weekend as a new start for drug-tainted athletics’, The Telegraph, 20 July 2013, http://www.telegraph.co.uk/sport/othersports/athletics/10192473/Feelgood-factor-at-Londons-Anniversary-Games-next-weekend-seen-as-a-new-start-for-drug-tainted-athletics.html [2] Mabalula, Fikile, ‘South Africa: Remarks By the Minister of Sport and Recreation, Honourable Mr Fikile Mbalula At the National Press Club Briefing On the 2013 Afcon At the Csir International Convention Centre’, AllAfrica, 16 January 2013, http://allafrica.com/stories/201301170342.html?page=3
[ { "docid": "444ad0ce50bc577dd67b217e0cd05669", "text": "economic policy sport olympics sport general house believes south africa should The Athens games did not create such a buzz. Many seats were empty in the games. This was in part a result of the poor performance of the host nation as Greece underperformed for an Olympic host nation, not entering the top ten of the medals table (in a games when South Africa only won one gold medal, that of their men’s 4x100m freestyle relay swimming team). Clearly this is a risk any host nation would take; the feel good factor comes from the national team doing well, not simply hosting the games.\n", "title": "" } ]
[ { "docid": "b47991f2bc4d842c3bdf202b46e1fd29", "text": "economic policy sport olympics sport general house believes south africa should Hosting can have a significant cost – the 1976 Montreal games left the city vastly in debt which it did not finish paying off until 2006 [1] . Venues may be under-used after the events, with the 2004 Athens games seeing a large number of venues as unused “white elephants” after the event [2] .\n\n[1] Davenport, 2004\n\n[2] Smith, Helena, ‘Athens 2004 Olympics: what happened after the athletes went home’, The Guardian, 9 May 2012, http://www.theguardian.com/sport/2012/may/09/athens-2004-olympics-athletes-home\n", "title": "" }, { "docid": "3cb608644e39bd0dba993f5be3d12a97", "text": "economic policy sport olympics sport general house believes south africa should South Africa has held events before, such as the World Cup – did that change perceptions of Africa? A well run games can change perceptions among those who visit but it can also damage perceptions. The South African world cup also involved slum clearance as part of a campaign of “beatification”, such actions hardly showcase a nation at its best. [1]\n\nDue to its unique history, an event in South Africa may not have a halo effect for the entire continent. A games in one city will not affect other countries, or people’s perceptions of other African countries.\n\n[1] McDougall, Dan, ‘Slum clearance, South Africa-style’, The Sunday Times, 25 April 2010, http://www.unhcr.org/cgi-bin/texis/vtx/refdaily?pass=463ef21123&amp;id=4bd52eed5\n", "title": "" }, { "docid": "b89db937b3f741b2cff8fa6ab0a9a788", "text": "economic policy sport olympics sport general house believes south africa should Some Olympic events are held outside the main city. The football tournament uses venues across other cities (in the London 2012 games, Coventry, Cardiff and Manchester were amongst the cities hosting matches), and, being landlocked, Johannesburg would have to host the sailing at another venue. Sailing being held in another city is not unusual, in 2012 the sailing was held in Weymouth and in 2008 in Qingdao.\n\nTraining camps are typically held across the whole nation, too.\n\nThe national morale boost typically permeates far wider than just the host city, including the impact in favour of a more sporting culture in the country.\n", "title": "" }, { "docid": "0fcb7c16b02c6378b559e768b4c44b5e", "text": "economic policy sport olympics sport general house believes south africa should Football is also Brazil’s national sport, and Brazil was similarly placed (22nd) in the medal table in 2012. The Olympics need not be hosted just by the countries that are most competitive in the games.\n", "title": "" }, { "docid": "d6430999e1823f48c72d2eb2e28daf77", "text": "economic policy sport olympics sport general house believes south africa should Everything costs money. 
While the costs are significant, the money spent will regenerate parts of cities, create an image of the host country as a place for business, and create a long lasting legacy through the venues and infrastructure built.\n\nWhile South Africa is not rich as the UK, Greece or Australia, its GDP per capita is around that of Brazil, which is hosting the 2016 Games.\n", "title": "" }, { "docid": "ec14fb5c0c074e8ed408636b06aa3275", "text": "economic policy sport olympics sport general house believes south africa should Economic benefits\n\nWhile hosting a major sporting event is relatively expensive (although Cape Town and Johannesburg already have a number of appropriate venues for some of the events already), hosting major sporting events creates major economic benefits. London got a £10bn economic boost from hosting the 2012 Olympics [1] . This may be higher – many of these benefits are difficult to calculate; how much of a tourism boost is a result of a successful games? Barcelona however just like London had a large boost of tourism following the 1992 Barcelona Games [2] . It raises awareness of the city, and the country, and what it offers as a tourist destination.\n\n[1] Flanders, Stephanie, ‘London 2012 Olympics ‘have boosted UK economy by £9.9bn’’, BBC News, 19 July 2013, http://www.bbc.co.uk/news/uk-23370270\n\n[2] Davenport, Coral, ‘A post-Olympic hurdle for Greece: the whopping bill’, CSMonitor, 1 September 2004, http://www.csmonitor.com/layout/set/r14/2004/0901/p07s01-woeu.html\n", "title": "" }, { "docid": "58b2faf8f05f1cbf0466d8866f0b038a", "text": "economic policy sport olympics sport general house believes south africa should Showcase for a nation and continent\n\nA key reason why countries host the Olympic games is in order to boost their image abroad – China held the 2008 Games in Beijing as part of an exercise in national promotion [1] .\n\nThis would also be an opportunity to change the perceptions of Africa amongst some elements in the outside world, from an inaccurate picture of a “third world” continent with no features other than poverty and violence to a more accurate depiction of a continent which, while having challenges, is having economic growth and advancing human development. South Africa is the best nation to showcase the development of Africa; it is Africa’s biggest economy and one of its most developed.\n\n[1] Rabkin, April, ‘Olympic Games all about China, Chinese’, SFGate, 1 August 2008, http://www.sfgate.com/news/article/Olympic-Games-all-about-China-Chinese-3274954.php\n", "title": "" }, { "docid": "7aee47f5f1be7417cff3692726c6d04c", "text": "economic policy sport olympics sport general house believes south africa should Cost of hosting\n\nThe Olympic games is an expensive thing to host. The 2012 games in London cost nearly £9bn [1] . This cost largely falls on the taxpayer. These large events are notoriously difficult to budget accurately, the 2014 Sochi Winter Olympics having gone vastly over budget with suggestions that it could cost up to $50 billion [2] .\n\nIt is too expensive to host for rich countries as it is – South Africa has a large problem with wealth inequality as it is, and is below the world average GDP per capita [3] . Although it is unlikely to reach such expense the $50 billion for the Sochi Olympics is twice the yearly South African health budget of ZAR 232.5bn. 
[4] South Africa would be better served using the money to combat HIV and poverty.\n\n[1] Gibson, Owen, ‘London 2012 Olympics will cost a total of £8.921bn, says minister’, The Guardian, 23 October 2012, http://www.theguardian.com/sport/2012/oct/23/london-2012-olympics-cost-total\n\n[2] Kollmeyer, Barbara, ‘Russia’s in-perspective price tag for four-times-overbudget Sochi Olympics: 18 Oprahs’, Marketwatch, 27 November 2013, http://blogs.marketwatch.com/themargin/2013/11/27/russias-in-perspective-price-tag-for-four-times-overbudget-sochi-olympics-18-oprahs/\n\n[3] The World Bank, ‘GDP per capital, PPP (current international $)’, date.worldbank.org, accessed 24 January 2014, http://data.worldbank.org/indicator/NY.GDP.PCAP.PP.CD?order=wbapi_data_value_2012+wbapi_data_value+wbapi_data_value-last&amp;sort=desc\n\n[4] ‘Budget 2013’, PWC, 27 February 2013, http://www.pwc.co.za/en/assets/pdf/budget-speech-summary-2013.pdf\n", "title": "" }, { "docid": "939703f3d00b55d431059c46f9bc634a", "text": "economic policy sport olympics sport general house believes south africa should Hosting only affects one city and one country\n\nUnlike a World Cup, which spreads the benefits more evenly, an Olympic games is focused on one city, generally one which is a major international city. It was expected prior to the games that 90% of economic benefits to the UK of the 2012 games would go to London [1] .\n\nIt is dubious that there would be such big benefits for the continent. South Africa is seen by some in the outside world as somewhat aloof from the rest of Africa due to its particular history, its history of apartheid being rather different from the normal course of African decolonisation. It is doubtful that the 2010 World Cup boosted perceptions of the entire continent.\n\n[1] Grobel, William, ‘What are the London 2012 Olympics worth?’, Brand Valuation News, April 2010, http://www.intangiblebusiness.com/Brand-Services/Marketing-services/Press-coverage/What-are-the-London-2012-Olympics-worth~3072.html\n", "title": "" }, { "docid": "d6a33550a87847e2ecd789b9df26a6b5", "text": "economic policy sport olympics sport general house believes south africa should The Olympics are not South Africa’s ‘national sport’\n\nSouth Africa in part hosted the World Cup because football is the national sport of the country. Sports Minister Fikile Mabalula has declared “In African popularity, the Africa Cup of Nations (AFCON) surpasses even that of a multi-sports event like the All Africa Games.” [1] While there is football in the Olympics other sports that South Africans support such as Rugby are not represented. In the 2012 Olympics South Africa was well down the medal table at 23rd. [2] While it makes sense to make a big investment for intangible benefits for a sport the country loves it makes less sense for the Olympics.\n\n[1] Mabalula, 2013, http://allafrica.com/stories/201301170342.html?page=2\n\n[2] ‘Medal Table’, BBC Sport, 13 August 2012, http://www.bbc.co.uk/sport/olympics/2012/medals/countries\n", "title": "" } ]
arguana
001304895ed4aff55af91e408b55be62
Cost of hosting The Olympic games are expensive to host. The 2012 games in London cost nearly £9bn [1]. This cost largely falls on the taxpayer. These large events are notoriously difficult to budget accurately, the 2014 Sochi Winter Olympics having gone vastly over budget, with suggestions that it could cost up to $50 billion [2]. It is expensive to host even for rich countries – South Africa has a large problem with wealth inequality as it is, and is below the world average GDP per capita [3]. Although it is unlikely to reach such expense, the $50 billion for the Sochi Olympics is twice the yearly South African health budget of ZAR 232.5bn. [4] South Africa would be better served using the money to combat HIV and poverty. [1] Gibson, Owen, ‘London 2012 Olympics will cost a total of £8.921bn, says minister’, The Guardian, 23 October 2012, http://www.theguardian.com/sport/2012/oct/23/london-2012-olympics-cost-total [2] Kollmeyer, Barbara, ‘Russia’s in-perspective price tag for four-times-overbudget Sochi Olympics: 18 Oprahs’, Marketwatch, 27 November 2013, http://blogs.marketwatch.com/themargin/2013/11/27/russias-in-perspective-price-tag-for-four-times-overbudget-sochi-olympics-18-oprahs/ [3] The World Bank, ‘GDP per capita, PPP (current international $)’, data.worldbank.org, accessed 24 January 2014, http://data.worldbank.org/indicator/NY.GDP.PCAP.PP.CD?order=wbapi_data_value_2012+wbapi_data_value+wbapi_data_value-last&sort=desc [4] ‘Budget 2013’, PWC, 27 February 2013, http://www.pwc.co.za/en/assets/pdf/budget-speech-summary-2013.pdf
[ { "docid": "d6430999e1823f48c72d2eb2e28daf77", "text": "economic policy sport olympics sport general house believes south africa should Everything costs money. While the costs are significant, the money spent will regenerate parts of cities, create an image of the host country as a place for business, and create a long lasting legacy through the venues and infrastructure built.\n\nWhile South Africa is not rich as the UK, Greece or Australia, its GDP per capita is around that of Brazil, which is hosting the 2016 Games.\n", "title": "" } ]
[ { "docid": "b89db937b3f741b2cff8fa6ab0a9a788", "text": "economic policy sport olympics sport general house believes south africa should Some Olympic events are held outside the main city. The football tournament uses venues across other cities (in the London 2012 games, Coventry, Cardiff and Manchester were amongst the cities hosting matches), and, being landlocked, Johannesburg would have to host the sailing at another venue. Sailing being held in another city is not unusual, in 2012 the sailing was held in Weymouth and in 2008 in Qingdao.\n\nTraining camps are typically held across the whole nation, too.\n\nThe national morale boost typically permeates far wider than just the host city, including the impact in favour of a more sporting culture in the country.\n", "title": "" }, { "docid": "0fcb7c16b02c6378b559e768b4c44b5e", "text": "economic policy sport olympics sport general house believes south africa should Football is also Brazil’s national sport, and Brazil was similarly placed (22nd) in the medal table in 2012. The Olympics need not be hosted just by the countries that are most competitive in the games.\n", "title": "" }, { "docid": "b47991f2bc4d842c3bdf202b46e1fd29", "text": "economic policy sport olympics sport general house believes south africa should Hosting can have a significant cost – the 1976 Montreal games left the city vastly in debt which it did not finish paying off until 2006 [1] . Venues may be under-used after the events, with the 2004 Athens games seeing a large number of venues as unused “white elephants” after the event [2] .\n\n[1] Davenport, 2004\n\n[2] Smith, Helena, ‘Athens 2004 Olympics: what happened after the athletes went home’, The Guardian, 9 May 2012, http://www.theguardian.com/sport/2012/may/09/athens-2004-olympics-athletes-home\n", "title": "" }, { "docid": "444ad0ce50bc577dd67b217e0cd05669", "text": "economic policy sport olympics sport general house believes south africa should The Athens games did not create such a buzz. Many seats were empty in the games. This was in part a result of the poor performance of the host nation as Greece underperformed for an Olympic host nation, not entering the top ten of the medals table (in a games when South Africa only won one gold medal, that of their men’s 4x100m freestyle relay swimming team). Clearly this is a risk any host nation would take; the feel good factor comes from the national team doing well, not simply hosting the games.\n", "title": "" }, { "docid": "3cb608644e39bd0dba993f5be3d12a97", "text": "economic policy sport olympics sport general house believes south africa should South Africa has held events before, such as the World Cup – did that change perceptions of Africa? A well run games can change perceptions among those who visit but it can also damage perceptions. The South African world cup also involved slum clearance as part of a campaign of “beatification”, such actions hardly showcase a nation at its best. [1]\n\nDue to its unique history, an event in South Africa may not have a halo effect for the entire continent. 
A games in one city will not affect other countries, or people’s perceptions of other African countries.\n\n[1] McDougall, Dan, ‘Slum clearance, South Africa-style’, The Sunday Times, 25 April 2010, http://www.unhcr.org/cgi-bin/texis/vtx/refdaily?pass=463ef21123&amp;id=4bd52eed5\n", "title": "" }, { "docid": "939703f3d00b55d431059c46f9bc634a", "text": "economic policy sport olympics sport general house believes south africa should Hosting only affects one city and one country\n\nUnlike a World Cup, which spreads the benefits more evenly, an Olympic games is focused on one city, generally one which is a major international city. It was expected prior to the games that 90% of economic benefits to the UK of the 2012 games would go to London [1] .\n\nIt is dubious that there would be such big benefits for the continent. South Africa is seen by some in the outside world as somewhat aloof from the rest of Africa due to its particular history, its history of apartheid being rather different from the normal course of African decolonisation. It is doubtful that the 2010 World Cup boosted perceptions of the entire continent.\n\n[1] Grobel, William, ‘What are the London 2012 Olympics worth?’, Brand Valuation News, April 2010, http://www.intangiblebusiness.com/Brand-Services/Marketing-services/Press-coverage/What-are-the-London-2012-Olympics-worth~3072.html\n", "title": "" }, { "docid": "d6a33550a87847e2ecd789b9df26a6b5", "text": "economic policy sport olympics sport general house believes south africa should The Olympics are not South Africa’s ‘national sport’\n\nSouth Africa in part hosted the World Cup because football is the national sport of the country. Sports Minister Fikile Mabalula has declared “In African popularity, the Africa Cup of Nations (AFCON) surpasses even that of a multi-sports event like the All Africa Games.” [1] While there is football in the Olympics other sports that South Africans support such as Rugby are not represented. In the 2012 Olympics South Africa was well down the medal table at 23rd. [2] While it makes sense to make a big investment for intangible benefits for a sport the country loves it makes less sense for the Olympics.\n\n[1] Mabalula, 2013, http://allafrica.com/stories/201301170342.html?page=2\n\n[2] ‘Medal Table’, BBC Sport, 13 August 2012, http://www.bbc.co.uk/sport/olympics/2012/medals/countries\n", "title": "" }, { "docid": "ec14fb5c0c074e8ed408636b06aa3275", "text": "economic policy sport olympics sport general house believes south africa should Economic benefits\n\nWhile hosting a major sporting event is relatively expensive (although Cape Town and Johannesburg already have a number of appropriate venues for some of the events already), hosting major sporting events creates major economic benefits. London got a £10bn economic boost from hosting the 2012 Olympics [1] . This may be higher – many of these benefits are difficult to calculate; how much of a tourism boost is a result of a successful games? Barcelona however just like London had a large boost of tourism following the 1992 Barcelona Games [2] . 
It raises awareness of the city, and the country, and what it offers as a tourist destination.\n\n[1] Flanders, Stephanie, ‘London 2012 Olympics ‘have boosted UK economy by £9.9bn’’, BBC News, 19 July 2013, http://www.bbc.co.uk/news/uk-23370270\n\n[2] Davenport, Coral, ‘A post-Olympic hurdle for Greece: the whopping bill’, CSMonitor, 1 September 2004, http://www.csmonitor.com/layout/set/r14/2004/0901/p07s01-woeu.html\n", "title": "" }, { "docid": "add849fa269e07f72c6b2e3724637e58", "text": "economic policy sport olympics sport general house believes south africa should National “feel-good factor”\n\nHosting very large sporting events is a great way to advertise a nation, and create a national feel-good factor. When London hosted the games in 2012, a successful event with a successful home team, there was a significant national “feel good factor” [1] . This can bring the benefit of bringing a nation together; particularly important for multi-ethnic countries such as South Africa, it will bring all ethnicities together in a shared experience helping to justify the label of ‘rainbow nation’. As Sports Minister Fikile Mbalula argues “Sport is said to be a national religion in South Africa. In recent years it transcends race, class, language and geographical location.” [2]\n\n[1] Hart, Simon, ‘Feelgood factor at London’s Anniversary Games next weekend as a new start for drug-tainted athletics’, The Telegraph, 20 July 2013, http://www.telegraph.co.uk/sport/othersports/athletics/10192473/Feelgood-factor-at-Londons-Anniversary-Games-next-weekend-seen-as-a-new-start-for-drug-tainted-athletics.html\n\n[2] Mabalula, Fikile, ‘South Africa: Remarks By the Minister of Sport and Recreation, Honourable Mr Fikile Mbalula At the National Press Club Briefing On the 2013 Afcon At the Csir International Convention Centre’, AllAfrica, 16 January 2013, http://allafrica.com/stories/201301170342.html?page=3\n", "title": "" }, { "docid": "58b2faf8f05f1cbf0466d8866f0b038a", "text": "economic policy sport olympics sport general house believes south africa should Showcase for a nation and continent\n\nA key reason why countries host the Olympic games is in order to boost their image abroad – China held the 2008 Games in Beijing as part of an exercise in national promotion [1] .\n\nThis would also be an opportunity to change the perceptions of Africa amongst some elements in the outside world, from an inaccurate picture of a “third world” continent with no features other than poverty and violence to a more accurate depiction of a continent which, while having challenges, is having economic growth and advancing human development. South Africa is the best nation to showcase the development of Africa; it is Africa’s biggest economy and one of its most developed.\n\n[1] Rabkin, April, ‘Olympic Games all about China, Chinese’, SFGate, 1 August 2008, http://www.sfgate.com/news/article/Olympic-Games-all-about-China-Chinese-3274954.php\n", "title": "" } ]
arguana