Adversarial Message Passing For Graphical Models
Bayesian inference on structured models typically relies on the ability to infer posterior distributions of underlying hidden variables. However, inference in implicit models or complex posterior distributions is hard. A popular tool for learning implicit models is the generative adversarial network (GAN), which learns the parameters of a generator by fooling a discriminator. Typically, GANs are considered to be models themselves and are not understood in the context of inference. Current techniques rely on inefficient global discrimination of joint distributions to perform learning, or only consider discriminating a single output variable. We overcome these limitations by treating GANs as a basis for likelihood-free inference in generative models and generalize them to Bayesian posterior inference over factor graphs. We propose local learning rules based on message passing that minimize a global divergence criterion involving cooperating local adversaries used to sidestep explicit likelihood evaluations. This allows us to compose models and yields a unified inference and learning framework for adversarial learning. Our framework treats model specification and inference separately and facilitates richly structured models within the family of Directed Acyclic Graphs, including components such as intractable likelihoods, non-differentiable models, simulators and generally cumbersome models. A key result of our treatment is the insight that Bayesian inference on structured models can be performed only with sampling and discrimination when using nonparametric variational families, without access to explicit distributions. As a side result, we discuss the link to likelihood maximization. These approaches hold promise to be useful in the toolbox of probabilistic modelers and enrich the gamut of current probabilistic programming applications.
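For reference, the standard (unstructured) GAN minimax objective that this adversarial treatment takes as its starting point can be written as follows; the notation is the conventional one and is not specific to the proposed message-passing formulation:

$$\min_G \max_D \;\; \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\!\left[\log D(x)\right] \;+\; \mathbb{E}_{z \sim p(z)}\!\left[\log\!\left(1 - D(G(z))\right)\right]$$

The contribution described above is, roughly, to replace this single global discriminator over the joint distribution with cooperating local adversaries attached to the factors of a graphical model.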
3D Reconstruction of Interreflection-affected Surface Concavities using Photometric Stereo
Image-based reconstruction of 3D shapes is inherently biased under the occurrence of interreflections, since the observed intensity at surface concavities consists of direct and global illumination components. This issue is commonly not considered in a Photometric Stereo (PS) framework. Under the usual assumption of only direct reflections, this corrupts the normal estimation process in concave regions and thus leads to inaccurate results. For this reason, global illumination effects need to be considered for the correct reconstruction of surfaces affected by interreflections. While there is ongoing research in the field of inverse lighting (i.e. separation of global and direct illumination components), the interreflection aspect remains oftentimes neglected in the field of 3D shape reconstruction. In this study, we present a computationally driven approach for iteratively solving this problem. Initially, we introduce a photometric stereo approach that roughly reconstructs a surface with initially unknown reflectance properties. We then show that the initial surface reconstruction can be refined iteratively by accounting for non-distant light sources and, especially, interreflections. The benefit for the reconstruction accuracy is evaluated on real Lambertian surfaces using laser range scanner data as ground truth.
Retinal Image Analysis Using Curvelet Transform and Multistructure Elements Morphology by Reconstruction
Retinal images can be used in several applications, such as ocular fundus operations as well as human recognition. They also play important roles in the detection of some diseases in early stages, such as diabetes, which can be performed by comparing the states of retinal blood vessels. Intrinsic characteristics of retinal images make the blood vessel detection process difficult. Here, we propose a new algorithm to detect the retinal blood vessels effectively. Due to the high ability of the curvelet transform in representing edges, modification of curvelet transform coefficients to enhance the retinal image edges better prepares the image for the segmentation stage. The directionality feature of the multistructure elements method makes it an effective tool in edge detection. Hence, morphology operators using multistructure elements are applied to the enhanced image in order to find the retinal image ridges. Afterward, morphological operators by reconstruction eliminate the ridges not belonging to the vessel tree while trying to preserve the thin vessels unchanged. In order to increase the efficiency of the morphological operators by reconstruction, they were applied using multistructure elements. A simple thresholding method along with connected components analysis (CCA) indicates the remaining ridges belonging to vessels. In order to utilize CCA more efficiently, we applied the CCA and length filtering locally instead of considering the whole image. Experimental results on a well-known database, DRIVE, with more than 94% accuracy achieved in about 50 s for blood vessel detection, show that blood vessels can be effectively detected by applying our method to retinal images.
Fast Anomaly Detection for Streaming Data
This paper introduces Streaming Half-Space-Trees (HS-Trees), a fast one-class anomaly detector for evolving data streams. It requires only normal data for training and works well when anomalous data are rare. The model features an ensemble of random HS-Trees, and the tree structure is constructed without any data. This makes the method highly efficient because it requires no model restructuring when adapting to evolving data streams. Our analysis shows that Streaming HS-Trees has constant amortised time complexity and constant memory requirement. When compared with a state-of-the-art method, our method performs favourably in terms of detection accuracy and runtime performance. Our experimental results also show that the detection performance of Streaming HS-Trees is not sensitive to its parameter settings.
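A minimal Python sketch of the core data structure is given below: each tree is built purely from randomly chosen split dimensions and range midpoints (no data needed), node mass counts are then filled from a reference window of the stream, and a point's anomaly score is derived from the mass of the node it falls into. Class and parameter names are illustrative assumptions, not the authors' reference implementation, and the window/ensemble bookkeeping of the full algorithm is omitted.

```python
import random

class HSNode:
    """One node of a randomly built half-space tree (illustrative sketch)."""
    def __init__(self, mins, maxs, depth, max_depth):
        self.mass = 0                          # number of reference points reaching this node
        self.left = self.right = None
        self.split_dim = self.split_val = None
        if depth < max_depth:
            self.split_dim = random.randrange(len(mins))
            lo, hi = mins[self.split_dim], maxs[self.split_dim]
            self.split_val = (lo + hi) / 2.0   # midpoint split of a random dimension
            left_maxs, right_mins = list(maxs), list(mins)
            left_maxs[self.split_dim] = self.split_val
            right_mins[self.split_dim] = self.split_val
            self.left = HSNode(mins, left_maxs, depth + 1, max_depth)
            self.right = HSNode(right_mins, maxs, depth + 1, max_depth)

    def update(self, x):
        """Record one reference-window point in the mass profile."""
        self.mass += 1
        child = self._child(x)
        if child is not None:
            child.update(x)

    def score(self, x, depth=0):
        """Score a point: deep, well-populated nodes indicate normal data."""
        child = self._child(x)
        if child is None or self.mass == 0:
            return self.mass * (2 ** depth)
        return child.score(x, depth + 1)

    def _child(self, x):
        if self.left is None:
            return None
        return self.left if x[self.split_dim] < self.split_val else self.right
```

In the full method an ensemble of such trees is maintained, scores are summed over trees (lower totals meaning more anomalous), and, roughly speaking, the mass profile of the latest window replaces the reference profile as the stream evolves.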
Growth rates of black soldier fly larvae fed on fresh human faeces and their implication for improving sanitation.
OBJECTIVES To determine the capacity of black soldier fly larvae (BSFL) (Hermetia illucens) to convert fresh human faeces into larval biomass under different feeding regimes, and to determine how effective BSFL are as a means of human faecal waste management. METHODS Black soldier fly larvae were fed fresh human faeces. The frequency of feeding, number of larvae and feeding ratio were altered to determine their effects on larval growth, prepupal weight, waste reduction, bioconversion and feed conversion rate (FCR). RESULTS The larvae that were fed a single lump amount of faeces developed into significantly larger larvae and prepupae than those fed incrementally every 2 days; however, the development into prepupae took longer. The highest waste reduction was found in the group containing the most larvae, with no difference between feeding regimes. At an estimated 90% pupation rate, the highest bioconversion (16-22%) and lowest, most efficient FCR (2.0-3.3) occurred in groups that contained 10 and 100 larvae, under both the lump-sum and incremental feeding regimes. CONCLUSION The prepupal weight, bioconversion and FCR results surpass those from previous studies on BSFL management of swine manure, chicken manure and municipal organic waste. This suggests that the use of BSFL could provide a solution to the health problems associated with poor sanitation and inadequate human waste management in developing countries.
Dynamic Access Control Policy based on Blockchain and Machine Learning for the Internet of Things
The Internet of Things (IoT) is breaking down the barriers between the real and digital worlds. However, one of the major problems that could slow down, or even halt, the development of this global wave concerns security and privacy requirements. These requirements are especially critical because smart objects may contain very intimate information or may even be responsible for protecting people's lives. In this paper, the focus is on access control in the IoT context, for which we propose a dynamic and fully distributed security policy. Our proposal is based, on the one hand, on the concept of the blockchain to ensure the distributed aspect strongly recommended in the IoT and, on the other hand, on machine learning algorithms, particularly reinforcement learning, in order to provide a dynamic, optimized and self-adjusting security policy. Keywords: Internet of Things; security; access control; dynamic policy; security policy; blockchain; machine learning; reinforcement learning
The subspace Gaussian mixture model - A structured model for speech recognition
We describe a new approach to speech recognition, in which all Hidden Markov Model (HMM) states share the same Gaussian Mixture Model (GMM) structure with the same number of Gaussians in each state. The model is defined by vectors associated with each state with a dimension of, say, 50, together with a global mapping from this vector space to the space of parameters of the GMM. This model appears to give better results than a conventional model, and the extra structure offers many new opportunities for modeling innovations while maintaining compatibility with most standard techniques.
A Solution to Separation and Multicollinearity in Multiple Logistic Regression.
In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often suffer from serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models that was shown to reduce bias and the non-existence problem. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.
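A hedged sketch of what such a double-penalized log-likelihood can look like, combining Firth's Jeffreys-prior penalty with a ridge term (the paper's exact parameterization and tuning of $\lambda$ may differ):

$$\ell^{**}(\beta) \;=\; \ell(\beta) \;+\; \tfrac{1}{2}\log\left|I(\beta)\right| \;-\; \tfrac{\lambda}{2}\,\lVert\beta\rVert_2^2,$$

where $\ell(\beta)$ is the ordinary logistic log-likelihood and $I(\beta)$ the Fisher information matrix; the Firth term counters small-sample bias and separation, while the ridge term stabilizes the estimates under multicollinearity.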
Theoretical Comparison of Testing Methods
Comparison of software testing methods is meaningful only if sound theory relates the properties compared to actual software quality. Existing comparisons typically use anecdotal foundations with no necessary relationship to quality, comparing methods on the basis of technical terms the methods themselves define. In the most seriously flawed work, one method whose efficacy is unknown is used as a standard for judging other methods! Random testing, as a method that can be related to quality (in both the conventional sense of statistical reliability, and the more stringent sense of software assurance), offers the opportunity for valid comparison.
PMCW waveform and MIMO technique for a 79 GHz CMOS automotive radar
Automotive radars in the 77-81 GHz band will be widely deployed in the coming years. This paper provides a comparison of the bi-phase modulated continuous wave (PMCW) and linear frequency-modulated continuous wave (FMCW) waveforms for these radars. The comparison covers performance, implementation and other non-technical aspects. Multiple Input Multiple Output (MIMO) radars require perfectly orthogonal waveforms on the different transmit antennas, preferably transmitting simultaneously for fast illumination. In this paper, we propose two techniques, Outer code and Range domain, to enable MIMO processing on PMCW radars. The proposed MIMO techniques are verified with both simulation and lab experiments, on a fully integrated deep-submicron CMOS integrated circuit designed for a 79 GHz PMCW radar. Our analysis shows that, although not widely used in the automotive industry, PMCW radars are advantageous for low-cost, high-volume single-chip production and offer excellent performance.
A Polynomial-Time Algorithm for Learning Noisy Linear Threshold Functions
In this paper we consider the problem of learning a linear threshold function (a half-space in n dimensions, also called a "perceptron"). Methods for solving this problem generally fall into two categories. In the absence of noise, this problem can be formulated as a Linear Program and solved in polynomial time with the Ellipsoid Algorithm or Interior Point methods. Alternatively, simple greedy algorithms such as the Perceptron Algorithm are often used in practice and have certain provable noise-tolerance properties; but their running time depends on a separation parameter, which quantifies the amount of "wiggle room" available for a solution, and can be exponential in the description length of the input. In this paper we show how simple greedy methods can be used to find weak hypotheses (hypotheses that correctly classify noticeably more than half of the examples) in polynomial time, without dependence on any separation parameter. Suitably combining these hypotheses results in a polynomial-time algorithm for learning linear threshold functions in the PAC model in the presence of random classification noise. (Also, a polynomial-time algorithm for learning linear threshold functions in the Statistical Query model of Kearns.) Our algorithm is based on a new method for removing outliers in data. Specifically, for any set $S$ of points in $\mathbb{R}^n$, each given to $b$ bits of precision, we show that one can remove only a small fraction of $S$ so that in the remaining set $T$, for every vector $v$, $\max_{x \in T} (v \cdot x)^2 \le \mathrm{poly}(n,b)\, \mathbb{E}_{x \in T}\,(v \cdot x)^2$; i.e., for any hyperplane through the origin, the maximum distance (squared) from a point in $T$ to the plane is at most polynomially larger than the average. After removing these outliers, we are able to show that a modified version of the Perceptron Algorithm finds a weak hypothesis in polynomial time, even in the presence of random classification noise.
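For concreteness, the classical Perceptron update that the paper builds on can be sketched as below; this is the textbook algorithm, not the modified, noise-tolerant variant analyzed in the paper, and the outlier-removal step is not shown.

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Classical Perceptron for labels y in {-1, +1}; X has shape (m, n)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        updated = False
        for x_i, y_i in zip(X, y):
            if y_i * np.dot(w, x_i) <= 0:   # misclassified (or on the decision boundary)
                w += y_i * x_i              # nudge the hyperplane toward the example
                updated = True
        if not updated:                     # all examples correctly classified
            break
    return w
```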
Manufacturing flexibility and business strategy: an empirical study of small and medium sized firms
This study investigates the practice of manufacturing flexibility in small and medium sized firms. Using the data collected from 87 firms from machinery and machine tool industries in Taiwan, we analyzed and prescribed the alignment of various manufacturing flexibility dimensions with business strategies. Several practical approaches to developing manufacturing flexibility in small and medium sized firms were discussed. In addition, statistical results indicate that the compatibility between business strategy and manufacturing flexibility is critical to business performance. The one-to-one relationship between business strategy and manufacturing flexibility is established to enable managers to set clear priorities in investing and developing necessary manufacturing flexibility.
APPRAISING PUBLIC VALUE: PAST, PRESENT AND FUTURES
Despite the increasing popularity of the concept of ‘public value’ within both academic and practice settings, there has to date been no formal review of the literature on its provenance, empirical basis, and application. This paper seeks to fill this gap. It provides a critical introduction to public value and its conceptual development before presenting the main elements of the published literature. Following this, a series of key areas of disagreement are discussed and implications for future research and practice put forward. The authors argue that if the espoused aspirations for the public value framework are to be realized, a concerted process of research, debate and application is required. Although some criticisms of public value are argued to be unwarranted, the authors acknowledge ongoing concerns over the apparent silence of public value on questions of power and heterogeneity, and the difficulties in empirically testing the framework’s propositions.
Validation of a new mass screening tool for cognitive impairment: Cognitive Assessment for Dementia, iPad version
BACKGROUND We have developed a new screening test for dementia that runs on an iPad and can be used for mass screening, known as the Cognitive Assessment for Dementia, iPad version (CADi). The CADi consists of items involving immediate recognition memory for three words, semantic memory, categorization of six objects, subtraction, backward repetition of digits, cube rotation, pyramid rotation, trail making A, trail making B, and delayed recognition memory for three words. The present study examined the reliability and validity of the CADi. METHODS CADi evaluations were conducted for patients with dementia, healthy subjects selected from a brain checkup system, and community-dwelling elderly people participating in health checkups. RESULTS CADi scores were lower for dementia patients than for healthy elderly individuals and correlated significantly with Mini-Mental State Examination scores. Cronbach's alpha values for the CADi were acceptable (over 0.7), and test-retest reliability was confirmed via a significant correlation between scores separated by a one-year interval. CONCLUSION These results suggest that the CADi is a useful tool for mass screening of dementia in Japanese populations.
Sketchable Histograms of Oriented Gradients for Object Detection
In this paper we investigate a new representation approach for visual object recognition. The new representation, called sketchableHoG, extends the classical histogram of oriented gradients (HoG) feature by adding two different aspects: the stability of the majority orientation and the continuity of gradient orientations. In this way, the sketchableHoG locally characterizes the complexity of an object model and introduces global structure information while still keeping simplicity, compactness and robustness. We evaluated the proposed image descriptor on the publicly available Caltech 101 dataset. The obtained results outperform the classical HoG descriptor as well as other descriptors reported in the literature.
Mutation of the gene encoding the ROR2 tyrosine kinase causes autosomal recessive Robinow syndrome
Robinow syndrome is a short-limbed dwarfism characterized by abnormal morphogenesis of the face and external genitalia, and vertebral segmentation. The recessive form of Robinow syndrome (RRS; OMIM 268310), particularly frequent in Turkey, has a high incidence of abnormalities of the vertebral column such as hemivertebrae and rib fusions, which is not seen in the dominant form. Some patients have cardiac malformations or facial clefting. We have mapped a gene for RRS to 9q21–q23 in 11 families. Haplotype sharing was observed between three families from Turkey, which localized the gene to a 4.9-cM interval. The gene ROR2, which encodes an orphan membrane-bound tyrosine kinase, maps to this region. Heterozygous (presumed gain of function) mutations in ROR2 were previously shown to cause dominant brachydactyly type B (BDB; ref. 7). In contrast, Ror2−/− mice have a short-limbed phenotype that is more reminiscent of the mesomelic shortening observed in RRS. We detected several homozygous ROR2 mutations in our cohort of RRS patients that are located upstream from those previously found in BDB. The ROR2 mutations present in RRS result in premature stop codons and predict nonfunctional proteins.
Hover, Transition, and Level Flight Control Design for a Single-Propeller Indoor Airplane
This paper presents vehicle models and test flight results for an autonomous fixed-wing airplane that is designed to take off, hover, transition to and from level-flight modes, and perch on a vertical landing platform in a highly space-constrained environment. By enabling a fixed-wing UAV to achieve these feats, the speed and range of a fixed-wing aircraft in level flight are complemented by hover capabilities that have typically been limited to rotorcraft. Flight and perch landing results are presented. All of the flights presented in this paper are performed using the MIT Real-time Autonomous Vehicle indoor test ENvironment (RAVEN), a capability that significantly eases support and maintenance of the vehicle.
Dihydrofolate-Reductase Mutations in Plasmodium knowlesi Appear Unrelated to Selective Drug Pressure from Putative Human-To-Human Transmission in Sabah, Malaysia
BACKGROUND Malaria caused by zoonotic Plasmodium knowlesi is an emerging threat in Eastern Malaysia. Despite demonstrated vector competency, it is unknown whether human-to-human (H-H) transmission is occurring naturally. We sought evidence of drug selection pressure from the antimalarial sulfadoxine-pyrimethamine (SP) as a potential marker of H-H transmission. METHODS The P. knowlesi dihydrofolate-reductase (pkdhfr) gene was sequenced from 449 P. knowlesi malaria cases from Sabah (Malaysian Borneo) and genotypes evaluated for association with clinical and epidemiological factors. Homology modelling using the pvdhfr template was used to assess the effect of pkdhfr mutations on the pyrimethamine binding pocket. RESULTS Fourteen non-synonymous mutations were detected, with the most common being at codons T91P (10.2%) and R34L (10.0%), resulting in 21 different genotypes, including the wild-type, 14 single mutants, and six double mutants. One third of the P. knowlesi infections were with pkdhfr mutants; 145 (32%) patients had single mutants and 14 (3%) had double mutants. In contrast, among the 47 P. falciparum isolates sequenced, three pfdhfr genotypes were found, with the double mutant 108N+59R being fixed and the triple mutants 108N+59R+51I and 108N+59R+164L occurring with frequencies of 4% and 8%, respectively. Two non-random spatio-temporal clusters were identified with pkdhfr genotypes. There was no association between pkdhfr mutations and hyperparasitaemia or malaria severity, both hypothesized to be indicators of H-H transmission. The orthologous loci associated with resistance in P. falciparum were not mutated in pkdhfr. Subsequent homology modelling of pkdhfr revealed gene loci 13, 53, 120, and 173 as being critical for pyrimethamine binding; however, there were no mutations at these sites among the 449 P. knowlesi isolates. CONCLUSION Although moderate diversity was observed in pkdhfr in Sabah, there was no evidence this reflected selective antifolate drug pressure in humans.
Time is of the essence: improving recency ranking using Twitter data
Realtime web search refers to the retrieval of very fresh content which is in high demand. An effective portal web search engine must support a variety of search needs, including realtime web search. However, supporting realtime web search introduces two challenges not encountered in non-realtime web search: quickly crawling relevant content and ranking documents with impoverished link and click information. In this paper, we advocate the use of realtime micro-blogging data for addressing both of these problems. We propose a method to use the micro-blogging data stream to detect fresh URLs. We also use micro-blogging data to compute novel and effective features for ranking fresh URLs. We demonstrate that these methods improve the effectiveness of the portal web search engine for realtime web search.
Cross-cultural Differences in Mental Health, Quality of Life, Empathy, and Burnout between US and Brazilian Medical Students.
OBJECTIVE This study aimed to compare mental health, quality of life, empathy, and burnout in medical students from a medical institution in the USA and another one in Brazil. METHODS This cross-cultural study included students enrolled in the first and second years of their undergraduate medical training. We evaluated depression, anxiety, and stress (DASS 21), empathy, openness to spirituality, and wellness (ESWIM), burnout (Oldenburg), and quality of life (WHOQOL-Bref) and compared them between schools. RESULTS A total of 138 Brazilian and 73 US medical students were included. The comparison between all US medical students and all Brazilian medical students revealed that Brazilians reported more depression and stress and US students reported greater wellness, less exhaustion, and greater environmental quality of life. In order to address a possible response bias favoring respondents with better mental health, we also compared all US medical students with the 50% of Brazilian medical students who reported better mental health. In this comparison, we found Brazilian medical students had higher physical quality of life and US students again reported greater environmental quality of life. Cultural, social, infrastructural, and curricular differences were compared between institutions. Some noted differences were that students at the US institution were older and were exposed to smaller class sizes, earlier patient encounters, problem-based learning, and psychological support. CONCLUSION We found important differences between Brazilian and US medical students, particularly in mental health and wellness. These findings could be explained by a complex interaction between several factors, highlighting the importance of considering cultural and school-level influences on well-being.
A Deep Learning Approach for Subject Independent Emotion Recognition from Facial Expressions
This paper proposes Deep Learning (DL) models for emotion recognition from facial expressions. We have focused on two “deep” neural models: Convolutional Neural Networks (CNN) and Deep Belief Networks (DBN). For each of these DL neural models, we have chosen several architectures. We have considered both the case of subject-independent emotion recognition and that of subject-dependent emotion recognition. We selected the Support Vector Machine (SVM) as a benchmark algorithm. We have chosen the JAFFE database to evaluate the above proposed models for person independent/dependent facial expression recognition. Using the DL approach, we have obtained a subject-independent emotion recognition score of 65.22%, corresponding to an increase of 6% over the best score given by the considered benchmark methods. For person-dependent emotion recognition, the DL model leads to a recognition score of 95.71%, representing an increase of 3% over the best of the chosen benchmark methods. Keywords: Facial expression, Deep Learning (DL), Convolutional Neural Networks (CNN), Deep Belief Networks (DBN), subject independent/dependent emotion recognition
Host based intrusion detection using RBF neural networks
This paper proposes a novel approach to host-based intrusion detection that uses Radial Basis Function Neural Networks as profile containers. The system works on system calls made by privileged UNIX processes and trains the neural network on their basis. An algorithm is proposed that prioritizes the speed and efficiency of the training phase while also limiting the false alarm rate. In the detection phase, the algorithm uses a window size to detect intrusions that are temporally located. A threshold is also implemented that is altered on the basis of the process behavior. The system is tested with attacks that target different intrusion scenarios. The results show that Radial Basis Function Neural Networks provide a better detection rate and much lower training time compared to other soft computing methods. The robustness of the training phase is evident from the low false alarm rate and the high detection capability of the application.
The effect of landiolol on cerebral blood flow in patients undergoing off-pump coronary artery bypass surgery
To examine the effect of landiolol on cerebral blood flow in patients with normal or deteriorated cardiac function. Thirty adult patients who were diagnosed with angina pectoris and who underwent elective off-pump coronary artery bypass surgery were studied. Patients were divided into two groups, one with a preoperative left ventricular ejection fraction (EF) of 50% or higher (normal EF group; n = 15) and the other with an EF of less than 50% (low EF group; n = 15). The mean cerebral blood flow velocity (Vmca) and pulsatility index (PI) in the middle cerebral artery were recorded using transcranial Doppler ultrasonography (TCD). Individual hemodynamic data were obtained using a pulmonary arterial catheter. In both groups, landiolol produced a significant decrease in heart rate (HR), which then returned to baseline 15 min after administration was completed. A significant decrease in mean arterial pressure occurred in the low EF group, but the decrease was within 30% of the baseline. In the normal EF group, there was no decrease in cardiac index (CI), whereas in the low EF group, CI significantly decreased along with the decrease in HR. There were no significant differences in Vmca and PI between the two groups. Continuous administration of landiolol at a dose of 0.04 mg/kg/min after 1 min rapid IV administration at a dose of 0.125 mg/kg/min decreases HR without causing aggravation of CBF during treatment of intraoperative tachycardia in patients with normal and deteriorated cardiac function.
An Empirical Study on Measuring the Success of Knowledge Repository Systems
This paper proposes and empirically validates a Knowledge Repository Systems (KRS) Success Model. Based on Mason's information influence theory, we developed a more comprehensive framework for KRS success measurement by combining DeLone and McLean's IS Success Model with Markus's knowledge reusability concept. The data were collected through a survey of 110 KRS users in China and Singapore. The empirical results demonstrate that KRS success should be measured at different stages of knowledge reuse as well as through a series of influences on KRS users, and that these KRS success dimensions are interrelated.
Multiview Hessian Regularization for Image Annotation
The rapid development of computer hardware and Internet technology makes large scale data dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) therefore received intensive attention in recent years and was successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smoothes the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, it has been observed that LR biases the classification function toward a constant function, which possibly results in poor generalization. In addition, LR was developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR terms, each of which is obtained from a particular view of the instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.
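As a hedged sketch, a multiview manifold-regularized objective of the kind described can be written in the following general form, where $H^{(v)}$ is the Hessian-energy matrix computed from view $v$, $\alpha_v$ are nonnegative combination weights, and $\ell$ is the loss of the chosen learner (kernel least squares or SVM); the paper's exact formulation and constraints may differ:

$$\min_{f,\,\alpha}\; \sum_{i=1}^{l} \ell\bigl(f(x_i), y_i\bigr) \;+\; \gamma_A \lVert f\rVert_K^2 \;+\; \gamma_I\, \mathbf{f}^{\top}\!\Bigl(\sum_{v=1}^{V} \alpha_v H^{(v)}\Bigr)\mathbf{f}, \qquad \alpha_v \ge 0,\;\; \sum_{v=1}^{V}\alpha_v = 1.$$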
Accurate Supervised and Semi-Supervised Machine Reading for Long Documents
We introduce a hierarchical architecture for machine reading capable of extracting precise information from long documents. The model divides the document into small, overlapping windows and encodes all windows in parallel with an RNN. It then attends over these window encodings, reducing them to a single encoding, which is decoded into an answer using a sequence decoder. This hierarchical approach allows the model to scale to longer documents without increasing the number of sequential steps. In a supervised setting, our model achieves state of the art accuracy of 76.8 on the WikiReading dataset. We also evaluate the model in a semi-supervised setting by downsampling the WikiReading training set to create increasingly smaller amounts of supervision, while leaving the full unlabeled document corpus to train a sequence autoencoder on document windows. We evaluate models that can reuse autoencoder states and outputs without finetuning their weights, allowing for more efficient training and inference.
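A minimal PyTorch sketch of the windowed encode-then-attend idea follows; the dimensions, window size and stride are illustrative assumptions, the answer decoder is omitted, and windows are encoded in a simple loop rather than in parallel as in the paper.

```python
import torch
import torch.nn as nn

class HierarchicalReader(nn.Module):
    """Sketch: encode overlapping token windows with an RNN, then attend over them."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, window=30, stride=15):
        super().__init__()
        self.window, self.stride = window, stride
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.attn = nn.Linear(hid_dim, 1)             # scores each window encoding

    def forward(self, tokens):                        # tokens: (doc_len,) LongTensor
        starts = range(0, max(1, len(tokens) - self.window + 1), self.stride)
        encodings = []
        for i in starts:                              # the paper encodes windows in parallel
            emb = self.embed(tokens[i:i + self.window]).unsqueeze(0)  # (1, window, emb)
            _, h = self.encoder(emb)                  # h: (1, 1, hid_dim)
            encodings.append(h.squeeze(0))            # (1, hid_dim)
        H = torch.cat(encodings, dim=0)               # (num_windows, hid_dim)
        weights = torch.softmax(self.attn(H), dim=0)  # attention over windows
        return (weights * H).sum(dim=0)               # single document encoding
```

The returned document vector would then be passed to a sequence decoder to produce the answer; tail tokens beyond the last full window are ignored in this simplified sketch.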
Automatic logging of operating system effects to guide application-level architecture simulation
Modern architecture research relies heavily on application-level detailed pipeline simulation. A time-consuming part of building a simulator is correctly emulating the operating system effects, which is required even if the goal is to simulate just the application code, in order to achieve functional correctness of the application's execution. Existing application-level simulators require manually hand coding the emulation of each and every possible system effect (e.g., system call, interrupt, DMA transfer) that can impact the application's execution. Developing such an emulator for a given operating system is a tedious exercise, and it can also be costly to maintain it to support newer versions of that operating system. Furthermore, porting the emulator to a completely different operating system might involve building it all together from scratch. In this paper, we describe a tool that can automatically log operating system effects to guide architecture simulation of application code. The benefits of our approach are: (a) we do not have to build or maintain any infrastructure for emulating the operating system effects, (b) we can support simulation of more complex applications on our application-level simulator, including those applications that use asynchronous interrupts, DMA transfers, etc., and (c) using the system effects logs collected by our tool, we can deterministically re-execute the application to guide architecture simulation that has reproducible results.
Inhibition of breast cancer cell migration by activation of cAMP signaling
Almost all deaths from breast cancer arise from metastasis of the transformed cells to other sites in the body. Hence, uncovering a means of inhibiting breast cancer cell migration would provide a significant advance in the treatment of this disease. Stimulation of the cAMP signaling pathway has been shown to inhibit migration and motility of a number of cell types. A very effective way of selectively stimulating cAMP signaling is through inhibition of cyclic nucleotide phosphodiesterases (PDEs). Therefore, we examined full expression profiles of all known PDE genes at the mRNA and protein levels in four human breast cancer cell lines and eight patients’ breast cancer tissues. By these analyses, expression of almost all PDE genes was seen in both cell lines and tissues. In the cell lines, appreciable expression was seen for PDEs 1C, 2A, 3B, 4A, 4B, 4D, 5A, 6B, 6C, 7A, 7B, 8A, 9A, 10A, and 11A. In patients’ tissues, appreciable expression was seen for PDEs 1A, 3B, 4A, 4B, 4C, 4D, 5A, 6B, 6C, 7A, 7B, 8A, 8B, and 9A. PDE8A mRNA in particular is prominently expressed in all cell lines and patients’ tissue samples examined. We show here that stimulation of cAMP signaling with cAMP analogs, forskolin, and PDE inhibitors, including selective inhibitors of PDE3, PDE4, PDE7, and PDE8, inhibit aggressive triple negative MDA-MB-231 breast cancer cell migration. Under the same conditions, these agents had little effect on breast cancer cell proliferation. This study demonstrates that PDE inhibitors inhibit breast cancer cell migration, and thus may be valuable therapeutic targets for inhibition of breast cancer metastasis. Since PDE8A is expressed in all breast cancer samples, and since dipyridamole, which inhibits PDE8, and PF-04957325, a selective PDE8 inhibitor, both inhibit migration, it suggests that PDE8A may be a valuable novel target for treatment of this disease.
A Hybrid Approach for Automatic Classification of Brain MRI Using Genetic Algorithm and Support Vector Machine
We propose a hybrid approach for the classification of brain tissues in magnetic resonance images (MRI) based on a genetic algorithm (GA) and a support vector machine (SVM). A wavelet-based texture feature set is derived. The optimal texture features are extracted from normal and tumor regions by using the spatial gray level dependence method (SGLDM). These features are given as input to the SVM classifier. The choice of features, which constitutes a major problem in classification techniques, is solved by using the GA. These optimal features are used to classify the brain tissues into normal, benign or malignant tumor. The performance of the algorithm is evaluated on a series of brain tumor images.
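The GA-wrapped feature selection around an SVM can be sketched as follows; the wavelet/SGLDM texture-extraction stage is assumed to have already produced the feature matrix X, and the GA operators and parameter values are illustrative choices rather than those of the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ga_feature_selection(X, y, n_gen=30, pop_size=20, p_mut=0.05, rng=None):
    """Toy GA wrapper around an SVM: individuals are binary feature masks."""
    rng = rng or np.random.default_rng(0)
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n_feat))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0                                  # empty feature set is useless
        return cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()

    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feat) < p_mut           # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents, children])
    return max(pop, key=fitness).astype(bool)
```

The returned boolean mask selects the feature subset on which the final SVM classifier would be trained.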
Learning Universal Adversarial Perturbations with Generative Models
Neural networks are known to be vulnerable to adversarial examples, inputs that have been intentionally perturbed to remain visually similar to the source input, but cause a misclassification. It was recently shown that, given a dataset and classifier, there exist so-called universal adversarial perturbations: a single perturbation that causes a misclassification when applied to any input. In this work, we introduce universal adversarial networks, a generative network that is capable of fooling a target classifier when its generated output is added to a clean sample from a dataset. We show that this technique improves on known universal adversarial attacks.
Data Compression
This paper surveys a variety of data compression methods spanning almost 40 years of research, from the work of Shannon, Fano, and Huffman in the late 1940s to a technique developed in 1986. The aim of data compression is to reduce redundancy in stored or communicated data, thus increasing effective data density. Data compression has important application in the areas of file storage and distributed systems. Concepts from information theory as they relate to the goals and evaluation of data compression methods are discussed briefly. A framework for evaluation and comparison of methods is constructed and applied to the algorithms presented. Comparisons of both theoretical and empirical natures are reported, and possibilities for future research are suggested.
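As one concrete example from the family of methods surveyed, Huffman coding assigns shorter codewords to more frequent symbols; a compact Python sketch:

```python
import heapq
from collections import Counter

def _symbols(node):
    sym = node[2]
    return sym if isinstance(sym, list) else [sym]

def huffman_code(text):
    """Build a Huffman code (one of the classic methods surveyed) for the given text."""
    heap = [[freq, i, sym] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    codes = {sym: "" for _, _, sym in heap}
    if len(heap) == 1:                       # degenerate single-symbol alphabet
        codes[heap[0][2]] = "0"
        return codes
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for sym in _symbols(lo):             # extend codes of the merged subtrees
            codes[sym] = "0" + codes[sym]
        for sym in _symbols(hi):
            codes[sym] = "1" + codes[sym]
        heapq.heappush(heap, [lo[0] + hi[0], next_id, _symbols(lo) + _symbols(hi)])
        next_id += 1
    return codes

print(huffman_code("abracadabra"))
```

Running it on "abracadabra" shows the frequent symbol 'a' ending up with the shortest codeword, which is exactly the redundancy reduction the survey describes.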
A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots
A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance of the visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), which is the effective and efficient algorithm that was proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly solved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.
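A small numpy sketch of the second-order (constant-acceleration) Kalman predict/update cycle used to propose candidate patches is given below for a single image coordinate; the noise parameters and state layout are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def make_kalman(dt=1.0, q=1e-2, r=1.0):
    """Constant-acceleration Kalman model for one image coordinate.
    State x = [position, velocity, acceleration]; dt, q, r are illustrative values."""
    F = np.array([[1, dt, 0.5 * dt ** 2],
                  [0, 1,  dt],
                  [0, 0,  1]], dtype=float)   # second-order motion model
    H = np.array([[1.0, 0.0, 0.0]])           # only the position is observed
    Q = q * np.eye(3)                         # process noise
    R = np.array([[r]])                       # measurement noise
    return F, H, Q, R

def predict(x, P, F, Q):
    return F @ x, F @ P @ F.T + Q             # predicted state and covariance

def update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# The predicted position would be used to place candidate patches for the CT tracker.
```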
Controlled medial branch anesthetic block in the diagnosis of chronic lumbar facet joint pain: the value of a three-month follow-up
OBJECTIVES To verify the incidence of facetary and low back pain after a controlled medial branch anesthetic block in a three-month follow-up and to verify the correlation between the positive results and the demographic variables. METHODS Patients with chronic lumbar pain underwent a sham blockade (with a saline injection) and then a controlled medial branch block. Their symptoms were evaluated before and after the sham injection and after the real controlled medial branch block; the symptoms were reevaluated after one day and one week, as well as after one, two and three months using the visual analog scale. We searched for an association between the positive results and the demographic characteristics of the patients. RESULTS A total of 104 controlled medial branch blocks were performed and 54 patients (52%) demonstrated >50% improvements in pain after the blockade. After three months, lumbar pain returned in only 18 individuals, with visual analogue scale scores >4. Therefore, these patients were diagnosed with chronic facet low back pain. The three months of follow-up after the controlled medial branch block excluded 36 patients (67%) with false-positive results. The results of the controlled medial branch block were not correlated with sex, age, pain duration or work disability but were correlated with patient age (p<0.05). CONCLUSION Patient diagnosis with a controlled medial branch block proved to be effective but was not associated with any demographic variables. A three-month follow-up is required to avoid a high number of false positives.
Updates in ANCA-associated vasculitis.
Antineutrophil cytoplasm antibody (ANCA)-associated vasculitides are small-vessel vasculitides that include granulomatosis with polyangiitis (formerly Wegener's granulomatosis), microscopic polyangiitis, and eosinophilic granulomatosis with polyangiitis (Churg-Strauss syndrome). Renal-limited ANCA-associated vasculitides can be considered the fourth entity. Despite their rarity and still unknown cause(s), research pertaining to ANCA-associated vasculitides has been very active over the past decades. The pathogenic role of antimyeloperoxidase ANCA (MPO-ANCA) has been supported using several animal models, but that of antiproteinase 3 ANCA (PR3-ANCA) has not been as strongly demonstrated. Moreover, some MPO-ANCA subsets, which are directed against a few specific MPO epitopes, have recently been found to be better associated with disease activity, but a different method than the one presently used in routine detection is required to detect them. B cells possibly play a major role in the pathogenesis because they produce ANCAs, as well as neutrophil abnormalities and imbalances in different T-cell subtypes [T helper (Th)1, Th2, Th17, regulatory cluster of differentiation (CD)4+ CD25+ forkhead box P3 (FoxP3)+ T cells] and/or cytokine-chemokine networks. The alternative complement pathway is also involved, and its blockade has been shown to prevent renal disease in an MPO-ANCA murine model. Other recent studies suggested strongest genetic associations by ANCA type rather than by clinical diagnosis. The induction treatment for severe granulomatosis with polyangiitis and microscopic polyangiitis is relatively well codified but does not (yet) really differ by precise diagnosis or ANCA type. It comprises glucocorticoids combined with another immunosuppressant, cyclophosphamide or rituximab. The choice between the two immunosuppressants must consider the comorbidities, past exposure to cyclophosphamide for relapsers, plans for pregnancy, and also the cost of rituximab. Once remission is achieved, maintenance strategy following cyclophosphamide-based induction relies on less toxic agents such as azathioprine or methotrexate. The optimal maintenance strategy following rituximab-based induction therapy remains to be determined. Preliminary results on rituximab for maintenance therapy appear promising. Efforts are still under way to determine the optimal duration of maintenance therapy, ideally tailored according to the characteristics of each patient and the previous treatment received.
LEADER-MEMBER EXCHANGE AS A MEDIATOR OF THE RELATIONSHIP BETWEEN TRANSFORMATIONAL LEADERSHIP AND FOLLOWERS' PERFORMANCE AND ORGANIZATIONAL CITIZENSHIP BEHAVIOR
We developed a model in which leader-member exchange mediated between perceived transformational leadership behaviors and followers’ task performance and organizational citizenship behaviors. Our sample comprised 162 leader-follower dyads within organizations situated throughout the People’s Republic of China. We showed that leader-member exchange fully mediated between transformational leadership and task performance as well as organizational citizenship behaviors. Implications for the theory and practice of leadership are discussed, and future research directions offered.
Death by Crucifixion: View of the Medicolegal Expert
Being a cornerstone of the New Testament and the Christian religion, the evangelical narration of Jesus Christ's crucifixion has drawn the attention of many millions of people, both Christians and adherents of other religions and convictions, for almost two thousand years. While in past centuries the crucifixion was considered mainly from theological and historical positions, the twentieth century was marked by a surge of medical and biological research devoted to the thanatogenesis of crucifixion. However, careful analysis of the suggested concepts of death at the crucifixion shows that not all of them are well founded. Moreover, some authors at times do not consider available historical facts. Not only is an analysis of the original Greek text of the Gospel absent from the published works, but authors sometimes ignore the Gospel itself.
Nymble: a High-Performance Learning Name-finder
This paper presents a statistical, learned approach to finding names and other nonrecursive entities in text (as per the MUC-6 definition of the NE task), using a variant of the standard hidden Markov model. We present our justification for the problem and our approach, a detailed discussion of the model itself and finally the successful results of this new approach.
A Real Time Audio Fingerprinting System for Advertisement Tracking and Reporting in FM Radio
In this paper we present a system designed to detect, identify and track commercial segments transmitted by a radio station. The program is entirely written in Visual C++ and uses state of the art audio fingerprinting technologies to achieve a great performance, being able to operate several times faster than real time while keeping a moderate computational load.
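A very small sketch of the kind of spectral-peak fingerprinting such systems rely on could look like the following in Python; the product's actual algorithm is not disclosed in the abstract, so parameters and the peak-hashing strategy are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import maximum_filter

def fingerprint(samples, fs, peak_neighborhood=20):
    """Keep local maxima of the log-spectrogram and return (freq_bin, time_bin) peaks."""
    f, t, Sxx = spectrogram(samples, fs, nperseg=1024, noverlap=512)
    log_S = np.log(Sxx + 1e-10)
    local_max = maximum_filter(log_S, size=peak_neighborhood) == log_S
    peaks = np.argwhere(local_max & (log_S > log_S.mean()))
    # A real system would pair nearby peaks into hashed (f1, f2, dt) landmarks; here we
    # simply return the set of peak coordinates.
    return {(int(fb), int(tb)) for fb, tb in peaks}
```

A production system would match such landmark hashes against a database of reference advertisements, using time-offset voting to decide which commercial segment is playing.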
A di/dt Feedback-Based Active Gate Driver for Smart Switching and Fast Overcurrent Protection of IGBT Modules
This paper presents an active gate driver (AGD) for IGBT modules to improve their overall performance under normal condition as well as fault condition. Specifically, during normal switching transients, a di/dt feedback controlled current source and current sink is introduced together with a push-pull buffer for dynamic gate current control. Compared to a conventional gate drive strategy, the proposed one has the capability of reducing the switching loss, delay time, and Miller plateau duration during turn-on and turn-off transient without sacrificing current and voltage stress. Under overcurrent condition, it provides a fast protection function for IGBT modules based on the evaluation of fault current level through the di/dt feedback signal. Moreover, the AGD features flexible protection modes, which overcomes the interruption of converter operation in the event of momentary short circuits. A step-down converter is built to evaluate the performance of the proposed driving schemes under various conditions, considering variation of turn-on/off gate resistance, current levels, and short-circuit fault types. Experimental results and detailed analysis are presented to verify the feasibility of the proposed approach.
Developing mental health mobile apps: Exploring adolescents' perspectives
Mobile applications or 'apps' have significant potential for use in mental health interventions with adolescents. However, there is a lack of research exploring end users' needs from such technologies. The aim of this study was to explore adolescents' needs and concerns in relation to mental health mobile apps. Five focus groups were conducted with young people aged 15-16 years (N = 34, 60% male). Participants were asked about their views in relation to the use of mental health mobile technologies and were asked to give their responses to a mental health app prototype. Participants identified (1) safety, (2) engagement, (3) functionality, (4) social interaction, (5) awareness, (6) accessibility, (7) gender and (8) young people in control as important factors. Understanding end users' needs and concerns in relation to this topic will inform the future development of youth-oriented mental health apps that are acceptable to young people.
Many Facets of Complexity in Logic
There are many ways to define complexity in logic. In finite model theory, it is the complexity of describing properties, whereas in proof complexity it is the complexity of proving properties in a proof system. Here we consider several notions of complexity in logic, the connections among them, and their relationship with computational complexity. In particular, we show how the complexity of logics in the setting of finite model theory is used to obtain results in bounded arithmetic, stating which functions are provably total in certain weak systems of arithmetic. For example, the transitive closure function (testing reachability between two given points in a directed graph) is definable using only NL-concepts (where NL is the non-deterministic log-space complexity class), and its totality is provable within NL-reasoning.
Advances in Computational Stereo
Extraction of three-dimensional structure of a scene from stereo images is a problem that has been studied by the computer vision community for decades. Early work focused on the fundamentals of image correspondence and stereo geometry. Stereo research has matured significantly throughout the years, and many advances in computational stereo continue to be made, allowing stereo to be applied to new and more demanding problems. In this paper, we review recent advances in computational stereo, focusing primarily on three important topics: correspondence methods, methods for occlusion, and real-time implementations. Throughout, we present tables that summarize and draw distinctions among key ideas and approaches. Where available, we provide comparative analyses, and we make suggestions for analyses yet to be performed.
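As a concrete instance of the simplest class of correspondence methods discussed (local area matching), a naive SSD block-matching disparity search over rectified images can be sketched as follows; it is deliberately unoptimized and ignores the occlusion handling and real-time concerns the survey covers.

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=48, block=7):
    """SSD block matching on rectified grayscale images (numpy arrays of equal shape)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
            best, best_d = np.inf, 0
            for d in range(max_disp):                 # search along the epipolar line
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.float64)
                ssd = np.sum((patch - cand) ** 2)     # sum of squared differences
                if ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp
```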
GAMIFICATION IN LOGISTICS AND SUPPLY CHAIN EDUCATION: EXTENDING ACTIVE LEARNING
Engagement with users involved in an activity has become increasingly important, particularly in Higher Education. We review the concept of gamification and outline several existing applications. These incorporate game elements into existing systems and tasks in a way that increases user engagement in the process. Current approaches in logistics and supply chain education are discussed in relation to active learning. We develop a framework that combines several gamification elements that can be relatively easily incorporated into existing approaches and learning management systems (LMSs) in ways that aim to increase engagement and extend active learning. This framework and the relationships between the elements provide fertile ground for further research.
Evaluation of Water Quality: Physico-Chemical Characteristics of Ganga River at Kanpur by using Correlation Study
We present an extensive investigation of the physico-chemical parameters of water samples from the Ganga river at Kanpur. Water samples were collected from the Jalsansthan Benajhawar Kanpur sampling station during the pre-monsoon (April-May), monsoon (July-August) and post-monsoon (October-November) seasons of 2008. Correlation coefficients were calculated between different pairs of parameters to identify the highly correlated and interrelated water quality parameters, and a t-test was applied to check significance. The observed values of physico-chemical parameters such as pH, temperature, turbidity, total hardness (TH), iron, chloride, total dissolved solids (TDS), Ca, Mg, SO4, NO3, F, total alkalinity (TA), oxygen consumption (OC) and suspended solids (SS) were compared with the standard values recommended by the World Health Organization (WHO). Significant positive correlations hold for TA with Cl, Mg, Ca, TH, TDS, fluoride and OC. Significant negative correlations were found between SS and chloride, Mg, TDS, fluoride and OC. All physico-chemical parameters for the pre-monsoon, monsoon and post-monsoon seasons are within the highest desirable or maximum permissible limits set by WHO, except turbidity, which was high, while NO3, Cl and F are below the values prescribed by WHO.
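The correlation-with-significance computation described can be reproduced as follows; the two parameter series in the example are hypothetical values for illustration, not measurements from the study.

```python
import numpy as np
from scipy import stats

def correlation_with_significance(a, b):
    """Pearson correlation between two parameter series plus the t-statistic used to
    test whether the correlation differs significantly from zero."""
    r, p_value = stats.pearsonr(a, b)
    n = len(a)
    t_stat = r * np.sqrt((n - 2) / (1 - r ** 2))   # equivalent t-test for r
    return r, t_stat, p_value

# Hypothetical example with two measured parameters (e.g. TDS and total alkalinity):
tds = [310, 325, 298, 340, 355, 330]
ta = [180, 190, 172, 201, 210, 195]
print(correlation_with_significance(tds, ta))
```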
Swimming and Persons with Mild Persistent Asthma
The aim of our study was to analyze the effect of recreational swimming on lung function and bronchial hyperresponsiveness (BHR) in patients with mild persistent asthma. This study included 65 patients with mild persistent asthma, who were divided into two groups: experimental group A (n = 45) and control group B (n = 20). Patients from both groups were treated with low doses of inhaled corticosteroids (ICS) and the short-acting beta2 agonist salbutamol as needed. Our program for patients in group A combined asthma education with swimming (twice a week on a 1-h basis for the following 6 months). At the end of the study, in Group A, we found a statistically significant increase of lung function parameters FEV1 (forced expiratory volume in 1 sec) (3.55 vs. 3.65) (p < 0.01), FVC (forced vital capacity) (4.27 vs. 4.37) (p < 0.05), PEF (peak expiratory flow) (7.08 vs. 7.46) (p < 0.01), and statistically significant decrease of BHR (PD20 0.58 vs. 2.01) (p < 0.001). In Group B, there was a statistically significant improvement of FEV1 3.29 vs. 3.33 (p < 0.05) and although FVC, FEV1/FVC, and PEF were improved, it was not significant. When Groups A and B were compared at the end of the study, there was a statistically significant difference of FVC (4.01 vs. 4.37), FEV1 (3.33 vs. 3.55), PEF (6.79 vs. 7.46), and variability (p < 0.001), and statistically significantly decreased BHR in Group A (2.01 vs. 1.75) (p < 0.001). Engagement of patients with mild persistent asthma in recreational swimming in nonchlorinated pools, combined with regular medical treatment and education, leads to better improvement of their parameters of lung function and also to a more significant decrease of their airway hyperresponsiveness compared to patients treated with traditional medicine.
How effective is the Grey Wolf optimizer in training multi-layer perceptrons
This paper employs the recently proposed Grey Wolf Optimizer (GWO) for training a Multi-Layer Perceptron (MLP) for the first time. Eight standard datasets, including five classification and three function-approximation datasets, are utilized to benchmark the performance of the proposed method. For verification, the results are compared with some of the most well-known evolutionary trainers: Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Ant Colony Optimization (ACO), Evolution Strategy (ES), and Population-based Incremental Learning (PBIL). The statistical results show that the GWO algorithm is able to provide very competitive results in terms of improved local optima avoidance. The results also demonstrate that the proposed trainer achieves a high level of accuracy in classification and function approximation.
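A compact sketch of the GWO position-update loop, applied to a flat vector of MLP weights whose fitness is the training error, is given below; the population size, bounds and iteration counts are illustrative, and the MLP encoding/decoding itself is left to the supplied fitness function.

```python
import numpy as np

def gwo_train(fitness, dim, n_wolves=30, n_iter=200, lb=-1.0, ub=1.0, rng=None):
    """Grey Wolf Optimizer over a flat weight vector (e.g. all MLP weights and biases).
    `fitness` returns the training error to minimize."""
    rng = rng or np.random.default_rng(0)
    wolves = rng.uniform(lb, ub, size=(n_wolves, dim))
    scores = np.array([fitness(w) for w in wolves])

    for it in range(n_iter):
        a = 2.0 - 2.0 * it / n_iter                      # linearly decreases from 2 to 0
        order = np.argsort(scores)
        alpha = wolves[order[0]].copy()                  # three best wolves lead the pack
        beta, delta = wolves[order[1]].copy(), wolves[order[2]].copy()
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new_pos += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new_pos / 3.0, lb, ub)   # average of the three estimates
            scores[i] = fitness(wolves[i])
    best = np.argmin(scores)
    return wolves[best], scores[best]

# Example: minimize a simple quadratic as a stand-in for an MLP's training MSE.
w_best, err = gwo_train(lambda w: float(np.sum(w ** 2)), dim=10)
```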
Information Systems Security Design Methods: Implications for Information Systems Development
The security of information systems is a serious issue because computer abuse is increasing. It is important, therefore, that systems analysts and designers develop expertise in methods for specifying information systems security. The characteristics found in three generations of general information system design methods provide a framework for comparing and understanding current security design methods. These methods include approaches that use checklists of controls, divide functional requirements into engineering partitions, and create abstract models of both the problem and the solution. Comparisons and contrasts reveal that advances in security methods lag behind advances in general systems development methods. This analysis also reveals that more general methods fail to consider security specifications rigorously.
Linking Cybersecurity Knowledge: Cybersecurity Information Discovery Mechanism
To cope with the increasing amount of cyber threats, organizations need to share cybersecurity information beyond the borders of organizations, countries, and even languages. Assorted organizations have built repositories that store and provide XML-based cybersecurity information on the Internet. Among them are NVD [1], OSVDB [2], and JVN [3], and more cybersecurity information from various organizations in various countries will become available on the Internet. However, users are unaware of all of them. To advance information sharing, the parties who need cybersecurity information must be able to identify and locate it across such repositories and then obtain it over networks. This paper proposes a discovery mechanism, which identifies and locates sources and types of cybersecurity information and exchanges the information over networks. The mechanism uses the ontology of cybersecurity information [4] to incorporate assorted formats of such information so that it can maintain future extensibility. It generates RDF-based metadata from XML-based cybersecurity information through the use of XSLT. This paper also introduces an implementation of the proposed mechanism and discusses its extensibility and usability.
Exploiting Similarities among Languages for Machine Translation
Dictionaries and phrase tables are the basis of modern statistical machine translation systems. This paper develops a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data. It uses distributed representation of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs.
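The core step can be sketched very compactly: learn a linear map W between two monolingual embedding spaces from a small bilingual dictionary by least squares, then translate by mapping a source vector and taking its nearest target-space neighbors. The embeddings below are random stand-ins for real word vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embeddings: n dictionary pairs, source dim d_s, target dim d_t.
n, d_s, d_t = 5000, 300, 300
X = rng.normal(size=(n, d_s))   # source-language word vectors (e.g. English)
Z = rng.normal(size=(n, d_t))   # target-language word vectors (e.g. Spanish)

# Learn W minimizing sum_i ||x_i W - z_i||^2 (ordinary least squares).
W, *_ = np.linalg.lstsq(X, Z, rcond=None)   # shape (d_s, d_t)

def translate(x_vec, target_vocab_vectors):
    """Map a source vector and return indices of the nearest target words (cosine)."""
    z_hat = x_vec @ W
    sims = target_vocab_vectors @ z_hat / (
        np.linalg.norm(target_vocab_vectors, axis=1) * np.linalg.norm(z_hat) + 1e-9)
    return np.argsort(-sims)[:5]            # top-5 candidates, as used for precision@5

print(translate(X[0], Z))
```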
Maintenance treatment for old-age depression preserves health-related quality of life: a randomized, controlled trial of paroxetine and interpersonal psychotherapy.
OBJECTIVES To determine whether maintenance antidepressant pharmacotherapy and interpersonal psychotherapy sustain gains in health-related quality of life (HR-QOL) achieved during short-term treatment in older patients with depression. DESIGN After open combined treatment with paroxetine and interpersonal psychotherapy, responders were randomly assigned to a two (paroxetine vs placebo) by two (monthly interpersonal psychotherapy vs clinical management) double-blind, placebo-controlled maintenance trial. HR-QOL outcomes were assessed over 1 year. SETTING University-based clinic. PATIENTS Of the referred sample of 363 persons aged 70 and older with major depression, 210 gave consent, and 195 started acute treatment; 116 met criteria for recovery, entered maintenance treatment, and were included in this analysis. INTERVENTIONS Paroxetine; monthly manual-based interpersonal psychotherapy. MEASUREMENTS Overall HR-QOL as measured using the Quality of Well-Being Scale (QWB) and six specific HR-QOL domains derived from the Medical Outcomes Study 36-item Short-Form Health Survey (SF-36) subscales. RESULTS All domains of HR-QOL except physical functioning improved with successful acute and continuation treatment. After controlling for any effects of psychotherapy, pharmacotherapy was superior to placebo in preserving overall well-being (P=.04, effect size (r)=0.23), social functioning (P=.02, r=0.27), and role limitations due to emotional problems (P=.007, r=0.30). Interpersonal psychotherapy (controlling for the effects of pharmacotherapy) did not preserve HR-QOL better than supportive clinical management. CONCLUSION Maintenance antidepressant pharmacotherapy is superior to placebo in preserving improvements in overall well-being achieved with treatment response in late-life depression. No such benefit was seen with interpersonal psychotherapy.
A Systematic Review of Research on Open Source Software in Commercial Software Product Development
Background: The popularity of the open source software development in the last decade, has brought about an increased interest from the industry on how to use open source components, participate in the open source community, build business models around this type of software development, and learn more about open source development methodologies. Aim: The aim of this study is to review research carried out on usage of open source components and development methodologies by the industry, as well as companies’ participation in the open source community. Method: Systematic review through searches in library databases and manual identification of articles from the open source conference. Results: 19 articles were identified. Conclusions: The articles could be divided into four categories: open source as part of component based software engineering, business models with open source in commercial organization, company participation in open source development communities, and usage of open source processes within a company.
Developing web services choreography standards - the case of REST vs. SOAP
This paper presents a case study of the development of standards in the area of cross-organizational workflows based on web services. We discuss two opposing types of standards: those based on SOAP, with tightly coupled designs similar to remote procedure calls, and those based on REST, with loosely coupled designs similar to the navigating of web links. We illustrate the standardization process, clarify the technical underpinnings of the conflict, and analyze the interests of stakeholders. The decision criteria for each group of stakeholders are discussed. Finally, we present implications for both the workflow and the wider Internet communities.
Optimal Real-Time Pricing Algorithm Based on Utility Maximization for Smart Grid
In this paper, we consider a smart power infrastructure, where several subscribers share a common energy source. Each subscriber is equipped with an energy consumption controller (ECC) unit as part of its smart meter. Each smart meter is connected to not only the power grid but also a communication infrastructure such as a local area network. This allows two-way communication among smart meters. Considering the importance of energy pricing as an essential tool to develop efficient demand side management strategies, we propose a novel real-time pricing algorithm for the future smart grid. We focus on the interactions between the smart meters and the energy provider through the exchange of control messages which contain subscribers' energy consumption and the real-time price information. First, we analytically model the subscribers' preferences and their energy consumption patterns in the form of carefully selected utility functions based on concepts from microeconomics. Second, we propose a distributed algorithm which automatically manages the interactions among the ECC units at the smart meters and the energy provider. The algorithm finds the optimal energy consumption levels for each subscriber to maximize the aggregate utility of all subscribers in the system in a fair and efficient fashion. Finally, we show that the energy provider can encourage some desirable consumption patterns among the subscribers by means of the proposed real-time pricing interactions. Simulation results confirm that the proposed distributed algorithm can potentially benefit both subscribers and the energy provider.
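A simplified sketch of this kind of price/consumption exchange is given below, assuming quadratic utilities U_i(x) = w_i x - 0.5 a_i x^2 (so each ECC's best response has a closed form) and a provider that nudges the price until aggregate demand matches a supply cap. The parameters are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users = 5
w = rng.uniform(2.0, 4.0, n_users)   # willingness-to-pay parameters
a = rng.uniform(0.5, 1.0, n_users)   # saturation parameters
supply_cap = 10.0                    # energy available from the provider
price, step = 1.0, 0.05

def best_response(price):
    # Each ECC maximizes U_i(x) - price * x  =>  x_i = max(0, (w_i - price) / a_i)
    return np.maximum(0.0, (w - price) / a)

for it in range(500):
    demand = best_response(price)            # smart meters report their consumption
    excess = demand.sum() - supply_cap
    price = max(0.0, price + step * excess)  # provider raises the price if over-subscribed

print("price:", round(price, 3))
print("consumption:", np.round(best_response(price), 3))
print("total demand vs cap:", round(best_response(price).sum(), 3), supply_cap)
```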
The «Economic» and the «Social»: Autonomy of Spheres and Disciplinary Boundaries
The research agenda of the New Economic Sociology since its emergence in the 1970s–1980s can be described as a «negative program», focused mainly on the critique of economics. Nowadays many economic sociologists, as well as sympathizing economists, observe a theoretical crisis of the negative program. Drawing on the works of M. Weber and K. Polanyi, it is shown that fundamental drawbacks of this particular economic-sociological model of explanation arise from the widespread belief of economic sociologists that economic action is a form of social action. The author argues that the problematic relation between the «economic» and the «social» should not serve to draw aprioristic disciplinary boundaries between economics and economic sociology. Instead, the suggestion is put forward to make this relation the subject of economic-sociological study, and some fresh and important theoretical tools for such an agenda are indicated.
A simulation study of the number of events per variable in logistic regression analysis.
We performed a Monte Carlo study to evaluate the effect of the number of events per variable (EPV) analyzed in logistic regression analysis. The simulations were based on data from a cardiac trial of 673 patients in which 252 deaths occurred and seven variables were cogent predictors of mortality; the number of events per predictive variable was (252/7 =) 36 for the full sample. For the simulations, at values of EPV = 2, 5, 10, 15, 20, and 25, we randomly generated 500 samples of the 673 patients, chosen with replacement, according to a logistic model derived from the full sample. Simulation results for the regression coefficients for each variable in each group of 500 samples were compared for bias, precision, and significance testing against the results of the model fitted to the original sample. For EPV values of 10 or greater, no major problems occurred. For EPV values less than 10, however, the regression coefficients were biased in both positive and negative directions; the large sample variance estimates from the logistic model both overestimated and underestimated the sample variance of the regression coefficients; the 90% confidence limits about the estimated values did not have proper coverage; the Wald statistic was conservative under the null hypothesis; and paradoxical associations (significance in the wrong direction) were increased. Although other factors (such as the total number of events, or sample size) may influence the validity of the logistic model, our findings indicate that low EPV can lead to major problems.
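A scaled-down sketch of this simulation design (far fewer replications and predictors than the study, with an arbitrary coefficient vector and event rate) can be written with unpenalized logistic fits from statsmodels, making the dependence of coefficient bias on EPV easy to reproduce in miniature.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
true_beta = np.array([0.5, -0.5, 0.8])       # illustrative coefficients
n_vars = len(true_beta)

def simulate_epv(epv, n_reps=200):
    """Draw datasets sized so that the expected events per variable is roughly `epv`."""
    estimates = []
    for _ in range(n_reps):
        n = int(epv * n_vars / 0.3)          # ~30% event rate gives the requested EPV
        X = rng.normal(size=(n, n_vars))
        p = 1 / (1 + np.exp(-(X @ true_beta - 0.85)))   # intercept tuned for ~30% events
        y = rng.binomial(1, p)
        try:
            fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
            estimates.append(fit.params[1:])             # drop the intercept
        except Exception:
            pass                                         # separation can occur at low EPV
    est = np.array(estimates)
    return est.mean(axis=0) - true_beta                  # bias per coefficient

for epv in (2, 5, 10, 25):
    print(f"EPV={epv:>2}  mean bias per coefficient:", np.round(simulate_epv(epv), 3))
```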
Cannabis use amongst patients with inflammatory bowel disease.
BACKGROUND Experimental evidence suggests the endogenous cannabinoid system may protect against colonic inflammation, leading to the possibility that activation of this system may have a therapeutic role in inflammatory bowel disease (IBD). Medicinal use of cannabis for chronic pain and other symptoms has been reported in a number of medical conditions. We aimed to evaluate cannabis use in patients with IBD. METHODS One hundred patients with ulcerative colitis (UC) and 191 patients with Crohn's disease (CD) attending a tertiary-care outpatient clinic completed a questionnaire regarding current and previous cannabis use, socioeconomic factors, disease history and medication use, including complementary alternative medicines. Quality of life was assessed using the short-inflammatory bowel disease questionnaire. RESULTS A comparable proportion of UC and CD patients reported lifetime [48/95 (51%) UC vs. 91/189 (48%) CD] or current [11/95 (12%) UC vs. 30/189 (16%) CD] cannabis use. Of lifetime users, 14/43 (33%) UC and 40/80 (50%) CD patients have used it to relieve IBD-related symptoms, including abdominal pain, diarrhoea and reduced appetite. Patients were more likely to use cannabis for symptom relief if they had a history of abdominal surgery [29/48 (60%) vs. 24/74 (32%); P=0.002], chronic analgesic use [29/41 (71%) vs. 25/81 (31%); P<0.001], complementary alternative medicine use [36/66 (55%) vs. 18/56 (32%); P=0.01] and a lower short inflammatory bowel disease questionnaire score (45.1±2.1 vs. 50.3±1.5; P=0.03). Patients who had used cannabis [60/139 (43%)] were more likely than nonusers [13/133 (10%); P<0.001 vs. users] to express an interest in participating in a hypothetical therapeutic trial of cannabis for IBD. CONCLUSION Cannabis use is common amongst patients with IBD for symptom relief, particularly amongst those with a history of abdominal surgery, chronic abdominal pain and/or a low quality of life index. The therapeutic benefits of cannabinoid derivatives in IBD may warrant further exploration.
Is emotional contagion special? An fMRI study on neural systems for affective and cognitive empathy
Empathy allows us to simulate others' affective and cognitive mental states internally, and it has been proposed that the mirroring or motor representation systems play a key role in such simulation. As emotions are related to important adaptive events linked with benefit or danger, simulating others' emotional states might constitute a special case of empathy. In this functional magnetic resonance imaging (fMRI) study we tested if emotional versus cognitive empathy would facilitate the recruitment of brain networks involved in motor representation and imitation in healthy volunteers. Participants were presented with photographs depicting people in neutral everyday situations (cognitive empathy blocks), or suffering serious threat or harm (emotional empathy blocks). Participants were instructed to empathize with specified persons depicted in the scenes. Emotional versus cognitive empathy resulted in increased activity in limbic areas involved in emotion processing (thalamus), and also in cortical areas involved in face (fusiform gyrus) and body perception, as well as in networks associated with mirroring of others' actions (inferior parietal lobule). When brain activation resulting from viewing the scenes was controlled, emotional empathy still engaged the mirror neuron system (premotor cortex) more than cognitive empathy. Further, thalamus and primary somatosensory and motor cortices showed increased functional coupling during emotional versus cognitive empathy. The results suggest that emotional empathy is special. Emotional empathy facilitates somatic, sensory, and motor representation of other peoples' mental states, and results in more vigorous mirroring of the observed mental and bodily states than cognitive empathy.
MAUI: making smartphones last longer with code offload
This paper presents MAUI, a system that enables fine-grained energy-aware offload of mobile code to the infrastructure. Previous approaches to these problems either relied heavily on programmer support to partition an application, or they were coarse-grained requiring full process (or full VM) migration. MAUI uses the benefits of a managed code environment to offer the best of both worlds: it supports fine-grained code offload to maximize energy savings with minimal burden on the programmer. MAUI decides at run-time which methods should be remotely executed, driven by an optimization engine that achieves the best energy savings possible under the mobile device's current connectivity constraints. In our evaluation, we show that MAUI enables: 1) a resource-intensive face recognition application that consumes an order of magnitude less energy, 2) a latency-sensitive arcade game application that doubles its refresh rate, and 3) a voice-based language translation application that bypasses the limitations of the smartphone environment by executing unsupported components remotely.
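A toy sketch of the run-time decision is shown below: for each candidate method, compare the energy to execute it locally against the energy to ship its state over the current link. The energy numbers and method profiles are made up, and the real MAUI system solves a global 0-1 optimization over the call graph rather than this per-method greedy check.

```python
# Per-method profiling data (all values are illustrative).
methods = {
    "detectFaces":   {"local_energy_mJ": 900.0, "state_bytes": 120_000},
    "renderFrame":   {"local_energy_mJ":  40.0, "state_bytes": 300_000},
    "translateText": {"local_energy_mJ": 350.0, "state_bytes":  20_000},
}

def offload_plan(methods, bandwidth_bps, radio_power_mW):
    """Greedy per-method decision: offload when transfer energy < local energy."""
    plan = {}
    for name, prof in methods.items():
        transfer_s = 8 * prof["state_bytes"] / bandwidth_bps
        transfer_energy_mJ = radio_power_mW * transfer_s      # mW * s = mJ
        plan[name] = ("remote" if transfer_energy_mJ < prof["local_energy_mJ"]
                      else "local")
    return plan

# Decisions change with connectivity: compare Wi-Fi against a slow cellular link.
print("Wi-Fi:   ", offload_plan(methods, bandwidth_bps=20_000_000, radio_power_mW=700))
print("Cellular:", offload_plan(methods, bandwidth_bps=500_000, radio_power_mW=1200))
```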
"To standardize or not to standardize?" - understanding the effect of business process complexity on business process standardization
Today, practitioners often have to face a number of challenges during the standardization of business processes, and some processes can be standardized more easily (with less effort) than others. Our previous research has shown that major drivers of successful business process standardization are the characteristics, or rather the complexity, of a particular business process. In order to minimize standardization effort, we need an instrument that allows identifying processes that are appropriate for standardization by assessing each process’ individual degree of complexity. On the way towards such an instrument, the first step is to develop an understanding of how the complexity of a business process affects its standardization. Therefore, the main aim of this paper is twofold: First, we provide a research model representing the fundamental relationships between our main constructs: standardization effort, process complexity, and process standardization. Second, we report on the development of valid measurement scales designed to measure these constructs.
Associative Embedding: End-to-End Learning for Joint Detection and Grouping
We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner, including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines; instead, we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets.
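A minimal sketch of the grouping (tag) loss follows: a pull term drags each detection's embedding toward its group mean, and a push term penalizes group means that lie close together. It is shown in plain NumPy on scalar tags with made-up values, rather than inside a full network.

```python
import numpy as np

def associative_embedding_loss(tags, group_ids, sigma=1.0):
    """tags: 1-D embeddings predicted per detection; group_ids: ground-truth grouping."""
    groups = np.unique(group_ids)
    means = np.array([tags[group_ids == g].mean() for g in groups])

    # Pull: embeddings of the same group should match their reference (mean) tag.
    pull = sum(((tags[group_ids == g] - m) ** 2).mean()
               for g, m in zip(groups, means)) / len(groups)

    # Push: reference tags of different groups should be far apart.
    push = 0.0
    for i in range(len(groups)):
        for j in range(len(groups)):
            if i != j:
                push += np.exp(-(means[i] - means[j]) ** 2 / (2 * sigma ** 2))
    push /= max(1, len(groups) * (len(groups) - 1))
    return pull + push

tags = np.array([0.1, 0.15, 2.0, 2.1, -1.9])      # predicted tags for 5 keypoint detections
group_ids = np.array([0, 0, 1, 1, 2])             # which person each detection belongs to
print(associative_embedding_loss(tags, group_ids))
```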
Privacy-preserving Machine Learning through Data Obfuscation
As machine learning becomes a practice and commodity, numerous cloud-based services and frameworks are provided to help customers develop and deploy machine learning applications. While it is prevalent to outsource model training and serving tasks in the cloud, it is important to protect the privacy of sensitive samples in the training dataset and prevent information leakage to untrusted third parties. Past work has shown that a malicious machine learning service provider or end user can easily extract critical information about the training samples, from the model parameters or even just model outputs. In this paper, we propose a novel and generic methodology to preserve the privacy of training data in machine learning applications. Specifically, we introduce an obfuscation function and apply it to the training data before feeding them to the model training task. This function adds random noise to existing samples, or augments the dataset with new samples. By doing so, sensitive information about the properties of individual samples, or statistical properties of a group of samples, is hidden. Meanwhile, the model trained from the obfuscated dataset can still achieve high accuracy. With this approach, the customers can safely disclose the data or models to third-party providers or end users without the need to worry about data privacy. Our experiments show that this approach can effectively defeat four existing types of machine learning privacy attacks at negligible accuracy cost.
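A simple sketch of the noise-based variant of the idea: perturb training samples with Gaussian noise (and append noisy copies as augmentation) before training, then compare accuracy against a model trained on the raw data. The noise scale, classifier, and dataset are illustrative choices, not the paper's exact setup.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def obfuscate(X, noise_scale=0.1, augment=1):
    """Add per-feature Gaussian noise and append additional noisy copies."""
    scale = noise_scale * X.std(axis=0)
    noisy = X + rng.normal(scale=scale, size=X.shape)
    copies = [noisy] + [X + rng.normal(scale=scale, size=X.shape) for _ in range(augment)]
    return np.vstack(copies)

y_aug = np.concatenate([y_tr, y_tr])                 # labels for the two noisy copies
clf_raw = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
clf_obf = LogisticRegression(max_iter=5000).fit(obfuscate(X_tr), y_aug)

print("accuracy on raw data:       ", round(clf_raw.score(X_te, y_te), 3))
print("accuracy on obfuscated data:", round(clf_obf.score(X_te, y_te), 3))
```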
Metabolic responses to salt stress of barley (Hordeum vulgare L.) cultivars, Sahara and Clipper, which differ in salinity tolerance
Plants show varied cellular responses to salinity that are partly associated with maintaining low cytosolic Na(+) levels and a high K(+)/Na(+) ratio. Plant metabolites change with elevated Na(+), some changes are likely to help restore osmotic balance while others protect Na(+)-sensitive proteins. Metabolic responses to salt stress are described for two barley (Hordeum vulgare L.) cultivars, Sahara and Clipper, which differed in salinity tolerance under the experimental conditions used. After 3 weeks of salt treatment, Clipper ceased growing whereas Sahara resumed growth similar to the control plants. Compared with Clipper, Sahara had significantly higher leaf Na(+) levels and less leaf necrosis, suggesting they are more tolerant to accumulated Na(+). Metabolite changes in response to the salt treatment also differed between the two cultivars. Clipper plants had elevated levels of amino acids, including proline and GABA, and the polyamine putrescine, consistent with earlier suggestions that such accumulation may be correlated with slower growth and/or leaf necrosis rather than being an adaptive response to salinity. It is suggested that these metabolites may be an indicator of general cellular damage in plants. By contrast, in the more tolerant Sahara plants, the levels of the hexose phosphates, TCA cycle intermediates, and metabolites involved in cellular protection increased in response to salt. These solutes remain unchanged in the more sensitive Clipper plants. It is proposed that these responses in the more tolerant Sahara are involved in cellular protection in the leaves and are involved in the tolerance of Sahara leaves to high Na(+).
Early sensitivity to arguments: how preschoolers weight circular arguments.
Observational studies suggest that children as young as 2 years can evaluate some of the arguments people offer them. However, experimental studies of sensitivity to different arguments have not yet targeted children younger than 5 years. The current study aimed at bridging this gap by testing the ability of preschoolers (3-, 4-, and 5-year-olds) to weight arguments. To do so, it focused on a common type of fallacy, circularity, to which 5-year-olds are sensitive. The current experiment asked children (and, as a group control, adults) to choose between the contradictory opinions of two speakers. In the first task, participants of all age groups favored an opinion supported by a strong argument over an opinion supported by a circular argument. In the second task, 4- and 5-year-olds, but not 3-year-olds or adults, favored the opinion supported by a circular argument over an unsupported opinion. We suggest that the results of these tasks in 3- to 5-year-olds are best interpreted as resulting from the combination of two mechanisms: (a) basic skills of argument evaluations that process the content of arguments, allowing children as young as 3 years to favor non-circular arguments over circular arguments, and (b) a heuristic that leads older children (4- and 5-year-olds) to give some weight to circular arguments, possibly by interpreting these arguments as a cue to speaker dominance.
Building a database on S3
There has been a great deal of hype about Amazon's simple storage service (S3). S3 provides infinite scalability and high availability at low cost. Currently, S3 is used mostly to store multi-media documents (videos, photos, audio) which are shared by a community of people and rarely updated. The purpose of this paper is to demonstrate the opportunities and limitations of using S3 as a storage system for general-purpose database applications which involve small objects and frequent updates. Read, write, and commit protocols are presented. Furthermore, the cost ($), performance, and consistency properties of such a storage system are studied.
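A bare-bones sketch of treating S3 objects as database pages is given below, using boto3 as an assumption (the paper predates this SDK and defines its own client-side protocols): records are grouped into small pages, and every update is a read-modify-put of a whole page. The bucket name and key layout are hypothetical. This read-modify-write pattern is exactly what makes explicit write and commit protocols necessary for controlling consistency and cost.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-database-bucket"     # hypothetical bucket name

def read_page(page_id):
    """Fetch one page (a small JSON object holding many records)."""
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=f"pages/{page_id}.json")
        return json.loads(obj["Body"].read())
    except s3.exceptions.NoSuchKey:
        return {}

def write_record(page_id, record_key, record):
    """Read-modify-write of a whole page; last writer wins without extra coordination."""
    page = read_page(page_id)
    page[record_key] = record
    s3.put_object(Bucket=BUCKET,
                  Key=f"pages/{page_id}.json",
                  Body=json.dumps(page).encode())

# Usage: frequent small updates each rewrite a full page, so page size, update
# batching, and commit ordering directly drive both cost ($) and consistency.
write_record("p0001", "user:42", {"name": "Ada", "balance": 10})
print(read_page("p0001"))
```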
Pathology, pseudopathology, and the Dark Triad of personality
The Dark Triad traits (i.e., psychopathy, Machiavellianism, and narcissism) have traditionally been viewed as undesirable and pathological. In contrast, an evolutionary perspective suggests that traits like these might be pseudopathologies; traits that society actively dislikes in that they pose a threat to the collective good. We examined (N = 290) how the Dark Triad traits related to intrapersonal (i.e., behavioral dysfunction), quasibehavioral (i.e., reactive and proactive aggression), and interpersonal (i.e., communal and exchange orientation) factors. Psychopathy predicted high rates of behavioral dysregulation and both forms of aggression. Psychopathy and Machiavellianism showed an aversion towards communalism but an exchange orientation to social relationships. Lastly, individual differences in the Dark Triad traits accounted for part (5–22%) of the sex differences in social strategies and aggression. The theoretical implications of these findings are discussed in, and in support of, an evolutionary paradigm.
Health economic evaluation of acupuncture along meridians for treating migraine in China: results from a randomized controlled trial
BACKGROUND To evaluate different types of acupuncture treatment for migraine in China from the perspective of health economics, particularly the comparison between treatment of specific acupoints in Shaoyang meridians and penetrating sham acupoints treatment. METHODS Data were obtained from a multicenter, randomized controlled trial of acupuncture treatment in patients with migraine. Four-hundred eighty migraineurs were randomly assigned to 3 arms of treatment with genuine acupoints and 1 arm of penetrating sham acupoints. The primary outcome measurement was the cost-effectiveness ratio (C/E), expressed as cost per 1 day reduction of headache days from baseline to week 16. Cost-comparison analyses, differences in the migraine-specific quality of life questionnaire (MSQ), and the incremental cost-effectiveness ratio were taken as secondary outcome measurements. In addition, a sensitivity analysis was conducted. RESULTS The total cost per patient was ¥1273.2 (95% CI 1171.3-1375.1) in the Shaoyang specific group, ¥1427.7 (95% CI 1311.8-1543.6) in the Shaoyang non-specific group, ¥1490.8 (95% CI 1327.1-1654.6) in the Yangming specific group, and ¥1470.1 (95% CI 1358.8-1581.3) in the sham acupuncture group. The reduced days with migraine were 3.972 ± 2.7, 3.555 ± 2.8, 3.793 ± 3.6, and 2.155 ± 3.7 in these 4 groups (P < 0.05 for each genuine acupoints group vs the sham group), respectively, at week 16. The C/Es of the 4 groups were 320.5, 401.6, 393.1, and 682.2, respectively. Results of the sensitivity analysis were consistent with that of the cost-effectiveness analysis. The Shaoyang specific group significantly improved in all 3 MSQ domains compared with the sham acupuncture group. CONCLUSIONS Treatment of specific acupoints in Shaoyang meridians is more cost-effective than that of non-acupoints, representing a dramatic improvement in the quality of life of people with migraine and a significant reduction in cost. Compared with the other 3 groups, Shaoyang-specific acupuncture is a relatively cost-effective treatment for migraine prophylaxis in China. TRIAL REGISTRATION Clinical Trials.gov NCT00599586.
A Pilot Arabic Propbank
In this paper, we present the details of creating a pilot Arabic proposition bank (Propbank). Propbanks exist for both English and Chinese. However, the morphological and syntactic expression of linguistic phenomena in Arabic yields a very different type of process in creating an Arabic Propbank. Hence, we highlight those characteristics of Arabic that make creating a Propbank for the language a different challenge compared to the creation of an English Propbank. We believe that many of the lessons learned in dealing with Arabic could generalise to other languages that exhibit equally rich morphology and relatively free word order.
Softmax Q-Distribution Estimation for Structured Prediction: A Theoretical Interpretation for RAML
Reward augmented maximum likelihood (RAML) is a simple and effective learning framework to directly optimize towards the reward function in structured prediction tasks. RAML incorporates task-specific reward by performing maximum likelihood updates on candidate outputs sampled according to an exponentiated payoff distribution, which gives higher probabilities to candidates that are close to the reference output. While RAML is notable for its simplicity, efficiency, and its impressive empirical successes, the theoretical properties of RAML, especially the behavior of the exponentiated payoff distribution, have not been examined thoroughly. In this work, we introduce softmax Q-distribution estimation, a novel theoretical interpretation of RAML, which reveals the relation between RAML and Bayesian decision theory. The softmax Q-distribution can be regarded as a smooth approximation of the Bayes decision boundary, and the Bayes decision rule is achieved by decoding with this Q-distribution. We further show that RAML is equivalent to approximately estimating the softmax Q-distribution. Experiments on three structured prediction tasks with rewards defined on sequential (named entity recognition), tree-based (dependency parsing) and irregular (machine translation) structures show notable improvements over maximum likelihood baselines.
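A minimal sketch of the RAML weighting step: given a reference output and a set of candidates, candidates are weighted by an exponentiated payoff distribution q(y) proportional to exp(r(y, y*)/τ), and the surrogate training loss is the q-weighted negative log-likelihood. The reward below is a toy token-overlap score and the model probabilities are placeholders, not outputs of a real structured predictor.

```python
import numpy as np

def reward(candidate, reference):
    """Toy reward: fraction of reference tokens recovered by the candidate."""
    ref = reference.split()
    return sum(tok in candidate.split() for tok in ref) / len(ref)

def raml_weights(candidates, reference, tau=0.5):
    """Exponentiated payoff distribution q(y) ∝ exp(r(y, y*) / tau)."""
    r = np.array([reward(c, reference) for c in candidates])
    logits = r / tau
    q = np.exp(logits - logits.max())
    return q / q.sum()

reference = "the cat sat on the mat"
candidates = ["the cat sat on the mat",
              "the cat sat on a mat",
              "a dog ran in the park"]

q = raml_weights(candidates, reference)
model_logprobs = np.log(np.array([0.2, 0.5, 0.3]))   # placeholder model log-probabilities
raml_loss = -(q * model_logprobs).sum()              # q-weighted negative log-likelihood
print("q:", np.round(q, 3), " RAML surrogate loss:", round(raml_loss, 3))
```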
Exposing native device APIs to web apps
A recent survey among developers revealed that half plan to use HTML5 for mobile apps in the future. An earlier survey showed that access to native device APIs is the biggest shortcoming of HTML5 compared to native apps. Several different approaches exist to overcome this limitation, among them cross-compilation and packaging the HTML5 as a native app. In this paper we propose a novel approach by using a device-local service that runs on the smartphone and that acts as a gateway to the native layer for HTML5-based apps running inside the standard browser. WebSockets are used for bi-directional communication between the web apps and the device-local service. The service approach is a generalization of the packaging solution. In this paper we describe our approach and compare it with other popular ways to grant web apps access to the native API layer of the operating system.
Neural Task Programming: Learning to Generalize Across Hierarchical Tasks
In this work, we propose a novel robot learning framework called Neural Task Programming (NTP), which bridges the idea of few-shot learning from demonstration and neural program induction. NTP takes as input a task specification (e.g., video demonstration of a task) and recursively decomposes it into finer sub-task specifications. These specifications are fed to a hierarchical neural program, where bottom-level programs are callable subroutines that interact with the environment. We validate our method in three robot manipulation tasks. NTP achieves strong generalization across sequential tasks that exhibit hierarchical and compositional structures. The experimental results show that NTP learns to generalize well towards unseen tasks with increasing lengths, variable topologies, and changing objectives. Project page: stanfordvl.github.io/ntp/.
Role of FDG-PET in the implementation of involved-node radiation therapy for Hodgkin lymphoma patients.
PURPOSE This study examines the role of (18)F-labeled fluorodeoxyglucose positron emission tomography (FDG-PET) in the implementation of involved-node radiation therapy (INRT) in patients treated for clinical stages (CS) I/II supradiaphragmatic Hodgkin lymphoma (HL). METHODS AND MATERIAL Patients with untreated CS I/II HL enrolled in the randomized EORTC/LYSA/FIL Intergroup H10 trial and participating in a real-time prospective quality assurance program were prospectively included in this study. Data were electronically obtained from 18 French cancer centers. All patients underwent a PET-computed tomography (PET-CT) scan and post-chemotherapy planning CT scanning. The pre-chemotherapy gross tumor volume (GTV) and the post-chemotherapy clinical target volume (CTV) were first delineated on CT only by the radiation oncologist. The planning PET was then co-registered, and the delineated volumes were jointly analyzed by the radiation oncologist and the nuclear medicine physician. Lymph nodes undetected on CT but FDG-avid were recorded, and the previously determined GTV and CTV were modified according to FDG-PET results. RESULTS From March 2007 to February 2010, 135 patients were included in the study. PET-CT identified at least 1 additional FDG-avid lymph node in 95 of 135 patients (70.4%; 95% confidence interval [CI]: 61.9%-77.9%) and 1 additional lymph node area in 55 of 135 patients (40.7%; 95% CI: 32.4%-49.5%). The mean increases in the GTV and CTV were 8.8% and 7.1%, respectively. The systematic addition of PET to CT led to a CTV increase in 60% of the patients. CONCLUSIONS Pre-chemotherapy FDG-PET leads to significantly better INRT delineation without necessarily increasing radiation volumes.
Relating constraint answer set programming languages and algorithms
Recently a logic programming language AC was proposed by Mellarkod et al. (2008) to integrate answer set programming and constraint logic programming. Soon after that, a clingcon language integrating answer set programming and finite domain constraints, as well as an ezcsp language integrating answer set programming and constraint logic programming were introduced. The development of these languages and systems constitutes the appearance of a new AI subarea called constraint answer set programming. All these languages have something in common. In particular, they aim at developing new efficient inference algorithms that combine traditional answer set programming procedures and other methods in constraint programming. Yet, the exact relation between the constraint answer set programming languages and the underlying systems is not well understood. In this paper we address this issue by formally stating the precise relation between several constraint answer set programming languages – AC, clingcon, ezcsp – as well as the underlying systems.
Training Generative Adversarial Networks with Binary Neurons by End-to-end Backpropagation
We propose the BinaryGAN, a novel generative adversarial network (GAN) that uses binary neurons at the output layer of the generator. We employ the sigmoid-adjusted straight-through estimators to estimate the gradients for the binary neurons and train the whole network by end-to-end backpropagation. The proposed model is able to directly generate binary-valued predictions at test time. We implement such a model to generate binarized MNIST digits and experimentally compare the performance for different types of binary neurons, GAN objectives and network architectures. Although the results are still preliminary, we show that it is possible to train a GAN that has binary neurons and that the use of gradient estimators can be a promising direction for modeling discrete distributions with GANs. For reproducibility, the source code is available at https://github.com/salu133445/binarygan.
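A small PyTorch sketch of a sigmoid-adjusted straight-through binary neuron is shown below: the forward pass emits hard 0/1 values (sampled Bernoulli-style), while the backward pass uses the gradient of the sigmoid as a surrogate. The tiny generator, layer sizes, and stand-in loss are illustrative and not the paper's architecture.

```python
import torch

class SigmoidSTBinary(torch.autograd.Function):
    """Hard binary output in the forward pass, sigmoid gradient in the backward pass."""

    @staticmethod
    def forward(ctx, logits):
        probs = torch.sigmoid(logits)
        ctx.save_for_backward(probs)
        return torch.bernoulli(probs)              # stochastic binary neurons (0/1)

    @staticmethod
    def backward(ctx, grad_output):
        (probs,) = ctx.saved_tensors
        return grad_output * probs * (1 - probs)   # d(sigmoid)/d(logits) as surrogate

binarize = SigmoidSTBinary.apply

# Tiny generator: noise -> logits -> binary "image" of 16 pixels.
gen = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 16))
z = torch.randn(4, 8)
binary_out = binarize(gen(z))
loss = binary_out.sum()          # stand-in for the discriminator loss
loss.backward()                  # gradients flow to the generator via the estimator
print(binary_out, gen[0].weight.grad.abs().sum())
```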
Combining (Integer) Linear Programming Techniques and Metaheuristics for Combinatorial Optimization
Several different ways exist for approaching hard optimization problems. Mathematical programming techniques, including (integer) linear programming based methods, and metaheuristic approaches are two highly successful streams for combinatorial problems. These two have been established by different communities more or less in isolation from each other. Only over the last years a larger number of researchers recognized the advantages and huge potentials of building hybrids of mathematical programming methods and metaheuristics. In fact, many problems can be practically solved much better by exploiting synergies between these different approaches than by “pure” traditional algorithms. The crucial issue is how mathematical programming methods and metaheuristics should be combined for achieving those benefits. Many approaches have been proposed in the last few years. After giving a brief introduction to the basics of integer linear programming, this chapter surveys existing techniques for such combinations and classifies them into ten methodological categories.
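One hybrid pattern surveyed in this line of work, large-neighborhood search with an exact subsolver, can be illustrated compactly: a metaheuristic repeatedly frees a few variables of a 0/1 knapsack and re-optimizes just those exactly (here by enumeration standing in for an ILP solver). The instance is random and the neighborhood size is an arbitrary choice.

```python
import itertools
import random

random.seed(0)
n, capacity = 30, 50
values = [random.randint(5, 30) for _ in range(n)]
weights = [random.randint(1, 15) for _ in range(n)]

def total(sol):
    w = sum(weights[i] for i in range(n) if sol[i])
    v = sum(values[i] for i in range(n) if sol[i])
    return v if w <= capacity else -1          # infeasible solutions are rejected

# Start from a greedy solution (metaheuristic component).
order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
sol, used = [0] * n, 0
for i in order:
    if used + weights[i] <= capacity:
        sol[i], used = 1, used + weights[i]

# Large-neighborhood search: free k items and solve that sub-problem exactly.
k = 6
for _ in range(200):
    free = random.sample(range(n), k)
    best_assign, best_val = None, -1
    for assign in itertools.product([0, 1], repeat=k):   # exact enumeration (ILP stand-in)
        cand = sol[:]
        for idx, bit in zip(free, assign):
            cand[idx] = bit
        val = total(cand)
        if val > best_val:
            best_assign, best_val = cand, val
    if best_val >= total(sol):
        sol = best_assign

print("best value found:", total(sol))
```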
Assessing a Firm's Web Presence: A Heuristic Evaluation Procedure for the Measurement of Usability
Web site usability is a critical metric for assessing the quality of a firm’s Web presence. A measure of usability must not only provide a global rating for a specific Web site, ideally it should also illuminate specific strengths and weaknesses associated with site design. In this paper, we describe a heuristic evaluation procedure for examining the usability of Web sites. The procedure utilizes a comprehensive set of usability guidelines developed by Microsoft. We present the categories and subcategories comprising these guidelines, and discuss the development of an instrument that operationalizes the measurement of usability. The proposed instrument was tested in a heuristic evaluation study where 1,475 users rated multiple Web sites from four different industry sectors: airlines, online bookstores, automobile manufacturers, and car rental agencies. To enhance the external validity of the study, users were asked to assume the role of a consumer or an investor when assessing usability. Empirical results suggest that the evaluation procedure, the instrument, as well as the usability metric exhibit good properties. Implications of the findings for researchers, for Web site designers, and for heuristic evaluation methods in usability testing are offered. (Usability; Heuristic Evaluation; Microsoft Usability Guidelines; Human-Computer Interaction; Web Interface)
Multi-view Pictorial Structures for 3D Human Pose Estimation
Pictorial structure models are the de facto standard for 2D human pose estimation. Numerous refinements and improvements have been proposed such as discriminatively trained body part detectors, flexible body models and local and global mixtures. While these techniques allow achieving state-of-the-art performance for 2D pose estimation, they have not yet been extended to enable pose estimation in 3D; instead, this problem is traditionally addressed using 3D body models and involves complex inference in a high-dimensional space of 3D body configurations. We formulate the articulated 3D human pose estimation problem as a joint inference over the set of 2D projections of the pose in each of the camera views. As a first contribution of this paper, we propose a 2D pose estimation approach that extends the state-of-the-art 2D pictorial structures model [6] with flexible parts, color features, multi-modal pairwise terms, and mixtures of pictorial structures. The second and main contribution is to extend this 2D pose estimation model to a multi-view model that performs joint reasoning over people poses seen from multiple viewpoints. The output of this novel model is then used to recover 3D pose. We evaluate our multi-view pictorial structures model on the HumanEva-I [8] and MPII Cooking [7] datasets. In comparison to related work for 3D pose estimation our approach achieves similar or better results while operating on single frames only and not relying on activity-specific motion models or tracking. Notably, our approach outperforms the state-of-the-art for activities with more complex motions. Single-view model: The pictorial structures model, originally introduced in [2, 3], represents the human body as a configuration $L = \{l_1, \ldots, l_N\}$ of $N$ rigid parts and a set of pairwise part relationships $E$. The image position and absolute orientation of each part is given by $l_i = (x_i, y_i, \theta_i)$. We formulate the model as a conditional random field, and assume that the probability of the part configuration $L$ given the image evidence $I$ factorizes into a product of unary and pairwise terms: $p(L \mid I) \propto \prod_{i=1}^{N} f_i(l_i; I) \cdot \prod_{(i,j) \in E} f_{ij}(l_i, l_j)$.
Discovery is never by chance: designing for (un)serendipity
Serendipity has a long tradition in the history of science as having played a key role in many significant discoveries. Computer scientists, valuing the role of serendipity in discovery, have attempted to design systems that encourage serendipity. However, that research has focused primarily on only one aspect of serendipity: that of chance encounters. In reality, for serendipity to be valuable chance encounters must be synthesized into insight. In this paper we show, through a formal consideration of serendipity and analysis of how various systems have seized on attributes of interpreting serendipity, that there is a richer space for design to support serendipitous creativity, innovation and discovery than has been tapped to date. We discuss how ideas might be encoded to be shared or discovered by 'association-hunting' agents. We propose considering not only the inventor's role in perceiving serendipity, but also how that inventor's perception may be enhanced to increase the opportunity for serendipity. We explore the role of environment and how we can better enable serendipitous discoveries to find a home more readily and immediately.
Mining Associated Text and Images with Dual-Wing Harmoniums
We propose a multi-wing harmonium model for mining multimedia data that extends and improves on earlier models based on two-layer random fields, which capture bidirectional dependencies between hidden topic aspects and observed inputs. This model can be viewed as an undirected counterpart of the two-layer directed models such as LDA for similar tasks, but bears significant difference in inference/learning cost tradeoffs, latent topic representations, and topic mixing mechanisms. In particular, our model facilitates efficient inference and robust topic mixing, and potentially provides high flexibilities in modeling the latent topic spaces. A contrastive divergence and a variational algorithm are derived for learning. We specialized our model to a dual-wing harmonium for captioned images, incorporating a multivariate Poisson for word-counts and a multivariate Gaussian for color histogram. We present empirical results on the applications of this model to classification, retrieval and image annotation on news video collections, and we report an extensive comparison with various extant models.
Graph Summarization: A Survey
While advances in computing resources have made processing enormous amounts of data possible, human ability to identify patterns in such data has not scaled accordingly. Thus, efficient computational methods for condensing and simplifying data are becoming vital for extracting actionable insights. In particular, while data summarization techniques have been studied extensively, only recently has summarizing interconnected data, or graphs, become popular. This survey is a structured, comprehensive overview of the state-of-the-art methods for summarizing graph data. We first broach the motivation behind and the challenges of graph summarization. We then categorize summarization approaches by the type of graphs taken as input and further organize each category by core methodology. Finally, we discuss applications of summarization on real-world graphs and conclude by describing some open problems in the field.
Integer linear programming inference for conditional random fields
Inference in Conditional Random Fields and Hidden Markov Models is done using the Viterbi algorithm, an efficient dynamic programming algorithm. In many cases, general (non-local and non-sequential) constraints may exist over the output sequence, but cannot be incorporated and exploited in a natural way by this inference procedure. This paper proposes a novel inference procedure based on integer linear programming (ILP) and extends CRF models to naturally and efficiently support general constraint structures. For sequential constraints, this procedure reduces to simple linear programming as the inference process. Experimental evidence is supplied in the context of an important NLP problem, semantic role labeling.
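For reference, a minimal NumPy Viterbi decoder of the kind the paper contrasts with ILP inference is given below: it maximizes over label sequences using only local emission and transition scores, which is exactly why non-local constraints cannot be injected into it directly. The scores here are random placeholders.

```python
import numpy as np

def viterbi(emission_scores, transition_scores):
    """emission_scores: (T, K) local scores; transition_scores: (K, K) label-pair scores."""
    T, K = emission_scores.shape
    dp = np.zeros((T, K))
    backptr = np.zeros((T, K), dtype=int)
    dp[0] = emission_scores[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + transition_scores + emission_scores[t][None, :]
        backptr[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0)
    # Backtrack the best label sequence.
    best = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]

rng = np.random.default_rng(0)
T, K = 6, 4                                   # sequence length, number of labels
print(viterbi(rng.normal(size=(T, K)), rng.normal(size=(K, K))))
```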
An improved backpropagation algorithm to avoid the local minima problem
We propose an improved backpropagation algorithm intended to avoid the local minima problem caused by neuron saturation in the hidden layer. Each training pattern has its own activation functions of neurons in the hidden layer. When the network outputs have not got their desired signals, the activation functions are adapted so as to prevent neurons in the hidden layer from saturating. Simulations on some benchmark problems have been performed to demonstrate the validity of the proposed method.
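For context, here is a plain NumPy backpropagation baseline on XOR, the kind of setting where hidden-layer saturation classically causes local minima. The paper's contribution, per-pattern adaptive activation functions, would replace the fixed sigmoid marked in the comments and is not reproduced here; the network size and learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)
lr = 0.5

for epoch in range(5000):
    # Forward pass; the paper would use per-pattern adaptive activations here
    # instead of this fixed sigmoid when hidden units saturate.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (mean squared error).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)       # this term vanishes when hidden units saturate
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print("predictions:", out.ravel().round(2))
```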
Stanford's Graph-based Neural Dependency Parser at the CoNLL 2017 Shared Task
This paper describes the neural dependency parser submitted by Stanford to the CoNLL 2017 Shared Task on parsing Universal Dependencies. Our system uses relatively simple LSTM networks to produce part of speech tags and labeled dependency parses from segmented and tokenized sequences of words. In order to address the rare word problem that abounds in languages with complex morphology, we include a character-based word representation that uses an LSTM to produce embeddings from sequences of characters. Our system was ranked first according to all five relevant metrics for the system: UPOS tagging (93.09%), XPOS tagging (82.27%), unlabeled attachment score (81.30%), labeled attachment score (76.30%), and content word labeled attachment score (72.57%).
Induction of robust type-I CD8+ T-cell responses in WHO grade 2 low-grade glioma patients receiving peptide-based vaccines in combination with poly-ICLC.
PURPOSE WHO grade 2 low-grade gliomas (LGG) with high risk factors for recurrence are mostly lethal despite current treatments. We conducted a phase I study to evaluate the safety and immunogenicity of subcutaneous vaccinations with synthetic peptides for glioma-associated antigen (GAA) epitopes in HLA-A2(+) adults with high-risk LGGs in the following three cohorts: (i) patients without prior progression, chemotherapy, or radiotherapy (RT); (ii) patients without prior progression or chemotherapy but with prior RT; and (iii) recurrent patients. EXPERIMENTAL DESIGN GAAs were IL13Rα2, EphA2, WT1, and Survivin. Synthetic peptides were emulsified in Montanide-ISA-51 and given every 3 weeks for eight courses with intramuscular injections of poly-ICLC, followed by q12 week booster vaccines. RESULTS Cohorts 1, 2, and 3 enrolled 12, 1, and 10 patients, respectively. No regimen-limiting toxicity was encountered except for one case with grade 3 fever, fatigue, and mood disturbance (cohort 1). ELISPOT assays demonstrated robust IFNγ responses against at least three of the four GAA epitopes in 10 and 4 cases of cohorts 1 and 3, respectively. Cohort 1 patients demonstrated significantly higher IFNγ responses than cohort 3 patients. Median progression-free survival (PFS) periods since the first vaccine are 17 months in cohort 1 (range, 10-47+) and 12 months in cohort 3 (range, 3-41+). The only patient with large astrocytoma in cohort 2 has been progression-free for more than 67 months since diagnosis. CONCLUSION The current regimen is well tolerated and induces robust GAA-specific responses in WHO grade 2 glioma patients. These results warrant further evaluations of this approach. Clin Cancer Res; 21(2); 286-94. ©2014 AACR.
Dynamic Malware Detection Using API Similarity
Hackers create different types of malware, such as Trojans, which they use to steal confidential user information (e.g., credit card details) with a few simple commands. Recent malware, however, has been created intelligently and on an uncontrolled scale, which makes malware analysis one of the most important subjects in information security. This paper proposes an efficient dynamic malware-detection method based on API similarity. The proposed method outperforms the traditional signature-based detection method. The experiment evaluated 197 malware samples, and the proposed method showed promising results in correctly identifying malware.
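A toy sketch of similarity scoring over extracted API call traces follows: unknown samples are compared against known-malicious traces with Jaccard similarity on call sets. All traces and the threshold are invented, and a real system would also weight call order and frequency.

```python
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# API call traces captured during dynamic analysis (illustrative only).
known_malware = {
    "trojan_A": ["RegSetValue", "CreateRemoteThread", "InternetOpenUrl", "WriteFile"],
    "keylogger_B": ["SetWindowsHookEx", "GetAsyncKeyState", "InternetOpenUrl"],
}
benign_baseline = ["CreateFile", "ReadFile", "CloseHandle"]

def classify(trace, threshold=0.5):
    scores = {name: jaccard(trace, calls) for name, calls in known_malware.items()}
    best = max(scores, key=scores.get)
    verdict = "malicious" if scores[best] >= threshold else "benign/unknown"
    return verdict, best, round(scores[best], 2)

sample = ["RegSetValue", "CreateRemoteThread", "WriteFile", "Sleep"]
print(classify(sample))            # high overlap with the trojan_A profile
print(classify(benign_baseline))   # low similarity to every known profile
```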
England in 1819: The Politics of Literary Culture and the Case of Romantic Historicism
The year 1819 was the "annus mirabilis" for many British Romantic writers, and the "annus terribilis" for demonstrators protesting the state of parliamentary representation. In 1819 Keats wrote what many consider his greatest poetry. This was the year of Shelley's "Prometheus Unbound", "The Cenci", and "Ode to the West Wind." Wordsworth published his most widely reviewed work, "Peter Bell", and the craze for Walter Scott's historical novels reached its zenith. Many of these writings explicitly engaged with the politics of 1819, in particular the great movement for reform that came to a head that August with an unprovoked attack on unarmed men, women, and children in St Peter's Field, Manchester, a massacre that journalists dubbed "Peterloo". But the year of Peterloo in British history is notable for more than just the volume, value, and topicality of its literature. Writing from 1819, as the author argues, was acutely aware not only of its place in history, but also of its place "as" history - a realization of a literary "spirit of the age" that resonates strongly with the current "return to history" in literary studies. Chandler explores the ties between Romantic and contemporary historicism, such as the shared tendency to seize a single dated event as both important on its own and as a "case" testing general principles. To animate these issues, Chandler offers a series of cases of built around key texts from 1819. Like the famous sonnet by Shelley from which it takes its name, this book simultaneously creates and critiques its own place in history. It sets out to be not only a crucial study of Romanticism, but also a major contribution to an understanding of historicism.
Customer Loyalty Explained by Electronic Recovery Service Quality: Implications of the Customer Relationship Re-Establishment for Consumer Electronics E-Tailers
This non-experimental, causal study examined and explored the relationships among electronic service quality, customer satisfaction, electronic recovery service quality, and customer loyalty for consumer electronics e-tailers. This study adopted quota and snowball sampling. A total of 121 participants completed the online survey. Out of seven hypotheses in this study, five were supported, whereas two were not supported. Findings indicated that electronic recovery service quality had a positive effect on customer loyalty. However, findings indicated that electronic recovery service quality had no effect on perceived value and customer satisfaction. Findings also indicated that perceived value and customer satisfaction were two significant variables that mediated the relationships between electronic service quality and customer loyalty. Moreover, this study found that electronic service quality had no direct effect on customer satisfaction, but had indirect positive effects on customer satisfaction for consumer electronics e-tailers. In terms of practical implications, consumer electronics e-tailers' managers could formulate a competitive strategy based on the modified Electronic Customer Relationship Re-Establishment model to retain current customers and to enhance customer relationship management (CRM). The limitations and recommendations for future research were also included in this study.
Modulation of cellular antioxidant defense activities by sodium arsenite in human fibroblasts
Many studies have shown that oxygen radicals can be produced during arsenic metabolism. We report here that in human fibroblasts (HFW cells) sodium arsenite exposure caused increased formation of fluorescent dichlorofluorescein (DCF) by oxidation of the nonfluorescent form. The enhanced DCF fluorescence was inhibited by a radical scavenger, butylated hydroxytoluene. The effects of sodium arsenite treatment on cellular antioxidant activities were then examined. Treatment of HFW cells with sodium arsenite resulted in a significant increase in heme oxygenase activity and ferritin level. Sodium arsenite-enhanced heme oxygenase synthesis was inhibited by co-treatment of cells with the antioxidants sodium azide and dimethyl sulfoxide. Furthermore, sodium arsenite treatment did not apparently affect glucose-6-phosphate dehydrogenase activity, but resulted in significantly increased glutathione levels and superoxide dismutase activity, slightly decreased glutathione peroxidase activity, and significantly decreased catalase activity. Sodium arsenite toxicity was partly reduced by addition of catalase to the culture medium. These results imply that arsenite can enhance oxidative stress in HFW cells.
An Improvement to Feature Selection of Random Forests on Spark
The Random Forests algorithm belongs to the class of ensemble learning methods, which are commonly used in classification problems. In this paper, we study the problem of adapting the Random Forests algorithm to learn raw data from real usage scenarios. An improvement, which is stable, strict, highly efficient, data-driven, problem-independent, and has no impact on algorithm performance, is proposed to address two practical issues of feature selection in the Random Forests algorithm. The first is to eliminate noisy features, which are irrelevant to the classification. The second is to eliminate redundant features, which are highly correlated with other features but useless. We implemented our improvement approach on Spark. Experiments were performed to evaluate our improvement, and the results show that our approach achieves good performance.
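A single-machine sketch (scikit-learn rather than Spark) of the two feature-selection steps targeted here: drop noisy features whose importance falls below that of injected random probe columns, then drop redundant features that are highly correlated with an already-kept feature. The dataset, number of probes, and correlation threshold are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n, d = X.shape

# Step 1: eliminate noisy features using random "probe" columns as an importance floor.
X_probe = np.hstack([X, rng.normal(size=(n, 5))])            # 5 pure-noise probes
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_probe, y)
probe_floor = rf.feature_importances_[d:].max()
keep = [i for i in range(d) if rf.feature_importances_[i] > probe_floor]

# Step 2: eliminate redundant features highly correlated with an already-kept feature.
keep_sorted = sorted(keep, key=lambda i: -rf.feature_importances_[i])
selected = []
for i in keep_sorted:
    if all(abs(np.corrcoef(X[:, i], X[:, j])[0, 1]) < 0.95 for j in selected):
        selected.append(i)

print(f"kept {len(selected)} of {d} features:", sorted(selected))
```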
Traffic monitoring and accident detection at intersections
Among the most important research in Intelligent Transportation Systems (ITS) is the development of systems that automatically monitor traffic flow at intersections. Rather than being based on global flow analysis as is currently done, these automatic monitoring systems should be based on local analysis of the behavior of each vehicle at the intersection. The systems should be able to identify each vehicle and track its behavior, and to recognize situations or events that are likely to result from a chain of such behavior. The most difficult problem associated with vehicle tracking is the occlusion effect among vehicles. In order to solve this problem we have developed an algorithm, referred to as spatio-temporal Markov random field (MRF), for traffic images at intersections. This algorithm models a tracking problem by determining the state of each pixel in an image and its transit, and how such states transit along both the image axes as well as the time axis. Vehicles, of course, are of various shapes and they move in random fashion, thereby leading to full or partial occlusion at intersections. Despite these complications, our algorithm is sufficiently robust to segment and track occluded vehicles at a high success rate of 93%–96%. This success has led to the development of an extendable robust event recognition system based on the hidden Markov model (HMM). The system learns various event behavior patterns of each vehicle in the HMM chains and then, using the output from the tracking system, identifies current event chains. The current system can recognize bumping, passing, and jamming. However, by including other event patterns in the training set, the system can be extended to recognize those other events, e.g., illegal U-turns or reckless driving. We have implemented this system, evaluated it using the tracking results, and demonstrated its effectiveness.
Signaling in Equity Crowdfunding
This paper presents an empirical examination of which start-up signals will induce small investors to commit financial resources in an equity crowdfunding context. We examine the impact of firms’ financial roadmaps (e.g., preplanned exit strategies such as IPOs or acquisitions), external certification (awards, government grants and patents), internal governance (such as board structure), and risk factors (such as amount of equity offered and the presence of disclaimers) on fundraising success. Our data highlight the importance of financial roadmaps and risk factors, as well as internal governance, for successful equity crowdfunding. External certification, by contrast, has little or no impact on success. We also discuss the implications for successful policy design. JEL Classification: G21, G24, L26
Use of indigenously designed nasal bubble continuous positive airway pressure (NB-CPAP) in neonates with respiratory distress: experience from a military hospital
Objective: To study the efficacy and safety of an indigenously designed low cost nasal bubble continuous positive airway pressure (NB-CPAP) in neonates admitted with respiratory distress. Study Design: A descriptive study. Place and Duration of Study: Combined Military Hospital (CMH), Peshawar from Jan 2014 to May 2014. Material and Methods: Fifty neonates who developed respiratory distress within 6 hours of life were placed on an indigenous NB-CPAP device (costing 220 PKR) and evaluated for gestational age, weight, indications, duration on NB-CPAP, pre-defined outcomes and complications. Results: A total of 50 consecutive patients with respiratory distress were placed on NB-CPAP. Male to Female ratio was 2.3:1. Mean weight was 2365.85 ± 704 grams and mean gestational age was 35.41 ± 2.9 weeks. Indications for applying NB-CPAP were transient tachypnea of the newborn (TTN, 52%) and respiratory distress syndrome (RDS, 44%). Most common complications were abdominal distension (15.6%) and pulmonary hemorrhage (6%). Out of 50 infants placed on NB-CPAP, 35 (70%) were managed on NB-CPAP alone while 15 (30%) needed mechanical ventilation following a trial of NB-CPAP. Conclusion: In 70% of babies invasive mechanical ventilation was avoided using NB-CPAP.
ConvNets with Smooth Adaptive Activation Functions for Regression
Within Neural Networks (NNs), the parameters of Adaptive Activation Functions (AAFs) control the shapes of the activation functions. These parameters are trained along with the other parameters in the NN. AAFs have improved the performance of Convolutional Neural Networks (CNNs) in multiple classification tasks. In this paper, we propose and apply AAFs on CNNs for regression tasks. We argue that applying AAFs in the regression (second-to-last) layer of an NN can significantly decrease the bias of the regression NN. However, using existing AAFs may lead to overfitting. To address this problem, we propose a Smooth Adaptive Activation Function (SAAF) with a piecewise polynomial form, which can approximate any continuous function to any desired degree of accuracy while having a bounded Lipschitz constant for given bounded model parameters. As a result, NNs with SAAFs can avoid overfitting by simply regularizing the model parameters. We empirically evaluated CNNs with SAAFs and achieved state-of-the-art results on age and pose estimation datasets.
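As a rough illustration of a learnable activation in a regression head, here is a minimal PyTorch sketch of a piecewise-linear adaptive activation built from shifted ReLU basis functions. This is an APL-style construction, not the paper's exact SAAF parameterization, but it shows the property exploited above: regularizing the learnable coefficients directly limits how much the slope can change between segments.

```python
import torch
import torch.nn as nn

class PiecewiseAdaptiveActivation(nn.Module):
    """Learnable piecewise-linear activation:
    f(x) = x + sum_k a_k * relu(x - b_k).
    The breakpoints b_k are fixed on a grid; only the coefficients a_k are
    learned, so weight decay on them bounds the change in slope per segment."""

    def __init__(self, num_breaks=5, low=-2.0, high=2.0):
        super().__init__()
        self.register_buffer("breaks", torch.linspace(low, high, num_breaks))
        self.coeffs = nn.Parameter(torch.zeros(num_breaks))  # starts as identity

    def forward(self, x):
        hinge = torch.relu(x.unsqueeze(-1) - self.breaks)    # (..., K) hinge basis
        return x + (hinge * self.coeffs).sum(dim=-1)

# Example: use it as the activation of the second-to-last (regression) layer.
head = nn.Sequential(nn.Linear(64, 32), PiecewiseAdaptiveActivation(), nn.Linear(32, 1))
out = head(torch.randn(8, 64))   # shape (8, 1)
```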
Neck infections.
Understanding the fascial planes and potential spaces within the neck is integral to determining routes of spread and is mandatory when surgical intervention is necessary. Imaging is critical in the stable patient to determine the location and severity of infection, as well as to provide a guide when surgery is indicated. Pharmacologic treatment initially includes empiric broad-spectrum antibiotics against gram-positive, gram-negative, and anaerobic bacteria, later refined based on culture and sensitivity results. Surgical intervention is reserved for complicated or unstable patients, or those who are unresponsive to medical therapy.
Patient Prognostic Score and Associations With Survival Improvement Offered by Radiotherapy After Breast-Conserving Surgery for Ductal Carcinoma In Situ: A Population-Based Longitudinal Cohort Study.
PURPOSE Radiotherapy (RT) after breast-conserving surgery (BCS) is a standard treatment option for the management of ductal carcinoma in situ (DCIS). We sought to determine the survival benefit of RT after BCS on the basis of risk factors for local recurrence. PATIENTS AND METHODS A retrospective longitudinal cohort study was performed to identify patients with DCIS diagnosed between 1988 and 2007 and treated with BCS by using SEER data. Patients were divided into the following two groups: BCS+RT (RT group) and BCS alone (non-RT group). We used a patient prognostic scoring model to stratify patients on the basis of risk of local recurrence. We performed a Cox proportional hazards model with propensity score weighting to evaluate breast cancer mortality between the two groups. RESULTS We identified 32,144 eligible patients with DCIS, 20,329 (63%) in the RT group and 11,815 (37%) in the non-RT group. Overall, 304 breast cancer-specific deaths occurred over a median follow-up of 96 months, with a cumulative incidence of breast cancer mortality at 10 years in the weighted cohorts of 1.8% (RT group) and 2.1% (non-RT group; hazard ratio, 0.73; 95% CI, 0.62 to 0.88). Significant improvements in survival in the RT group compared with the non-RT group were only observed in patients with higher nuclear grade, younger age, and larger tumor size. The magnitude of the survival difference with RT was significantly correlated with prognostic score (P < .001). CONCLUSION In this population-based study, the patient prognostic score for DCIS is associated with the magnitude of improvement in survival offered by RT after BCS, suggesting that decisions for RT could be tailored on the basis of patient factors, tumor biology, and the prognostic score.
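A minimal sketch of the propensity-score-weighting step described above, on synthetic data; the covariate names, the logistic propensity model, and the sample itself are assumptions for illustration, not the study's specification. The resulting inverse-probability-of-treatment weights would then be supplied to a weighted Cox proportional hazards fit to compare breast cancer mortality between the RT and non-RT groups.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Synthetic covariates standing in for prognostic factors (age, grade, size).
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "grade": rng.integers(1, 4, n),
    "size_mm": rng.normal(15, 5, n),
    "rt": rng.integers(0, 2, n),      # 1 = BCS + RT, 0 = BCS alone
})

# Propensity score: estimated probability of receiving RT given covariates.
features = df[["age", "grade", "size_mm"]]
ps = LogisticRegression().fit(features, df["rt"]).predict_proba(features)[:, 1]

# Inverse-probability-of-treatment weights (IPTW): treated observations get
# 1/ps, untreated get 1/(1 - ps); these weights balance the two groups and
# would then be passed to a weighted Cox proportional hazards model.
df["iptw"] = np.where(df["rt"] == 1, 1.0 / ps, 1.0 / (1.0 - ps))
print(df["iptw"].describe())
```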
Constant Communication ORAM with Small Blocksize
There have been several attempts recently at using homomorphic encryption to increase the efficiency of Oblivious RAM protocols. One of the most successful has been Onion ORAM, which achieves O(1) communication overhead with polylogarithmic server computation. However, it has two drawbacks. It requires a large block size of B = Ω(log^6 N) with large constants. Moreover, while it only needs polylogarithmic computation complexity, that computation consists mostly of expensive homomorphic multiplications. In this work, we address these problems and reduce the required block size to Ω(log^4 N). We remove most of the homomorphic multiplications while maintaining O(1) communication complexity. Our idea is to replace their homomorphic eviction routine with a new, much cheaper permute-and-merge eviction which eliminates homomorphic multiplications and maintains the same level of security. In turn, this removes the need for the layered encryption that Onion ORAM relies on and reduces both the minimum block size and the server computation.
Limiting factors for maximum oxygen uptake and determinants of endurance performance.
In the exercising human, maximal oxygen uptake (VO2max) is limited by the ability of the cardiorespiratory system to deliver oxygen to the exercising muscles. This is shown by three major lines of evidence: 1) when oxygen delivery is altered (by blood doping, hypoxia, or beta-blockade), VO2max changes accordingly; 2) the increase in VO2max with training results primarily from an increase in maximal cardiac output (not an increase in the a-v O2 difference); and 3) when a small muscle mass is overperfused during exercise, it has an extremely high capacity for consuming oxygen. Thus, O2 delivery, not skeletal muscle O2 extraction, is viewed as the primary limiting factor for VO2max in exercising humans. Metabolic adaptations in skeletal muscle are, however, critical for improving submaximal endurance performance. Endurance training causes an increase in mitochondrial enzyme activities, which improves performance by enhancing fat oxidation and decreasing lactic acid accumulation at a given VO2. VO2max is an important variable that sets the upper limit for endurance performance (an athlete cannot operate above 100% of VO2max for extended periods). Running economy and fractional utilization of VO2max also affect endurance performance. The speed at the lactate threshold (LT) integrates all three of these variables and is the best physiological predictor of distance running performance.
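The delivery-limitation argument follows the Fick principle, VO2 = cardiac output × arteriovenous O2 difference. A small worked example with illustrative (not measured) values shows how a larger maximal cardiac output raises VO2max even when O2 extraction is already near its ceiling.

```python
# Fick principle: VO2 (L/min) = cardiac output (L/min)
#                               * a-v O2 difference (mL O2 per 100 mL blood) / 100
def vo2(cardiac_output_l_min, av_o2_diff_ml_per_100ml):
    return cardiac_output_l_min * av_o2_diff_ml_per_100ml / 100.0

# Illustrative maximal-exercise values: extraction is similar in both cases,
# but the trained heart pumps considerably more blood per minute.
print(vo2(20, 16))   # 3.2 L/min
print(vo2(30, 17))   # 5.1 L/min -> higher VO2max driven mainly by cardiac output
```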
eQTL Mapping Using RNA-seq Data.
As RNA-seq is replacing gene expression microarrays to assess genome-wide transcription abundance, gene expression Quantitative Trait Locus (eQTL) studies using RNA-seq have emerged. RNA-seq delivers two novel features that are important for eQTL studies. First, it provides information on allele-specific expression (ASE), which is not available from gene expression microarrays. Second, it generates unprecedentedly rich data to study RNA-isoform expression. In this paper, we review current methods for eQTL mapping using ASE and discuss some future directions. We also review existing works that use RNA-seq data to study RNA-isoform expression and we discuss the gaps between these works and isoform-specific eQTL mapping.
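A minimal sketch of a single-gene eQTL association test under an additive genotype coding (0/1/2 copies of the alternative allele); the genotype and expression values are simulated for illustration, and real RNA-seq analyses would additionally model allele-specific read counts and covariates as discussed above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200
genotype = rng.integers(0, 3, n)                  # 0/1/2 alternative-allele copies
# Simulated (log-scale) expression with a modest additive genotype effect.
expression = 0.4 * genotype + rng.normal(0, 1, n)

# Additive-model eQTL test: regress expression on genotype dosage.
result = stats.linregress(genotype, expression)
print(f"effect size (beta) = {result.slope:.3f}, p-value = {result.pvalue:.2e}")
```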
Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition
Recently, the connectionist temporal classification (CTC) model, coupled with recurrent (RNN) or convolutional neural networks (CNN), has made it easier to train speech recognition systems in an end-to-end fashion. However, in real-valued models, time-frame components such as mel-filter-bank energies and the cepstral coefficients obtained from them, together with their first- and second-order derivatives, are processed as individual elements, while a natural alternative is to process such components as composed entities. We propose to group such elements in the form of quaternions and to process these quaternions using the established quaternion algebra. Quaternion numbers and quaternion neural networks have shown their efficiency in processing multidimensional inputs as entities, encoding internal dependencies, and solving many tasks with fewer learning parameters than real-valued models. This paper proposes to integrate multiple feature views into a quaternion-valued convolutional neural network (QCNN), to be used for sequence-to-sequence mapping with the CTC model. Promising results are reported using simple QCNNs in phoneme recognition experiments with the TIMIT corpus. More precisely, QCNNs obtain a lower phoneme error rate (PER) with fewer learning parameters than a competing model based on real-valued CNNs.
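A minimal sketch of the grouping idea: four related components of an acoustic time frame (here an illustrative feature value and three derived views of it, chosen only for demonstration) are packed into one quaternion, and a quaternion-valued filter mixes them jointly through the Hamilton product, which is the basic operation a quaternion convolution applies with learned weights.

```python
import numpy as np

def hamilton_product(q1, q2):
    """Hamilton product of two quaternions given as (r, i, j, k) arrays."""
    r1, i1, j1, k1 = q1
    r2, i2, j2, k2 = q2
    return np.array([
        r1*r2 - i1*i2 - j1*j2 - k1*k2,
        r1*i2 + i1*r2 + j1*k2 - k1*j2,
        r1*j2 - i1*k2 + j1*r2 + k1*i2,
        r1*k2 + i1*j2 - j1*i2 + k1*r2,
    ])

# One acoustic "pixel": a feature value and three related views grouped into
# a single quaternion (values are illustrative only).
x = np.array([0.8, 0.1, -0.05, 0.02])
# One quaternion-valued filter weight (learned in an actual QCNN).
w = np.array([0.5, -0.2, 0.1, 0.3])

# The four views are transformed jointly rather than as independent scalars.
print(hamilton_product(w, x))
```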
The effects of Social Networking Site (SNS) use on college students' friendship and well-being
The present study was conducted with two goals in mind: (1) to examine the influence of different types of SNS use on users' well-being, and (2) to examine the mediating roles of online self-disclosure and friendship quality in the relationship between types of SNS use and well-being. Participants were from two large 4-year undergraduate universities in Southwestern China. The study was conducted during the Spring semester of 2013, using advertisements that described the nature of the research and indicated that compensation for participation was ¥10 (about $1.50 U.S.). Of the 402 students approached, 337 completed the survey (a response rate of 83.83%). Structural equation modeling showed that "social" SNS use was positively related to users' well-being, whereas "entertainment" SNS use was not. In addition, online self-disclosure was a significant predictor of users' friendship quality. However, there was an inverse relationship between "social" SNS use and online self-disclosure, and no relationship between friendship quality based on SNS use and well-being. Generalizations of our findings should therefore be made cautiously. The cross-sectional design and self-reported SNS usage are also limitations. Experimental and longitudinal studies should be conducted to provide stronger evidence of causal relations among the variables examined in this study.
Multilingual Open Information Extraction
Open Information Extraction (OIE) is a recent unsupervised strategy for extracting large numbers of basic propositions (verb-based triples) from massive text corpora, and it scales to Web-size document collections. We propose a multilingual rule-based OIE method that takes as input dependency parses in the CoNLL-X format, identifies argument structures within the dependency parses, and extracts a set of basic propositions from each argument structure. Our method requires no training data and, according to experimental studies, obtains higher recall and higher precision than existing approaches that rely on training data. Experiments were performed in three languages: English, Portuguese, and Spanish.
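A minimal sketch of the verb-centred extraction idea on a toy dependency parse; the token fields follow a simplified CoNLL-X layout, and the sentence and relation labels are hand-written for illustration rather than taken from the paper's actual rule set. For each predicate, its subject and object dependents are collected into a basic proposition.

```python
# Each token: (id, form, head_id, deprel) -- a simplified CoNLL-X row.
parse = [
    (1, "Marie", 2, "nsubj"),
    (2, "founded", 0, "root"),
    (3, "a", 4, "det"),
    (4, "laboratory", 2, "dobj"),
    (5, "in", 2, "prep"),
    (6, "Paris", 5, "pobj"),
]

def extract_triples(tokens):
    """Tiny verb-centred rule: one (subject, verb, object) triple per predicate."""
    triples = []
    for tid, form, head, rel in tokens:
        if rel == "root":   # treat the root verb as the predicate
            subj = next((t[1] for t in tokens if t[2] == tid and t[3] == "nsubj"), None)
            obj = next((t[1] for t in tokens if t[2] == tid and t[3] == "dobj"), None)
            if subj and obj:
                triples.append((subj, form, obj))
    return triples

print(extract_triples(parse))   # [('Marie', 'founded', 'laboratory')]
```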