Columns: title (string, length 8-300); abstract (string, length 0-10k)
Review of mass spectrometry-based metabolomics in cancer research.
Metabolomics, the systematic investigation of all metabolites present within a biologic system, is used in biomarker development for many human diseases, including cancer. In this review, we investigate the current role of mass spectrometry-based metabolomics in cancer research. A literature review was carried out within the databases PubMed, Embase, and Web of Knowledge. We included 106 studies reporting on 21 different types of cancer in 7 different sample types. Metabolomics in cancer research is most often used for case-control comparisons. Secondary applications include translational areas, such as patient prognosis, therapy control and tumor classification, or grading. Metabolomics is at a developmental stage with respect to epidemiology, with the majority of studies including less than 100 patients. Standardization is required especially concerning sample preparation and data analysis. In the second part of this review, we reconstructed a metabolic network of patients with cancer by quantitatively extracting all reports of altered metabolites: Alterations in energy metabolism, membrane, and fatty acid synthesis emerged, with tryptophan levels changed most frequently in various cancers. Metabolomics has the potential to evolve into a standard tool for future applications in epidemiology and translational cancer research, but further, large-scale studies including prospective validation are needed.
The Impact of Societal Cultural Values and Individual Social Beliefs on the Perceived Effectiveness of Managerial Influence Strategies: A Meso Approach
Chinese University of Hong Kong, Singapore; Nanyang Business School, Singapore; Loyola University Chicago, USA; State University at Albany, USA; I-Shou University, Taiwan; Xavier Labour Relations Institute, Jamshedpur, India; New Mexico State University, USA; Vrije Universiteit Amsterdam, the Netherlands; Bogazici University Istanbul, Turkey; Lacassagne Université de Bourgogne, France; Waseda University, Japan; Sasin Graduate Institute of Business Administration of Chulalongkorn University, Thailand
PVC removal from mixed plastics by triboelectrostatic separation.
The ever-increasing oil price and the constant growth in the generation of waste plastics stimulate research on material separation for recycling of waste plastics. At present, most waste plastics cause serious environmental problems because they are disposed of by reclamation and incineration. In particular, polyvinyl chloride (PVC) among waste plastics generates hazardous HCl gas, chlorine-containing dioxins, and other pollutants, which cause air pollution, shorten the life of incinerators, and make recycling of the other plastics difficult. Therefore, we designed a bench-scale triboelectrostatic separator for PVC removal from mixed plastics (polyvinyl chloride/polyethylene terephthalate) and carried out material separation tests. In triboelectrostatic separation, PVC and PET particles are charged negatively and positively, respectively, owing to the difference in the work functions of the plastics in the fluidized-bed tribocharger, and are separated by a splitter placed in an opposing electric field. In this study, the charging efficiency of PVC and PET was strongly dependent on the tribocharger material (polypropylene), relative humidity (below 30%), air velocity (over 10 m/s), and mixture ratio (PET:PVC=1:1). At the optimum conditions (electrode potential of 20 kV and splitter position of -2 cm), PVC rejection and PET recovery in the PET product were 99.60% and 98.10%, respectively, and the reproducibility of the optimal test was very good (+/-1%). In addition, by changing the splitter position, we developed a technique to recover high-purity PET (over 99.99%), although PET recovery decreases gradually.
Transport Layer Security
Transport Layer Security is the standard, widely deployed protocol for securing client-server communications over the Internet. TLS is designed to prevent eavesdropping, tampering, and message forgery for client-server applications. Here, the author looks at the collection of standards that make up TLS, including its history, protocol, and future.
INITIAL DEGRADATION OF INDUSTRIAL SILICON SOLAR CELLS IN SOLAR PANELS
In the lifetime of a solar panel, efficiency degrades continually because panel components age during outdoor exposure (OE). This degradation is mainly due to humidity, temperature, system bias effects and solar irradiation. The solar cell itself may suffer different degradation mechanisms such as light-, temperature- and potential-induced degradation (LID, TID and PID [1]). The focus of this paper is the initial degradation of solar cells within the first hours of operation, which is generally associated with LID. In this work, differences in the degradation mechanism for multi- and mono-crystalline cells are investigated for cells and panels based on p-type crystalline silicon. The quality of the silicon material is essential, as shown in a material comparison. Low and high base resistivities are investigated, and different silicon purities ranging from standard feedstock material of different qualities to material based on upgraded metallurgical (UMG) silicon are compared. Interestingly, no clear difference between the LID of industrial multi- and mono-crystalline cells was found. However, UMG cells show a higher degradation rate, partly due to a mechanism identified as temperature induced degradation (TID) that occurs in parallel to LID.
Youth Top Problems: using idiographic, consumer-guided assessment to identify treatment needs and to track change during psychotherapy.
OBJECTIVE To complement standardized measurement of symptoms, we developed and tested an efficient strategy for identifying (before treatment) and repeatedly assessing (during treatment) the problems identified as most important by caregivers and youths in psychotherapy. METHOD A total of 178 outpatient-referred youths, 7-13 years of age, and their caregivers separately identified the 3 problems of greatest concern to them at pretreatment and then rated the severity of those problems weekly during treatment. The Top Problems measure thus formed was evaluated for (a) whether it added to the information obtained through empirically derived standardized measures (e.g., the Child Behavior Checklist [CBCL; Achenbach & Rescorla, 2001] and the Youth Self-Report [YSR; Achenbach & Rescorla, 2001]) and (b) whether it met conventional psychometric standards. RESULTS The problems identified were significant and clinically relevant; most matched CBCL/YSR items while adding specificity. The top problems also complemented the information yield of the CBCL/YSR; for example, for 41% of caregivers and 79% of youths, the identified top problems did not correspond to any items of any narrowband scales in the clinical range. Evidence on test-retest reliability, convergent and discriminant validity, sensitivity to change, slope reliability, and the association of Top Problems slopes with standardized measure slopes supported the psychometric strength of the measure. CONCLUSIONS The Top Problems measure appears to be a psychometrically sound, client-guided approach that complements empirically derived standardized assessment; the approach can help focus attention and treatment planning on the problems that youths and caregivers consider most important and can generate evidence on trajectories of change in those problems during treatment.
Point-of-Interest Recommendation Using Heterogeneous Link Prediction
Venue recommendation in location-based social networks is among the more important tasks for enhancing user participation on the social network. Despite its importance, earlier research has shown that accurate recommendation of appropriate venues for users is a difficult task, especially given the highly sparse nature of user check-in information. In this paper, we show how a comprehensive set of user- and venue-related information can be methodically incorporated into a heterogeneous graph representation, based on which the problem of venue recommendation can be efficiently formulated as an instance of the heterogeneous link prediction problem on the graph. We systematically compare our proposed approach with several strong baselines and show that our method, which is computationally less intensive than the baselines, achieves improved performance in terms of precision and f-measure.
Toward unified DevOps model
The DevOps community advocates collaboration between development and operations staff during software deployment. However, this collaboration may cause a conceptual deficit. This paper proposes a Unified DevOps Model (UDOM) to overcome the conceptual deficit. First, the origin of the conceptual deficit is discussed. Second, the UDOM model is introduced, which includes three sub-models: an application and data model, a workflow execution model and an infrastructure model. The UDOM model can help to reduce deployment time, mitigate risk, satisfy customer requirements, and improve productivity. Finally, this paper can serve as a roadmap for standardizing DevOps terminologies, concepts, patterns, cultures, and tools.
Ethno-Cultural Minorities and Women's Rights
This article examines the foundation of a conciliatory proposal that balances the claims of equality feminism with a notion of inclusive citizenship that considers the multicultural jurisdictions approach. We first review how equality feminism and multiculturalism agree on a critique of the traditional liberal notion of citizenship. Second, we explore the possibility of building a conception that allows overcoming that ideal of citizenship. Finally, the article critically analyses the thesis of multicultural jurisdictions and concludes that it alone is not enough and must be embedded within an adequate conception of deliberative democracy.
Effects of mood on the speed of conscious perception: behavioural and electrophysiological evidence.
When a visual stimulus is quickly followed in time by a second visual stimulus, we are normally unable to perceive it consciously. This study examined how affective states influence this temporal limit of conscious perception. Using a masked visual perception task, we found that the temporal threshold for access to consciousness is decreased in negative mood and increased in positive mood. To identify the brain mechanisms associated with this effect, we analysed brain oscillations. The mood-induced differences in perception performance were associated with differences in ongoing alpha power (around 10 Hz) before stimulus presentation. Additionally, after stimulus presentation, the better performance during negative mood was associated with enhanced global coordination of neuronal activity of theta oscillations (around 5 Hz). Thus, the effect of mood on the speed of conscious perception seems to depend on changes in oscillatory brain activity, rendering the cognitive system more or less sensitive to incoming stimuli.
Downstream processing of stevioside and its potential applications.
Stevioside is a natural sweetener extracted from leaves of Stevia rebaudiana Bertoni, which is commercially produced by conventional (chemical/physical) processes. This article gives an overview of the stevioside structure, various analysis techniques, the new technologies required and the advances achieved in recent years. An enzymatic process is established by which the maximum efficacy and benefit of the process can be achieved. The efficiency of the enzymatic process is quite comparable to that of other physical and chemical methods. Finally, we believe that in the future, enzyme-based extraction will ensure more cost-effective availability of stevioside, thus assisting in the development of more food-based applications.
Neural Attentional Rating Regression with Review-level Explanations
Review information plays a dominant role in users' online purchasing decisions in e-commerce. However, the usefulness of reviews varies. We argue that less-useful reviews hurt a model's performance and are also less meaningful for users' reference. While some existing models utilize reviews to improve the performance of recommender systems, few of them consider the usefulness of reviews for recommendation quality. In this paper, we introduce a novel attention mechanism to explore the usefulness of reviews, and propose a Neural Attentional Regression model with Review-level Explanations (NARRE) for recommendation. Specifically, NARRE can not only predict precise ratings but also learn the usefulness of each review simultaneously. Therefore, the highly useful reviews are obtained, which provide review-level explanations to help users make better and faster decisions. Extensive experiments on benchmark Amazon and Yelp datasets from different domains show that the proposed NARRE model consistently outperforms state-of-the-art recommendation approaches, including PMF, NMF, SVD++, HFT, and DeepCoNN, in terms of rating prediction, owing to the proposed attention model that takes review usefulness into consideration. Furthermore, the selected reviews are shown to be effective when taking existing review-usefulness ratings in the system as ground truth. Besides, crowd-sourcing-based evaluations reveal that, in most cases, NARRE achieves equal or even better performance than the system's usefulness rating method in selecting reviews. It is also flexible enough to offer great help in the dominant cases in real e-commerce scenarios where ratings on review usefulness are not available in the system.
Attend to You: Personalized Image Captioning with Context Sequence Memory Networks
We address personalization issues of image captioning, which have not been discussed yet in previous research. For a query image, we aim to generate a descriptive sentence, accounting for prior knowledge such as the user's active vocabularies in previous documents. As applications of personalized image captioning, we tackle two post automation tasks, hashtag prediction and post generation, on our newly collected Instagram dataset, consisting of 1.1M posts from 6.3K users. We propose a novel captioning model named Context Sequence Memory Network (CSMN). Its unique updates over previous memory network models include (i) exploiting memory as a repository for multiple types of context information, (ii) appending previously generated words into memory to capture long-term information without suffering from the vanishing gradient problem, and (iii) adopting a CNN memory structure to jointly represent nearby ordered memory slots for better context understanding. With quantitative evaluation and user studies via Amazon Mechanical Turk, we show the effectiveness of the three novel features of CSMN and its performance improvement for personalized image captioning over state-of-the-art captioning models.
Using TMS to study the role of the articulatory motor system in speech perception
Background: The ability to communicate using speech is a remarkable skill, which requires precise coordination of articulatory movements and decoding of complex acoustic signals. According to the traditional view, speech production and perception rely on motor and auditory brain areas, respectively. However, there is growing evidence that auditory-motor circuits support both speech production and perception. Aims: In this article we provide a review of how transcranial magnetic stimulation (TMS) has been used to investigate the excitability of the motor system during listening to speech and the contribution of the motor system to performance in various speech perception tasks. We also discuss how TMS can be used in combination with brain-imaging techniques to study interactions between motor and auditory systems during speech perception. Main contribution: TMS has proven to be a powerful tool to investigate the role of the articulatory motor system in speech perception. Conclusions: TMS studies have provided support for the view that the motor structures that control the movements of the articulators contribute not only to speech production but also to speech perception.
Arthroscopic pancapsular plication for multidirectional shoulder instability in overhead athletes.
Treating shoulder multidirectional instability with an open stabilization procedure has been reported to have good results. However, few studies exist of arthroscopic plication, especially in overhead athletes. The purpose of this study was to evaluate the clinical outcomes of arthroscopic pancapsular plication for multidirectional instability in overhead athletes. Twenty-three athletes with symptomatic multidirectional instability were treated with arthroscopic pancapsular plication and evaluated at a mean follow-up of 36.3 months (range, 24-61 months). Mean patient age was 23.3 years (range, 19-33 years). Functional outcomes were evaluated with the American Shoulder and Elbow Surgeons (ASES) score, Constant shoulder score, and Rowe instability score. The degree of pain and range of motion were also recorded. All postoperative functional scores were rated good to excellent, with an average ASES score of 88.4 (range, 82-95), average Constant shoulder score of 88.1 (range, 81-100), and average Rowe instability score of 86.7 (range, 80-100). Five patients returned to the same level of competitive sports, and 18 returned to a limited level. All patients were satisfied with the stability postoperatively. No significant change was observed in postoperative range of motion, but patients who returned to a limited level of sports had lower functional scores and more pain than did those who fully returned to sports. Arthroscopic pancapsular plication for treating multidirectional instability in overhead athletes can provide good stability. However, the low rate of return to a full level of overhead sports is a problem. Further evaluation of the benefits of this procedure for overhead athletes with symptomatic multidirectional instability is needed.
Learning to Write with Cooperative Discriminators
Despite their local fluency, long-form text generated from RNNs is often generic, repetitive, and even self-contradictory. We propose a unified learning framework that collectively addresses all the above issues by composing a committee of discriminators that can guide a base RNN generator towards more globally coherent generations. More concretely, discriminators each specialize in a different principle of communication, such as Grice’s maxims, and are collectively combined with the base RNN generator through a composite decoding objective. Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations.
Question Answering as Global Reasoning Over Semantic Abstractions
We propose a novel method for exploiting the semantic structure of text to answer multiple-choice questions. The approach is especially suitable for domains that require reasoning over a diverse set of linguistic constructs but have limited training data. To address these challenges, we present the first system, to the best of our knowledge, that reasons over a wide range of semantic abstractions of the text, which are derived using off-the-shelf, general-purpose, pre-trained natural language modules such as semantic role labelers, coreference resolvers, and dependency parsers. Representing multiple abstractions as a family of graphs, we translate question answering (QA) into a search for an optimal subgraph that satisfies certain global and local properties. This formulation generalizes several prior structured QA systems. Our system, SEMANTICILP, demonstrates strong performance on two domains simultaneously. In particular, on a collection of challenging science QA datasets, it outperforms various state-of-the-art approaches, including neural models, broad coverage information retrieval, and specialized techniques using structured knowledge bases, by 2%-6%.
Integrated Communication and Control Systems: Part I—Analysis
Computer networking is a reliable and efficient means for communications between disparate and distributed components in complex dynamical processes like advanced aircraft, spacecraft, and autonomous manufacturing plants. The role of Integrated Communication and Control Systems (ICCS) is to coordinate and perform interrelated functions, ranging from real-time multi-loop control to information display and routine maintenance support. In ICCS, a feedback control loop is closed via the common communication channel which multiplexes digital data from the sensor to the controller and from the controller to the actuator along with the data traffic from other loops and management functions. Due to the asynchronous time-division multiplexing of the network protocol, time-varying and possibly stochastic delays are introduced in the control system, which degrade the system dynamic performance and are a source of potential instability. The paper is divided into two parts. In the first part, the delayed control system is represented by a finite-dimensional, time-varying, discrete-time model which is less complex than the existing continuous-time models for time-varying delays; this approach allows for simpler schemes for analysis and simulation of ICCS. The second part of the paper addresses ICCS design considerations and presents simulation results for certain operational scenarios of ICCS.
Development of 24 GHz rectennas for Fixed Wireless Access
We need electricity to use wireless information systems. If we can reduce the number of batteries or electrical wires by using microwave power transmission (MPT) technology, the result is a greener communication system. We at Kyoto University, together with NTT, Japan, propose a Fixed Wireless Access (FWA) system powered by MPT. In this paper, we mainly present development results for 24 GHz rectennas (rectifying antennas) for FWA. We developed several types of rectennas. Finally, we achieved an RF-DC conversion efficiency of 65% using an output filter designed with harmonic-balance analysis.
Cloud Computing: Pros and Cons for Computer Forensic Investigations
Cloud computing is a relatively new concept that offers the potential to deliver scalable, elastic services to many. The notion of pay-per-use is attractive, and in the current global recession-hit economy it offers an economic solution to an organization's IT needs. Computer forensics is a relatively new discipline born out of the increasing use of computing and digital storage devices in criminal acts (both traditional and hi-tech). Computer forensic practices have been around for several decades, and early applications of their use can be charted back to law enforcement and military investigations some 30 years ago. In the last decade computer forensics has developed in terms of procedures, practices and tool support to serve the law enforcement community. However, it now faces possibly its greatest challenges in dealing with cloud computing. Through this paper we explore these challenges and suggest some possible solutions.
From strategy to action: how top managers' support increases middle managers' commitment to innovation implementation in health care organizations.
BACKGROUND Evidence suggests that top managers' support influences middle managers' commitment to innovation implementation. What remains unclear is how top managers' support influences middle managers' commitment. Results may be used to improve dismal rates of innovation implementation. METHODS We used a mixed-method sequential design. We surveyed (n = 120) and interviewed (n = 16) middle managers implementing an innovation intended to reduce health disparities in 120 U.S. health centers to assess whether top managers' support directly influences middle managers' commitment; by allocating implementation policies and practices; or by moderating the influence of implementation policies and practices on middle managers' commitment. For quantitative analyses, multivariable regression assessed direct and moderated effects; a mediation model assessed mediating effects. We used template analysis to assess qualitative data. FINDINGS We found support for each hypothesized relationship: Results suggest that top managers increase middle managers' commitment by directly conveying to middle managers that innovation implementation is an organizational priority (β = 0.37, p = .09); allocating implementation policies and practices including performance reviews, human resources, training, and funding (bootstrapped estimate for performance reviews = 0.09; 95% confidence interval [0.03, 0.17]); and encouraging middle managers to leverage performance reviews and human resources to achieve innovation implementation. PRACTICE IMPLICATIONS Top managers can demonstrate their support directly by conveying to middle managers that an initiative is an organizational priority, allocating implementation policies and practices such as human resources and funding to facilitate innovation implementation, and convincing middle managers that innovation implementation is possible using available implementation policies and practices. Middle managers may maximize the influence of top managers' support on their commitment by communicating with top managers about what kind of support would be most effective in increasing their commitment to innovation implementation.
Standardization and Discretion: Does the Environmental Standard ISO 14001 Lead to Performance Benefits?
This study sought to determine whether the environmental management standard ISO 14001 helps organizations reduce the negative impact their business activities may have on the environment and, as a result, also improves their business performance. Forty organizations participated in the study and described how they implement ISO 14001 requirements. They also reported how the standard impacts their environmental and business performance. The results show that if ISO 14001 requirements become part of the organization's daily practices, then standardization of the organization's handling of environmental issues follows, leading, consequently, to better organizational environmental performance. In addition, standardization augments its effect on organizational environmental performance through its positive impact on employee discretion. Allowing employees discretion further improves environmental performance. We saw that discretion partially mediates the effect of standardization on environmental performance. Analysis of survey and financial data did not reveal any support for the hypothesis that achieving improvement in environmental performance as a result of ISO 14001 implementation leads to better business performance; on the other hand, we saw that business performance was not harmed.
A recommender system using GA K-means clustering in an online shopping market
The Internet is emerging as a new marketing channel, so understanding the characteristics of online customers' needs and expectations is considered a prerequisite for activating the consumer-oriented electronic commerce market. In this study, we propose a novel clustering algorithm based on genetic algorithms (GAs) to effectively segment the online shopping market. In general, GAs are believed to be effective on NP-complete global optimization problems, and they can provide good near-optimal solutions in reasonable time. Thus, we believe that a clustering technique with GA can provide a way of finding the relevant clusters more effectively. The research in this paper applied K-means clustering whose initial seeds are optimized by GA, which is called GA K-means, to a real-world online shopping market segmentation case. In this study, we compared the results of GA K-means to those of a simple K-means algorithm and self-organizing maps (SOM). The results showed that GA K-means clustering may improve segmentation performance in comparison to other typical clustering algorithms. In addition, our study validated the usefulness of the proposed model as a preprocessing tool for recommendation systems.
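To make the GA-seeded K-means idea concrete, here is a minimal sketch (not the authors' implementation): candidate seed sets are evolved by a toy genetic algorithm whose fitness is the within-cluster sum of squares of a K-means run initialized from those seeds; the selection, crossover, and mutation choices below are assumptions for illustration.

import numpy as np
from sklearn.cluster import KMeans

def ga_kmeans(X, k, pop_size=20, generations=30, mut_rate=0.2, seed=0):
    """Toy GA over K-means seed sets; fitness = -inertia of K-means started from the seeds."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(X)

    def fitness(idx):
        return -KMeans(n_clusters=k, init=X[idx], n_init=1).fit(X).inertia_

    population = [rng.choice(n, k, replace=False) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]                # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.choice(len(parents), 2, replace=False)
            pool = np.union1d(parents[a], parents[b])        # crossover: pool parents' seeds
            child = rng.choice(pool, k, replace=False)
            if rng.random() < mut_rate:                      # mutation: swap in an unused point
                child[rng.integers(k)] = rng.choice(np.setdiff1d(np.arange(n), child))
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return KMeans(n_clusters=k, init=X[best], n_init=1).fit(X)

After the GA loop, the best seed set initializes a final K-means run, which could then be compared against plain K-means or SOM as in the study.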
Automated criminal link analysis based on domain knowledge
Link (association) analysis has been used in the criminal justice domain to search large datasets for associations between crime entities in order to facilitate crime investigations. However, link analysis still faces many challenging problems, such as information overload, high search complexity, and heavy reliance on domain knowledge. To address these challenges, this article proposes several techniques for automated, effective, and efficient link analysis. These techniques include the co-occurrence analysis, the shortest path algorithm, and a heuristic approach to identifying associations and determining their importance. We developed a prototype system called CrimeLink Explorer based on the proposed techniques. Results of a user study with 10 crime investigators from the Tucson Police Department showed that our system could help subjects conduct link analysis more efficiently than traditional single-level link analysis tools. Moreover, subjects believed that association paths found based on the heuristic approach were more accurate than those found based solely on the co-occurrence analysis and that the automated link analysis system would be of great help in crime investigations.
Food image recognition using deep convolutional network with pre-training and fine-tuning
In this paper, we examined the effectiveness of a deep convolutional neural network (DCNN) for the food photo recognition task. Food recognition is a kind of fine-grained visual recognition, which is a relatively harder problem than conventional image recognition. To tackle this problem, we sought the best combination of DCNN-related techniques, such as pre-training with the large-scale ImageNet data, fine-tuning, and activation features extracted from the pre-trained DCNN. From the experiments, we concluded that the fine-tuned DCNN pre-trained with 2000 ImageNet categories, including 1000 food-related categories, was the best method, achieving a top-1 accuracy of 78.77% for UEC-FOOD100 and 67.57% for UEC-FOOD256, both of which were the best results so far. In addition, we applied the food classifier employing the best combination of DCNN techniques to Twitter photo data. We achieved great improvements in food photo mining in terms of both the number of food photos and accuracy. In addition to its high classification accuracy, we found that the DCNN was very suitable for large-scale image data, since it takes only 0.03 seconds to classify one food photo with a GPU.
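As a rough illustration of the pre-train-then-fine-tune recipe (not the authors' exact 2000-category network, which is not available here), the sketch below fine-tunes an ImageNet-pretrained backbone for a 100-class food task; the backbone, class count, and learning rate are assumptions.

import torch
import torch.nn as nn
from torchvision import models

# Stand-in backbone: ImageNet-pretrained ResNet-50 instead of the paper's
# 2000-category pre-trained DCNN (illustrative assumption).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 100)   # e.g. the 100 UEC-FOOD100 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, labels):
    """One fine-tuning step on a batch of food photos ([B,3,224,224] images, [B] labels)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()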
Learning to rank audience for behavioral targeting in display ads
Behavioral targeting (BT), which aims to sell advertisers behaviorally related user segments for delivering their advertisements, is facing a bottleneck in serving the rapidly growing number of long-tail advertisers. Due to the small-business nature of tail advertisers, they generally expect to accurately reach a small group of audience, which is hard to satisfy with classical BT solutions based on large user segments. In this paper, we propose a novel probabilistic generative model named Rank Latent Dirichlet Allocation (RANKLDA) to rank audience according to their ad click probabilities so that long-tail advertisers can deliver their ads. Based on the basic assumption that users who clicked the same group of ads have a higher probability of sharing similar latent search topical interests, RANKLDA combines topic discovery from users' search behaviors with learning to rank users from their ad click behaviors. In computation, the topic learning can be enhanced by the supervised information from the rank learning and, simultaneously, the rank learning can be better optimized by considering the discovered topics as features. This co-optimization scheme enhances each component iteratively. Experiments over a real click-through log of display ads in a public ad network show that the proposed RANKLDA model can effectively rank the audience for tail advertisers.
NOMA: An Information Theoretic Perspective
In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated from an information-theoretic perspective. The relationships among the capacity region of broadcast channels and two rate regions achieved by NOMA and time-division multiple access (TDMA) are illustrated first. Then, the performance of NOMA is evaluated by considering TDMA as the benchmark, where both the sum rate and the individual user rates are used as the criteria. In a wireless downlink scenario with user pairing, the developed analytical results show that NOMA can outperform TDMA not only for the sum rate but also for each user's individual rate, particularly when the difference between the users' channels is large. I. INTRODUCTION. Because of its superior spectral efficiency, non-orthogonal multiple access (NOMA) has been recognized as a promising technique to be used in the fifth generation (5G) networks [1]-[4]. NOMA utilizes the power domain for achieving multiple access, i.e., different users are served at different power levels. Unlike conventional orthogonal MA, such as time-division multiple access (TDMA), NOMA faces strong co-channel interference between different users, and successive interference cancellation (SIC) is used by the NOMA users with better channel conditions for interference management. The concept of NOMA is essentially a special case of superposition coding developed for broadcast channels (BC). Cover first found the capacity region of a degraded discrete memoryless BC by using superposition coding [5]. Then, the capacity region of the Gaussian BC with single-antenna terminals was established in [6]. Moreover, the capacity region of the multiple-input multiple-output (MIMO) Gaussian BC was found in [7], by applying dirty paper coding (DPC) instead of superposition coding. This paper mainly focuses on the single-antenna scenario. Specifically, consider a Gaussian BC with a single-antenna transmitter and two single-antenna receivers, where each receiver is corrupted by additive Gaussian noise with unit variance. Denote the ordered channel gains from the transmitter to the two receivers by $h_w$ and $h_b$, i.e., $|h_w| < |h_b|$. For a given channel pair $(h_w, h_b)$, the capacity region is given by [6]
$$\mathcal{C} \triangleq \bigcup_{a_1 + a_2 = 1,\; a_1, a_2 \ge 0} \left\{ (R_1, R_2) : R_1, R_2 \ge 0,\; R_1 \le \log_2\!\left(1 + \frac{a_1 x}{1 + a_2 x}\right),\; R_2 \le \log_2\!\left(1 + a_2 y\right) \right\}$$
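For concreteness, the snippet below evaluates the quoted rate expressions for one example power split and compares them with a TDMA time-sharing benchmark. It assumes x and y denote the received SNRs of the weaker and stronger user (the abstract's own definition of x and y is cut off), so treat it as an illustrative sketch rather than the letter's analysis.

import numpy as np

# Minimal sketch (not from the paper): compare two-user NOMA and TDMA rates.
# Assumption: x and y are the received SNRs of the weak and strong user.
x, y = 1.0, 100.0          # e.g. |h_w|^2 P and |h_b|^2 P

# NOMA: power split a1 + a2 = 1; the weak user treats the a2-part as noise,
# the strong user applies SIC and sees only its own a2-part.
a1, a2 = 0.8, 0.2
R1_noma = np.log2(1 + a1 * x / (1 + a2 * x))
R2_noma = np.log2(1 + a2 * y)

# TDMA benchmark: time share tau for the weak user, 1 - tau for the strong user.
tau = 0.5
R1_tdma = tau * np.log2(1 + x)
R2_tdma = (1 - tau) * np.log2(1 + y)

print(f"NOMA: R1={R1_noma:.2f}, R2={R2_noma:.2f}, sum={R1_noma + R2_noma:.2f}")
print(f"TDMA: R1={R1_tdma:.2f}, R2={R2_tdma:.2f}, sum={R1_tdma + R2_tdma:.2f}")

With these example values, both users' NOMA rates exceed their TDMA counterparts, consistent with the letter's claim that NOMA helps most when the users' channel gains differ strongly.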
Black-out test vs. UV camera for corona inspection of HV motor stator endwindings
As part of a factory acceptance test program for high voltage motors, end users sometimes specify a black-out test. This is a traditional offline inspection where the stator is placed in complete darkness, each phase is energized to 115% of rated line-to-neutral voltage, and both ends of the stator are observed to determine the presence, location and severity of endwinding surface partial discharges (PD). The setup for the test may be complex and risky due to the need for observers to be standing in darkness close to the energized parts of the stator. The test results are qualitative and strongly depend on the observer's eyesight and individual perception. A safer and more accurate alternative is to use an ultraviolet (UV) camera or viewer. The observed PD activity may be observed in ambient lighting, recorded, and quantified through simultaneous offline PD measurements. This paper describes the two inspection techniques, and presents experimental validation of the UV corona camera inspection method as a suitable replacement for a black-out test. Sample 13.8-kV coils were wound in a fixture simulating their relationship in a stator winding, and subjected to high-potential tests while observed under black-out conditions and with a UV corona camera in ambient lighting. The stator windings from two high voltage compressor motors were inspected using the same camera. Recorded images of the observed discharges and measured PD activity in the sample coils and stator winding were used to compare the evaluation by each test method.
Endogenous kappa-opioid mediation of stress-induced potentiation of ethanol-conditioned place preference and self-administration
Exposure to inescapable stressors increases both the rewarding properties and self-administration of cocaine through the signaling of the kappa-opioid receptor (KOR), but the effect of this signaling on other reinforcing agents remains unclear. The objective of this study is to test the hypothesis that signaling of the KOR mediates the forced swim stress (FSS)-induced potentiation of ethanol reward and self-administration. Male C57Bl/6J mice were tested in a biased ethanol-conditioned place preference (CPP) procedure, and both C57Bl/6J and prodynorphin gene-disrupted (Dyn −/−) mice were used in two-bottle free choice (TBC) assays, with or without exposure to FSS. To determine the role of the KOR in the resulting behaviors, the KOR agonist U50,488 (10 mg/kg) and antagonist nor-binaltorphimine (nor-BNI, 10 mg/kg) were administered prior to parallel testing. C57Bl/6J mice exposed to repeated FSS 5 min prior to daily place conditioning with ethanol (0.8 g/kg) demonstrated a 4.4-fold potentiation of ethanol-CPP compared to unstressed mice that was prevented by nor-BNI pretreatment. Likewise, pretreatment with U50,488 90 min prior to daily ethanol place conditioning resulted in a 2.8-fold potentiation of ethanol-CPP. In the TBC assay, exposure to FSS significantly increased the consumption of 10% (v/v) ethanol by 19.3% in a nor-BNI-sensitive manner. Notably, Dyn −/− mice consumed a similar volume of ethanol as wild-type littermates and C57Bl/6J mice, but did not demonstrate significant stress-induced increases in consumption. These data demonstrated a stress-induced potentiation of the rewarding effects and self-administration of ethanol mediated by KOR signaling.
Automatic Adaptive Center of Pupil Detection Using Face Detection and CDF Analysis
This paper presents a novel adaptive algorithm to detect the center of the pupil in frontal-view faces. The algorithm first employs the Viola-Jones face detector to find the approximate location of the face in an image. Knowledge of the face structure is then exploited to detect the eye region. The histogram of the detected region is calculated, and its CDF is used to extract the eyelid and iris region in an adaptive way. The center of this region is taken as the pupil center. The experimental results show ninety-one percent accuracy in detecting the pupil center.
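A minimal sketch of the described pipeline using OpenCV follows, with two stated assumptions not fixed by the abstract: the eye band is taken as the upper portion of the detected face box, and the CDF threshold keeps the darkest 5% of pixels.

import cv2
import numpy as np

def pupil_center(gray):
    """Rough sketch: face detection -> eye region -> CDF threshold -> centroid.
    gray: uint8 grayscale image; returns (x, y) in image coordinates or None."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Assumption: the eye band lies roughly in the upper half of the face box.
    eye_region = gray[y + h // 4 : y + h // 2, x : x + w]

    # Adaptive threshold from the CDF of the region's histogram:
    # keep the darkest pixels (eyelids/iris), here the lowest 5% of intensities.
    hist = cv2.calcHist([eye_region], [0], None, [256], [0, 256]).ravel()
    cdf = np.cumsum(hist) / hist.sum()
    thresh = int(np.searchsorted(cdf, 0.05))
    mask = eye_region <= thresh

    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    # Centroid of the dark region, mapped back to image coordinates.
    return (x + int(xs.mean()), y + h // 4 + int(ys.mean()))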
Startup and Fault Tolerance of the SRM Drive with Three-Phase Bridge Inverter
When a three-phase SRM (switched reluctance motor) is driven by a three-phase bridge inverter, the phases of the SRM should be connected in the same way as in an induction motor, in other words, in delta or star. When the phase windings are connected in star, the star's center should be connected to the midpoint of the DC bus capacitors. To keep the capacitor voltages balanced during SRM operation, it is necessary to use a strategy that transfers energy from the capacitor with more charge to the one with less charge. The voltage balance of the DC bus capacitors is obtained by changing the direction of the phase currents during machine operation. This strategy does not prevent one of the DC bus capacitors from being totally discharged during startup. Short-circuiting one of the capacitors with a relay during startup solves the problem of capacitor discharge and allows the SRM to operate at very low speed, but it imposes a performance equivalent to that of an R-dump inverter, where the discharge resistance for the energy of the turned-off phase is the phase's own coil resistance. In this paper, an SRM startup technique is presented that works with a three-phase bridge inverter and machine windings connected in star, without a startup relay. The technique is based on allowing two phases to conduct at the same time during startup. A comparison of the proposed system with the split and asymmetric half-bridge inverters is also presented, analyzing fault tolerance in the inverter. Experimental results demonstrate the validity of the proposed technique.
Azelnidipine and amlodipine: a comparison of their pharmacokinetics and effects on ambulatory blood pressure.
Our objectives were: 1) to compare the effects of azelnidipine and amlodipine on 24-h blood pressure; and 2) to monitor the plasma concentration vs. time profile in order to assess the association between pharmacokinetics and hypotensive activity after administration of either drug for 6 weeks. Blood pressure and pulse rate were measured by 24-h monitoring with a portable automatic monitor in a randomized double-blind study of 46 patients with essential hypertension. Azelnidipine 16 mg (23 patients) or amlodipine 5 mg (23 patients) was administered once daily for 6 weeks. Pharmacokinetics were analyzed after the last dose was taken. Both drugs showed similar effects on the office blood pressure and pulse rate. During 24-h monitoring, both drugs caused a decrease in systolic blood pressure of 13 mmHg and had a similar hypotensive profile during the daytime period (07:00-21:30). The pulse rate decreased by 2 beats/min in the azelnidipine group, whereas it significantly increased by 4 beats/min in the amlodipine group. Similar trends in the blood pressure and pulse rate were observed during the nighttime (22:00-6:30) and over 24 h. Excessive blood pressure reduction during the nighttime was not seen in either group. The pharmacokinetic results indicated that the plasma half-life (t1/2) of amlodipine was 38.5 +/- 19.8 h and that of azelnidipine was 8.68 +/- 1.33 h. Despite this difference in pharmacokinetics, the hypotensive effects of amlodipine and azelnidipine were similar throughout the 24-h administration period.
THE (A)POLITICAL ECONOMY OF BITCOIN
The still raging financial crisis of 2007–2008 has enabled the emergence of several alternative practices concerning the production, circulation, and use of money. This essay explores the political economy of the Bitcoin ecosystem. Specifically, we examine the context in which this digital currency is emerging as well as its nature, dynamics, advantages, and disadvantages. We conclude that Bitcoin, a truly interesting experiment, exemplifies “distributed capitalism” and should be mostly seen as a technological innovation. Rather than providing pragmatic answers and solutions to the current views on the financial crisis, Bitcoin provides some useful and timely questions about the principles and bases of the dominant political economy.
GROMACS 3.0: a package for molecular simulation and trajectory analysis
GROMACS 3.0 is the latest release of a versatile and very well optimized package for molecular simulation. Much effort has been devoted to achieving extremely high performance on both workstations and parallel computers. The design includes an extraction of virial and periodic boundary conditions from the loops over pairwise interactions, and special software routines to enable rapid calculation of x^(-1/2). Inner loops are generated automatically in C or Fortran at compile time, with optimizations adapted to each architecture. Assembly loops using SSE and 3DNow! multimedia instructions are provided for x86 processors, resulting in exceptional performance on inexpensive PC workstations. The interface is simple and easy to use (no scripting language), based on standard command-line arguments with self-explanatory functionality and integrated documentation. All binary files are independent of hardware endianness and can be read by versions of GROMACS compiled using different floating-point precision. A large collection of flexible tools for trajectory analysis is included, with output in the form of finished Xmgr/Grace graphs. A basic trajectory viewer is included, and several external visualization tools can read the GROMACS trajectory format. Starting with version 3.0, GROMACS is available under the GNU General Public License.
Phase I trial of split-dose induction docetaxel, cisplatin, and 5-fluorouracil (TPF) chemotherapy followed by curative surgery combined with postoperative radiotherapy in patients with locally advanced oral and oropharyngeal squamous cell cancer (TISOC-1)
Induction chemotherapy (ICT) with docetaxel, cisplatin and fluorouracil (TPF) followed by radiotherapy is an effective treatment option for unresectable locally advanced head and neck cancer. This phase I study was designed to investigate the safety and tolerability of a split-dose TPF ICT regimen prior to surgery for locally advanced resectable oral and oropharyngeal cancer. Patients received TPF split on two dosages on day 1 and 8 per cycle for one or three 3-week cycles prior to surgery and postoperative radiotherapy or radiochemotherapy. Docetaxel was escalated in two dose levels, 40 mg/m2 (DL 0) and 30 mg/m2 (DL −1), plus 40 mg/m2 cisplatin and 2000 mg/m2 fluorouracil per week using a 3+3 dose escalation algorithm. Eighteen patients were enrolled and were eligible for toxicity and response. A maximum tolerated dose of 30 mg/m2 docetaxel per week was reached. The most common grade 3+ adverse event was neutropenia during ICT in 10 patients. Surgery reached R0 resection in all cases. Nine patients (50%) showed complete pathologic regression. A split-dose regime of TPF prior to surgery is feasible, tolerated and merits additional investigation in a phase II study with a dose of 30 mg/m2 docetaxel per week. NCT01108042 (ClinicalTrials.gov Identifier)
Multitask Learning with Deep Neural Networks for Community Question Answering
In this paper, we developed a deep neural network (DNN) that learns to solve simultaneously the three tasks of the cQA challenge proposed by the SemEval-2016 Task 3, i.e., question-comment similarity, question-question similarity and new question-comment similarity. The latter is the main task, which can exploit the previous two for achieving better results. Our DNN is trained jointly on all the three cQA tasks and learns to encode questions and comments into a single vector representation shared across the multiple tasks. The results on the official challenge test set show that our approach produces higher accuracy and faster convergence rates than the individual neural networks. Additionally, our method, which does not use any manual feature engineering, approaches the state of the art established with methods that make heavy use of it.
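The abstract's key idea, one shared encoder feeding several task-specific heads, can be sketched as follows; the bag-of-embeddings encoder, dimensions, and head layout are illustrative assumptions, not the paper's architecture details.

import torch
import torch.nn as nn

class MultiTaskCQA(nn.Module):
    """Sketch of the shared-encoder idea: one text encoder feeds three task heads
    (question-comment, question-question, new question-comment similarity)."""
    def __init__(self, vocab_size=30000, dim=128):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim)   # shared representation
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * dim, 1) for task in ("qc", "qq", "new_qc")
        })

    def forward(self, text_a, text_b, task):
        # Encode both texts into the shared vector space, then score the pair.
        a, b = self.embed(text_a), self.embed(text_b)
        return torch.sigmoid(self.heads[task](torch.cat([a, b], dim=-1)))

Training such a model jointly on all three tasks, alternating batches per task, is one way to realize the shared encoding the abstract describes.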
Constrained K-means Clustering with Background Knowledge
Clustering is traditionally viewed as an unsupervised method for data analysis. However, in some cases information about the problem domain is available in addition to the data instances themselves. In this paper, we demonstrate how the popular k-means clustering algorithm can be profitably modified to make use of this information. In experiments with artificial constraints on six data sets, we observe improvements in clustering accuracy. We also apply this method to the real-world problem of automatically detecting road lanes from GPS data and observe dramatic increases in performance.
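The abstract does not spell out the modification, but a standard way to inject pairwise background knowledge into k-means (in the spirit of COP-KMEANS) is to make the assignment step respect must-link and cannot-link pairs; the sketch below illustrates that idea under stated assumptions and is not necessarily the paper's exact algorithm.

import numpy as np

def constrained_kmeans(X, k, must_link, cannot_link, n_iter=100, seed=0):
    """Assign each point to the nearest centroid that violates no constraint.

    must_link / cannot_link are lists of (i, j) index pairs; supply each pair in
    both directions, e.g. (i, j) and (j, i). Points with no feasible cluster keep
    the label -1 (a real implementation would report failure instead)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    labels = np.full(len(X), -1)
    for _ in range(n_iter):
        new_labels = np.full(len(X), -1)
        for i, x in enumerate(X):
            for c in np.argsort(((centroids - x) ** 2).sum(axis=1)):
                ml_ok = all(new_labels[j] in (-1, c) for a, j in must_link if a == i)
                cl_ok = all(new_labels[j] != c for a, j in cannot_link if a == i)
                if ml_ok and cl_ok:
                    new_labels[i] = c
                    break
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        for c in range(k):                      # standard centroid update
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels, centroids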
Motivating Individuals and Groups at Work: A Social Identity Perspective on Leadership and Group Performance
We argue that additional understanding of work motivation can be gained by incorporating current insights concerning self-categorization and social identity processes and by examining the way in which these processes influence the motivation and behavior of individuals and groups at work. This theoretical perspective that focuses on the conditions determining different self-definitions allows us to show how individual and group processes interact to determine work motivation. To illustrate the added value of this approach, we develop some specific propositions concerning motivational processes underpinning leadership and group performance.
Decontamination of 16S rRNA gene amplicon sequence datasets based on bacterial load assessment by qPCR
Identification of unexpected taxa in 16S rRNA surveys of low-density microbiota, diluted mock communities and cultures demonstrated that a variable fraction of sequence reads originated from exogenous DNA. The sources of these contaminants are reagents used in DNA extraction, PCR, and next-generation sequencing library preparation, and human (skin, oral and respiratory) microbiota from the investigators. For in silico removal of reagent contaminants, a pipeline was used which combines the relative abundance of operational taxonomic units (OTUs) in V3–4 16S rRNA gene amplicon datasets with bacterial DNA quantification based on qPCR targeting of the V3 segment of the 16S rRNA gene. Serially diluted cultures of Escherichia coli and Staphylococcus aureus were used for 16S rDNA profiling, and DNA from each of these species was used as a qPCR standard. OTUs assigned to Escherichia or Staphylococcus were virtually unaffected by the decontamination procedure, whereas OTUs from Pseudomonas, which is a major reagent contaminant, were completely or nearly completely removed. The decontamination procedure also attenuated the trend of increase in OTU richness in serially diluted cultures. Removal of contaminant sequences derived from reagents based on use of qPCR data may improve taxonomic representation in samples with low DNA concentration. Using the described pipeline, OTUs derived from cross-contamination of negative extraction controls were not recognized as contaminants and not removed from the sample dataset.
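The exact removal criterion is not given in the abstract, so the snippet below is only a hypothetical sketch of the general principle it describes: combining relative OTU abundances with qPCR-based total load, under the assumption that reagent contaminants contribute a roughly constant number of molecules and therefore become relatively more abundant as the true bacterial load drops. The correlation threshold is arbitrary.

import numpy as np

def flag_reagent_contaminants(rel_abund, total_load, r_threshold=-0.7):
    """Flag OTUs whose log relative abundance falls as log total 16S load rises.

    rel_abund: (n_samples, n_otus) relative abundances; total_load: (n_samples,)
    16S rRNA gene copies from qPCR. Returns a boolean mask over OTUs."""
    log_load = np.log10(total_load)
    flags = np.zeros(rel_abund.shape[1], dtype=bool)
    for k in range(rel_abund.shape[1]):
        x = rel_abund[:, k]
        present = x > 0
        if present.sum() < 3:           # too few observations to judge
            continue
        r = np.corrcoef(np.log10(x[present]), log_load[present])[0, 1]
        flags[k] = r < r_threshold
    return flags

# Estimated absolute abundance (copies) per OTU, for downstream filtering:
# abs_abund = rel_abund * total_load[:, None]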
Autoencoders with Variable Sized Latent Vector for Image Compression
Learning to compress images is an interesting and challenging task. Autoencoders have long been used to compress images into a code of small but fixed size. As different images need differently sized codes based on their complexity, we propose an autoencoder architecture with a variable-sized latent vector. We propose an attention-based model which attends over the image and summarizes it into a small code. This summarization is repeated many times depending on the complexity of the image, producing a new code each time to encode new information so as to obtain a better reconstruction. These small codes then form sub-units of the final code. Our approach is quality-progressive and has a flexible quality setting, both of which are desirable properties in compression. We show that the proposed model performs better than JPEG.
SWordNet: Inferring semantically related words from software context
Code search is an integral part of software development and program comprehension. The difficulty of code search lies in the inability to guess the exact words used in the code. Therefore, it is crucial for keyword-based code search to expand queries with semantically related words, e.g., synonyms and abbreviations, to increase search effectiveness. However, relying on resources such as English dictionaries and WordNet to obtain semantically related words in software is of limited use, because many words that are semantically related in software are not semantically related in English. On the other hand, many words that are semantically related in English are not semantically related in software. This paper proposes a simple and general technique to automatically infer semantically related words (referred to as rPairs) in software by leveraging the context of words in comments and code. In addition, we propose a ranking algorithm for the rPair results and study cross-project rPairs on two sets of software with similar functionality, i.e., media browsers and operating systems. We achieve a reasonable accuracy in nine large and popular code bases written in C and Java. Our further evaluation against the state of the art shows that our technique can achieve higher precision and recall. In addition, the proposed ranking algorithm improves the rPair extraction accuracy by bringing correct rPairs to the top of the list. Our cross-project study successfully discovers overlapping rPairs among projects of similar functionality and finds that cross-project rPairs are more likely to be correct than project-specific rPairs. Since cross-project rPairs are highly likely to be general for software of the same type, the discovered overlapping rPairs can benefit other projects of the same type that have not been analyzed.
Current status of research on optimum sizing of stand-alone hybrid solar – wind power generation systems
Solar and wind energy systems are omnipresent, freely available, and environmentally friendly, and they are considered promising power generating sources due to their availability and topological advantages for local power generation. Hybrid solar–wind energy systems, which use two renewable energy sources, improve system efficiency and power reliability and reduce the energy storage requirements for stand-alone applications. Hybrid solar–wind systems are becoming popular in remote-area power generation applications due to advancements in renewable energy technologies and the substantial rise in prices of petroleum products. This paper reviews the current state of simulation, optimization and control technologies for stand-alone hybrid solar–wind energy systems with battery storage. It is found that continued research and development effort in this area is still needed to improve the systems' performance, establish techniques for accurately predicting their output, and reliably integrate them with other renewable or conventional power generation sources.
Charles Cotton: New Zealand's most influential geomorphologist
Charles Cotton was New Zealand's foremost advocate for geomorphology. His publications were recognised nationally and internationally, informing educational curricula and captivating the wider public. His approach to landform study was strongly influenced by The Geographical Cycle espoused by William Morris Davis of Harvard University. For the first half of the 20th century, The Cycle constituted the dominant paradigm of landform studies, but it was ultimately severely criticised and abandoned as unrealistic. While Cotton lost credence among some academics for his reluctance to abandon The Cycle, his elegantly illustrated written work made a lasting contribution to many branches of earth science.
Learning Arguments and Supertypes of Semantic Relations Using Recursive Patterns
A challenging problem in open information extraction and text mining is the learning of the selectional restrictions of semantic relations. We propose a minimally supervised bootstrapping algorithm that uses a single seed and a recursive lexico-syntactic pattern to learn the arguments and the supertypes of a diverse set of semantic relations from the Web. We evaluate the performance of our algorithm on multiple semantic relations expressed using "verb", "noun", and "verb prep" lexico-syntactic patterns. Human-based evaluation shows that the accuracy of the harvested information is about 90%. We also compare our results with an existing knowledge base to outline the similarities and differences in the granularity and diversity of the harvested knowledge.
High performance SOI RF switches for wireless applications
This paper describes 0.18um CMOS silicon-on-insulator (SOI) technology and design techniques for SOI RF switch designs for wireless applications. The measured results of SP4T (single pole four throw) and SP8T (single pole eight throw) switch reference designs are presented. It has been demonstrated that SOI RF switch performance, in terms of power handling, linearity, insertion loss and isolation, is very competitive with those utilizing GaAs pHEMT and silicon-on-sapphire (SOS) technologies, while maintaining a cost and manufacturing advantage.
Economic costs of firm-level information infrastructure failures: Estimates from field studies in manufacturing supply chains
Purpose – This paper presents a method for estimating the macro-economic cost of a firm-level information system disruption within a supply chain. Design/methodology/approach – The authors combine field study estimates with a Leontief-based input-output model to estimate the macro-economic costs of a targeted internet outage that disrupts the supply chain. Findings – The authors find that supply chain vulnerability or resiliency to cyber disruptions is not necessarily dependent on the types of technology employed, but rather how the technology is used to enable supply chain processes and the type of attack experienced. The authors find that some supply chains like oil and gas could be significantly impacted by certain cyber disruptions. However, similar to other causes of supply chain disruptions such as labor disputes or natural disasters, the authors find that firms can be very resilient to cyber disruptions. Research limitations/implications – The validity of the approach is limited by the accuracy of parameters gathered through field studies and the resolution of government economic data. Practical implications – Managers should examine how information technology is used to enable their supply chain processes and develop capabilities that provide resilience to failures. Lean supply chains that focus on minimizing inventory may be more vulnerable to major information system failures unless they take special steps to build resilience. Originality/value – This paper provides a new approach to estimating economic vulnerability due to supply chain information failures.
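As background for the Leontief-based approach mentioned above, the toy sketch below (made-up three-sector numbers, a deliberate simplification of how the authors combine field estimates with input-output data) shows how a demand shock propagates through the Leontief inverse to an economy-wide output loss.

import numpy as np

# Minimal Leontief input-output sketch: x = (I - A)^{-1} d gives the total output
# required to satisfy final demand d, so the macro cost of a disruption can be
# approximated by comparing x before and after a sector's demand is reduced.
A = np.array([[0.10, 0.20, 0.05],        # inter-industry technical coefficients
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
d_base = np.array([100.0, 80.0, 60.0])   # baseline final demand by sector
d_shock = d_base * np.array([1.0, 0.7, 1.0])   # e.g. a 30% drop in sector 2

L = np.linalg.inv(np.eye(3) - A)         # Leontief inverse
loss = L @ d_base - L @ d_shock          # output lost across all sectors
print("sector-level output loss:", loss, "total:", loss.sum())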
Multidialectal Spanish acoustic modeling for speech recognition
Over recent years, language resources for speech recognition have been collected for many languages and, specifically, for global languages. One of the characteristics of global languages is their wide geographical dispersion and, consequently, their wide phonetic, lexical, and semantic dialectal variability. Even when the collected data are huge, it is difficult to represent dialectal variants accurately. This paper deals with multidialectal acoustic modeling for Spanish. The goal is to create a set of multidialectal acoustic models that represents the sounds of the Spanish language as spoken in Latin America and Spain. A comparative study of different methods for combining data across dialects is presented. The developed approaches are based on decision-tree clustering algorithms. They differ in whether a multidialectal phone set is defined and in the decision-tree structure applied. In addition, a common overall phonetic transcription for all dialects is proposed. This transcription can be used in combination with all the proposed acoustic modeling approaches. The overall transcription combined with approaches based on defining a multidialectal phone set leads to a fully dialect-independent recognizer, capable of recognizing any dialect even with a total absence of training data from that dialect. Multidialectal systems are evaluated on data collected in five different countries: Spain, Colombia, Venezuela, Argentina, and Mexico. The best results given by multidialectal systems show a relative improvement of 13% over the results obtained with monodialectal systems. Experiments with dialect-independent systems have been conducted to recognize speech from Chile, a dialect not seen in the training process. The recognition results obtained for this dialect are similar to the ones obtained for other dialects.
Towards printable robotics: Origami-inspired planar fabrication of three-dimensional mechanisms
This work presents a technique which allows the application of 2-D fabrication methods to build 3-D robotic systems. The ability to print robots introduces a fast and low-cost fabrication method to modern, real-world robotic applications. To this end, we employ laser-engraved origami patterns to build a new class of robotic systems for mobility and manipulation. Origami is suitable for printable robotics as it uses only a flat sheet as the base structure for building complicated functional shapes, which can be utilized as robot bodies. An arbitrarily complex folding pattern can be used to yield an array of functionalities, in the form of actuated hinges or active spring elements. For actuation, we use compact NiTi coil actuators placed on the body to move parts of the structure on-demand. We demonstrate, as a proof-of-concept case study, the end-to-end fabrication and assembly of a simple mobile robot that can undergo worm-like peristaltic locomotion.
Video on-demand streaming on the Internet — A survey
Video-on-demand (VoD) streaming applications have evolved into one of the most popular types of applications on the Internet over the past decade. A great amount of research has been conducted to address the high bandwidth and stringent delay requirements of VoD applications on the video servers and the Internet. In this paper, we provide a broad survey of existing schemes and classify them into four categories. We also discuss the trade-offs and examine representative schemes in each category. Finally, we highlight future directions.
On an Application of Dynamic Programming to the Synthesis of Logical Systems
In this paper we wish to initiate the study of the application of dynamic programming to the domain of problems arising in the synthesis of logical systems. In a number of fields one encounters the problem of converting a system in one state into another state in a most efficient fashion—in mathematical economics, in the theory of control processes, in network theory, and in trajectory processes. Here we wish to consider a type of question which arises in the design of computers and switching circuits. We shall first treat the problem in general terms, and then consider a special example to illustrate the methods.
Silicon Photonic Circuits: On-CMOS Integration, Fiber Optical Coupling, and Packaging
Silicon photonics is a new technology that should enable electronics and optics to be integrated on the same optoelectronic circuit chip, leading to the production of low-cost devices on silicon wafers using standard processes from the microelectronics industry. In order to achieve truly low-cost devices, several challenges must be addressed concerning the technological process for integrating optics with electronics and the packaging of the chip. In this paper, we review recent progress in the packaging of silicon photonic circuits, from on-CMOS wafer-level integration to the single-chip package and input/output interconnects. We focus on optical fiber-coupling structures, comparing edge and surface couplers. We then detail optical alignment tolerances for both coupling architectures, discussing advantages and drawbacks from the packaging-process point of view. Finally, we describe some achievements involving advanced packaging techniques.
Artificial Neural Networks Approach to the Forecast of Stock Market Price Movements
In this work we present an Artificial Neural Network (ANN) approach to predict stock market indices. In particular, we focus our attention on their trend movement up or down. We provide results of experiments exploiting different neural network architectures, namely the Multi-Layer Perceptron (MLP), Convolutional Neural Networks (CNN), and the Long Short-Term Memory (LSTM) recurrent neural network technique. We show the importance of choosing the correct input features and of preprocessing them for the learning algorithm. Finally, we test our algorithm on the S&P500 and FOREX EUR/USD historical time series, predicting the trend on the basis of data from the past n days in the case of S&P500, or minutes in the FOREX framework. We provide a novel approach based on a combination of wavelets and CNN which outperforms basic neural network approaches. Keywords: Artificial neural networks, Multi-layer neural network, Convolutional neural network, Long short-term memory, Recurrent neural network, Deep learning, Stock markets, Time series analysis, Financial forecasting
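As a hedged illustration of the basic setup described above (windows of the past n observations used to predict the next up/down move), the sketch below builds sliding windows over a price series and trains a small feedforward classifier on the binary trend label. The synthetic prices, window length, network size, and use of scikit-learn are assumptions for illustration; they are not the authors' wavelet-CNN pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def make_windows(prices, n=20):
    """Turn a 1-D price series into (past-n-returns, next-move) pairs."""
    returns = np.diff(np.log(prices))
    X = np.array([returns[i:i + n] for i in range(len(returns) - n)])
    y = (returns[n:] > 0).astype(int)          # 1 = up, 0 = down
    return X, y

# Synthetic stand-in for an S&P500 or EUR/USD closing-price series.
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(0.0005 + 0.01 * rng.standard_normal(2000)))

X, y = make_windows(prices, n=20)
# Keep the chronological order: no shuffling, so the test set is "the future".
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("directional accuracy:", clf.score(X_te, y_te))
```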
Machine learning for neural decoding
Despite rapid advances in machine learning tools, the majority of neural decoding approaches still use traditional methods. Improving the performance of neural decoding algorithms allows us to better understand the information contained in a neural population, and can help advance engineering applications such as brain machine interfaces. Here, we apply modern machine learning techniques, including neural networks and gradient boosting, to decode from spiking activity in 1) motor cortex, 2) somatosensory cortex, and 3) hippocampus. We compare the predictive ability of these modern methods with traditional decoding methods such as Wiener and Kalman filters. Modern methods, in particular neural networks and ensembles, significantly outperformed the traditional approaches. For instance, for all of the three brain areas, an LSTM decoder explained over 40% of the unexplained variance from a Wiener filter. These results suggest that modern machine learning techniques should become the standard methodology for neural decoding. We provide a tutorial and code to facilitate wider implementation of these methods. Introduction: Neural decoding uses activity recorded from the brain to make predictions about variables in the outside world. For example, researchers predict movements based on activity in motor cortex [1, 2], predict decisions based on activity in prefrontal and parietal cortices [3, 4], and predict locations based on activity in the hippocampus [5, 6]. There are two primary purposes of decoding. First, it is an increasingly critical tool for understanding how neural signals relate to the outside world. It can be used to determine how much information the brain contains about an external variable (e.g., sensation or movement) [7-9], and how this information differs across brain areas [10-12], experimental conditions [13, 14], disease states [15], and more. Second, it is useful in engineering contexts, such as for brain machine interfaces (BMIs), where signals from motor cortex are used to control computer cursors [1], robotic arms [16], and muscles [2]. Decoding is a central tool for neural data analysis. When predicting a continuous variable, decoding is simply a regression problem and when predicting a discrete variable, decoding is simply a classification problem. Thus, there are many methods that can be used for neural decoding. However, despite the recent advances in machine learning techniques, it is still common to use traditional methods such as linear regression. Using modern machine learning tools for neural decoding would likely significantly boost performance, and might allow deeper insights into neural function. Here, we first give a brief tutorial so that readers can get started with using standard machine learning methods for decoding. We provide companion code so that readers can easily use a variety of decoding methods. Next, we compare the performance of many different machine learning methods to decode information from neural spiking activity. We predict movement velocities from macaque motor cortex and sensorimotor cortex, and locations in space from rat hippocampus. In all brain regions, modern methods, in particular neural networks and ensembles, led to the highest accuracy decoding, even for limited amounts of data. Tutorial for getting started with using machine learning for decoding: Code We have made Python code available at https://github.com/KordingLab/Neural_Decoding, which accompanies the tutorial below. 
This includes code that will correctly format the neural and output data for decoding, a tutorial for hyperparameter optimization, and examples of using many different decoders. We go into more detail on these topics below. General framework for decoding The decoding problem we are considering can be summarized as follows. We have N neurons whose spiking activity is recorded for a period of time, T (Fig. 1a). While we focus here on spiking neurons, the same methods could be used with other forms of neural data, such as the BOLD activity of N voxels, or the power in particular frequency bands of N LFP or EEG signals. We have also recorded outputs that we are trying to predict over that same time period (Fig. 1a). Here, we focus on output variables that are continuous (e.g., velocity, position), rather than discrete (e.g., choice). However, the general framework is very similar for discrete output variables. The first choice we need to make is to decide the temporal resolution, R, for decoding. That is, do we want to make a prediction every 50ms, 100ms, etc? We need to put the input and output into bins of length R (Fig. 1a). It is common (although not necessary) to use the same bin size for the neural data and output data, and we do so here. Thus, we will have approximately T/R total bins of neural activity and outputs. Within each bin, we compute the average activity of all neurons and the average value of the output. Next, we need to choose the time period of neural activity used to predict a given output. In the simplest case, the activity from all neurons in a given time bin would be used to predict the output in that same time bin. However, it is often the case that we want the neural data to precede the output (e.g., in the case of movements) or follow the decoder output (e.g., in the case of sensation). Plus, we often want to use neural data from more than one bin (e.g., using 500 ms of preceding neural data to predict a movement in the current 50 ms bin). In the following, we use the nomenclature that B time bins of neural activity are being used to predict a given output. For example, if we use one bin preceding the output, one concurrent bin, and one following bin, then B=3 (Fig. 1a). Note that when multiple bins of neural data are used to predict an output (B>1), then overlapping neural data will be used to predict different output times (Fig. 1a). When multiple bins of neural data are used to predict an output, then we will need to exclude some output bins. For instance, if we are using one bin of neural data preceding the output, then we cannot predict the first output bin, and if we are using one bin of neural data following the output, then we cannot predict the final output bin (Fig. 1a). Thus, we will be predicting K total output bins, where K is less than the total number of bins (T/R). To summarize, our decoders will be predicting each of these K outputs using B surrounding bins of activity from N neurons. Below, we describe how to format the neural data and output variables for use in different types of decoders. Non-recurrent decoders: For many “non-recurrent” decoders, we are just solving a standard machine learning regression problem. We have N x B features (the firing rates of each neuron in each relevant time bin) that are used to predict each output (Fig. 1b). If there is a single output that is being predicted, it can be put in a vector, Y, of length K. Note that for many decoders, if there are multiple outputs, each is independently decoded. 
If multiple outputs are being simultaneously predicted, which can occur with neural network decoders, the outputs can be put in a matrix Y, that has K rows and d columns, where d is the number of outputs being predicted. The input covariate matrix, X, has N x B columns (one for each feature) and K rows (corresponding to each output being predicted). This is now the format of a standard regression problem. Linear regression simply finds a linear combination of these features that predicts the output. More sophisticated forms of regression use nonlinear combinations of features for predictions. In general, this format is beneficial because there are many machine learning regression techniques that can easily be substituted for one another. We provide code for a Wiener filter (linear regression), a Wiener cascade (a linear-nonlinear model), support vector regression, XGBoost (gradient boosted trees), and feedforward neural networks (see Methods). We test the performance of these decoders in Results. Recurrent neural network decoders: When using recurrent neural networks (RNNs) for decoding, we need to put the inputs in a different format. Recurrent neural networks explicitly model temporal transitions across time (Fig. 1c). In the non-recurrent decoders, there were N x B features that were equivalently used for prediction, regardless of the time bin they came from. However, with a recurrent decoder, at each time bin, N features (the firing rates of all neurons in that time bin) are used for predicting the hidden state of the system at that time. Along with being a function of the N features, the hidden state at a time bin is also a function of the hidden state at the previous time bin (Fig. 1c). After transitioning through all B bins, the hidden state in this final bin is used to predict the output. This architecture allows the decoder to take advantage of temporal structure in the data, and allowing it (via its hidden state) to integrate the effect of neural inputs over an extended period of time. For use in this type of decoder, the input can be formatted as a 3-dimensional matrix of size K x N x B (Fig. 1c). That is, for each row (corresponding to the output that is predicted), there will be N features (2nd matrix dimension) over B bins (3rd matrix dimension) used for prediction. Within this format, different types of RNNs, including those more sophisticated than the standard RNN shown in Fig. 1c, can be easily switched for one another. We provide code for a standard recurrent network, a gated recurrent unit (GRU) network, and a long short-term memory (LSTM) network. In Results, we test the performance of these decoders. Decoders with additional information: While the focus of this tutorial is on decoders that fit into standard machine learning frameworks, we want to briefly mention two other commonly used decoders. The Kalman filter and its variants have frequently been used in the brain computer interface field fo
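To make the data formatting described above concrete, here is a minimal sketch, independent of the authors' companion repository, that bins simulated spikes, builds the K x (N*B) covariate matrix for a non-recurrent decoder, fits a Wiener filter (linear regression), and reshapes the same data into the K x N x B array a recurrent decoder would expect. The simulated data and the bin settings are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Illustrative stand-ins: N neurons recorded for T seconds, one continuous output.
N, T, R = 30, 200.0, 0.05                 # neurons, recording length (s), bin size (s)
n_bins = int(T / R)
spike_counts = rng.poisson(2.0, size=(n_bins, N))   # binned firing (rows = time bins)
output = np.convolve(rng.standard_normal(n_bins), np.ones(5) / 5, mode="same")

# Use B surrounding bins of activity (here: 5 before, the concurrent bin, 1 after).
bins_before, bins_after = 5, 1
B = bins_before + 1 + bins_after
K = n_bins - bins_before - bins_after     # number of predictable output bins

# Non-recurrent format: K rows, N*B feature columns.
X_flat = np.stack([spike_counts[k:k + B].ravel() for k in range(K)])
y = output[bins_before:bins_before + K]

wiener = LinearRegression().fit(X_flat, y)  # Wiener filter = linear regression on lagged bins
print("R^2 of the linear (Wiener) decoder:", wiener.score(X_flat, y))

# Recurrent format: K x N x B, as described in the text
# (some RNN libraries instead expect the time axis ordered as K x B x N).
X_rnn = np.stack([spike_counts[k:k + B].T for k in range(K)])
print("recurrent-decoder input shape:", X_rnn.shape)   # (K, N, B)
```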
Fault-tolerant control design with respect to actuator health degradation: An LMI approach
The active fault-tolerant control approach relies heavily on information about the faults that have occurred. In order to improve the safety of the reconfigurable system, a methodology for incorporating actuator health in the fault-tolerant control design is proposed for a tracking problem. Indeed, information about actuator health degradation due to the applied control is considered in addition to fault estimation. The main objective is to design a fault-tolerant control system which guarantees high overall system reliability and dependability both in nominal operation and in the presence of faults. Such an objective is achieved through a control performance index, which is proposed based on system reliability analysis. The fault-tolerant controller is synthesized using a linear matrix inequality approach.
On Computing Minimal Correction Subsets
A set of constraints that cannot be simultaneously satisfied is over-constrained. Minimal relaxations and minimal explanations for over-constrained problems find many practical uses. For Boolean formulas, minimal relaxations of over-constrained problems are referred to as Minimal Correction Subsets (MCSes). MCSes find many applications, including the enumeration of MUSes. Existing approaches for computing MCSes either use a Maximum Satisfiability (MaxSAT) solver or iterative calls to a Boolean Satisfiability (SAT) solver. This paper shows that existing algorithms for MCS computation can be inefficient, and so inadequate, in certain practical settings. To address this problem, this paper develops a number of novel techniques for improving the performance of existing MCS computation algorithms. More importantly, the paper proposes a novel algorithm for computing MCSes. Both the techniques and the algorithm are evaluated empirically on representative problem instances, and are shown to yield the most efficient and robust solutions for MCS computation.
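As a sketch of the kind of iterative SAT-solver approach the abstract cites as prior work (not the new algorithm proposed in the paper), the following implements the basic linear-search computation of one MCS using selector variables; the use of the PySAT library and the particular clause encoding are assumptions made for illustration.

```python
from pysat.solvers import Solver

def basic_mcs(hard, soft):
    """Naive linear-search MCS: grow a satisfiable subset S of the soft clauses
    one clause at a time; the soft clauses that cannot be added form an MCS.
    Assumes the hard clauses are satisfiable on their own."""
    top = max(abs(l) for cl in hard + soft for l in cl)
    with Solver(name='g3') as s:
        for cl in hard:
            s.add_clause(cl)
        selectors = []
        for i, cl in enumerate(soft):
            sel = top + 1 + i           # fresh selector variable for soft clause i
            selectors.append(sel)
            s.add_clause(cl + [-sel])   # assuming sel enforces the clause
        satisfied, mcs = [], []
        for i, sel in enumerate(selectors):
            # Try to satisfy soft clause i together with those already kept.
            if s.solve(assumptions=satisfied + [sel]):
                satisfied.append(sel)
            else:
                mcs.append(i)           # index of a soft clause in the correction subset
        return mcs

# Tiny example: the three soft clauses are jointly unsatisfiable,
# so at least one must be dropped.
hard = []
soft = [[1], [2], [-1, -2]]
print(basic_mcs(hard, soft))   # -> [2]: dropping the third clause restores satisfiability
```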
A Cognitive Process Theory of Writing
Online Resource Scheduling Under Concave Pricing for Cloud Computing
With the booming cloud computing industry, computational resources are readily and elastically available to the customers. In order to attract customers with various demands, most Infrastructure-as-a-service (IaaS) cloud service providers offer several pricing strategies such as pay as you go, pay less per unit when you use more (so called volume discount), and pay even less when you reserve. The diverse pricing schemes among different IaaS service providers or even in the same provider form a complex economic landscape that nurtures the market of cloud brokers. By strategically scheduling multiple customers' resource requests, a cloud broker can fully take advantage of the discounts offered by cloud service providers. In this paper, we focus on how a broker can help a group of customers to fully utilize the volume discount pricing strategy offered by cloud service providers through cost-efficient online resource scheduling. We present a randomized online stack-centric scheduling algorithm (ROSA) and theoretically prove the lower bound of its competitive ratio. Three special cases of the offline concave cost scheduling problem and the corresponding optimal algorithms are introduced. Our simulation shows that ROSA achieves a competitive ratio close to the theoretical lower bound under the special cases. Trace-driven simulation using Google cluster data demonstrates that ROSA is superior to the conventional online scheduling algorithms in terms of cost saving.
Complete Solution Classification for the Perspective-Three-Point Problem
In this paper, we use two approaches to solve the Perspective-Three-Point (P3P) problem: the algebraic approach and the geometric approach. In the algebraic approach, we use Wu-Ritt’s zero decomposition algorithm to give a complete triangular decomposition for the P3P equation system. This decomposition provides the first complete analytical solution to the P3P problem. We also give a complete solution classification for the P3P equation system, i.e., we give explicit criteria for the P3P problem to have one, two, three, and four solutions. Combining the analytical solutions with the criteria, we provide an algorithm, CASSC, which may be used to find complete and robust numerical solutions to the P3P problem. In the geometric approach, we give some pure geometric criteria for the number of real physical solutions.
Ranking opinionated blog posts using OpinionFinder
The aim of an opinion finding system is not just to retrieve relevant documents, but to also retrieve documents that express an opinion towards the query target entity. In this work, we propose a way to use and integrate an opinion-identification toolkit, OpinionFinder, into the retrieval process of an Information Retrieval (IR) system, such that opinionated, relevant documents are retrieved in response to a query. In our experiments, we vary the number of top-ranked documents that must be parsed in response to a query, and investigate the effect on opinion retrieval performance and required parsing time. We find that opinion finding retrieval performance is improved by integrating OpinionFinder into the retrieval system, and that retrieval performance grows as more posts are parsed by OpinionFinder. However, the benefit eventually tails off at a deep rank, suggesting that an optimal setting for the system has been achieved.
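The abstract does not give the exact integration formula, so the following is only a generic illustration, not the authors' method: re-rank the top-k retrieved posts by interpolating each post's relevance score with a per-document opinion score (for example, the fraction of subjective sentences an opinion-identification tool labels in it).

```python
def rerank_top_k(ranked_docs, opinion_score, k=1000, lam=0.7):
    """Re-rank the top-k documents by mixing relevance with opinionatedness.

    ranked_docs   : list of (doc_id, relevance_score), best first
    opinion_score : dict doc_id -> opinion score in [0, 1]
                    (e.g., fraction of subjective sentences found by an
                    opinion-identification tool; hypothetical input here)
    lam           : weight on relevance vs. opinion (illustrative value)
    """
    head, tail = ranked_docs[:k], ranked_docs[k:]
    rescored = [(doc, lam * rel + (1 - lam) * opinion_score.get(doc, 0.0))
                for doc, rel in head]
    rescored.sort(key=lambda pair: pair[1], reverse=True)
    return rescored + tail   # documents below rank k keep their original order
```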
Comparative experimental study on the COD removal in aqueous solution of pesticides by the electrocoagulation process using monopolar iron electrodes
… (1996), industrial wastewater (Lin et al., 1998), and landfill leachate (Lin and Chang, 2000). Electrocoagulation was also used to remove phenol (Awad and Abuzaid, 2000) and surfactants (Ciorba et al., 2000) from industrial wastewaters. Although electrochemical coagulation has been utilized for over a century, the available literature reveals few studies on the removal of pesticides by electrochemical coagulation, such as malathion (Pitulice et al., 2013), methyl parathion, atrazine and triazophos (Babu et al., 2011), malathion, imidacloprid and chlorpyrifos (Nasser and Nader, 2015a), and abamectin (Nasser and Nader, 2015b). The reactions at the iron electrodes produce metal hydroxides. Anode: 4Fe(s) → 4Fe2+(aq) + 8e− (1); Fe2+(aq) + 2OH−(aq) → Fe(OH)2(s) (2). Cathode: 2H2O(l) + 2e− → H2(g) + 2OH−(aq) (3). Fe2+ ions are oxidized to Fe3+ by dissolved oxygen, forming rust-colored ferric hydroxide Fe(OH)3(s) according to the reaction 4Fe2+(aq) + 10H2O(l) + O2(g) → 4Fe(OH)3(s) + 8H+ (4). The hydrogen gas thus generated contributes to the flotation of the flocs, thereby promoting both the elimination of suspended solids and the removal of dissolved organic compounds partly adsorbed on the flocs. The iron hydroxides Fe(OH)n(s), where n is 2 or 3 (equations 2 and 4), remain in the aqueous solution as a gelatinous suspension that can remove pollutants from the wastewater (Ibanez et al., 1998; Xinhua and Xiangfeng, 2004). Other pH-dependent hydrated forms of the Fe3+ ion have been suggested (Kobya et al., 2003): Fe(H2O)2^3+, Fe(H2O)5OH^2+, Fe(OH)^2+, Fe(OH)2^+, Fe2(OH)2^4+, Fe(OH)4^−, Fe(OH)6^3−, Fe(H2O)4(OH)2^+, Fe2(H2O)8(OH)2^4+, and Fe2(H2O)6(OH)4^2+. These complexes act as coagulants: they are adsorbed on the particles and so neutralize the colloidal charges. The aim of this work is a comparative experimental study of COD removal from aqueous solutions of the pesticides Chlorpyrifos-Ethyl 48 EC, Fenitrothion 3%, and Acetamiprid 20% SP by the electrocoagulation process using sacrificial iron anodes.
Reaching out: involving Users in Innovation Tasks through Social Media
Integrating social media into the innovation process can open up the potential for organizations to utilize the collective creativity of consumers from all over the world. The research in this paper sets out to identify how social media can facilitate innovation. Taking a Design Science Research approach, this research presents the Social Media Innovation Method for matching innovation tasks with social media characteristics. This supports the selection of the most suitable social media and can help organizations achieve their innovation goals. At the core of the method is the honeycomb model, which describes seven social media characteristics on three dimensions: audience, content, and time. The method has been evaluated using a scenario walkthrough applied in a real-life spatial planning project. This research concludes that there is no one-size-fits-all answer to the question of how social media can be of value for the innovation process. However, organizations that want to know how it can benefit their own innovation process can use the Social Media Innovation Method presented in this research as a way to provide an answer to that question, uniquely tailored to each innovation task for which social media is to be used.
Developing an IT Maturity Model for Sustainable Supply Chain Management Implementation
Many organisations are currently implementing Sustainable Supply Chain Management (SSCM) initiatives to address societal expectations and government regulations. Implementation of these initiatives has in turn created complexity due to the collection, management, control, and monitoring of a wide range of additional information exchanges among trading partners, which was not necessary in the past. Organisations would thus rely more on meaningful support from their IT function to help them implement and operate SSCM practices. Given the growing global recognition of the importance of sustainable supply chain (SSC) practices, existing corporate IT strategies and plans need to be revisited for IT to remain supportive of and aligned with the new sustainability aspirations of their organisations. Towards this goal, in this paper we report on the development of an IT maturity model specifically designed for the SSCM context. The model is built on four dimensions derived from the software process maturity and IS/IT planning literatures. Our proposed model defines four progressive IT maturity stages for the corporate IT function to support SSCM implementation initiatives. Some implications of the study findings and several challenges that may potentially hinder acceptance of the model by organisations are discussed.
Dielectrophoretic investigations of sub-micrometre latex spheres
A non-uniform AC electric field induces a motion in polarizable particles, called dielectrophoresis. The force responsible for this motion is governed by the dielectric properties both of the suspending medium and of the particles, as well as the geometry of the field. The dielectrophoretic properties of sub-micrometre latex spheres have been studied using micro-fabricated electrode structures. The electric field geometry for electrodes used in the measurements has been solved using numerical analysis. Measurements of the dielectrophoretic properties of the spheres have been made over a range of medium conductivities and applied field frequencies and strengths. Comparisons between the observed behaviour and that expected from theory are presented.
Trust as a Commodity
Trust is central to all transactions and yet economists rarely discuss the notion. It is treated rather as background environment, present whenever called upon, a sort of ever-ready lubricant that permits voluntary participation in production and exchange. In the standard model of a market economy it is taken for granted that consumers meet their budget constraints: they are not allowed to spend more than their wealth. Moreover, they always deliver the goods and services they said they would. But the model is silent on the rectitude of such agents. We are not told if they are persons of honour, conditioned by their upbringing always to meet the obligations they have chosen to undertake, or if there is a background agency which enforces contracts, credibly threatening to mete out punishment if obligations are not fulfilled, a punishment sufficiently stiff to deter consumers from ever failing to fulfil them. The same assumptions are made for producers. To be sure, the standard model can be extended to allow for bankruptcy in the face of an uncertain future. One must suppose that there is a special additional loss to becoming bankrupt: a loss of honour when honour matters, social and economic ostracism, a term in a debtors’ prison, and so forth. Otherwise, a person may take silly risks or, to make a more subtle point, take insufficient care in managing his affairs, but claim that he ran into genuine bad luck, that it was Mother Nature’s fault and not his own lack of ability or zeal.
The feasibility and acceptability of a diet and exercise trial in overweight and obese black breast cancer survivors: The Stepping STONE study.
PURPOSE Black breast cancer survivors have high rates of obesity and low physical activity levels. Little is known about the acceptability and feasibility of interventions in this population. OBJECTIVE A two-arm RCT was launched to assess the efficacy of a culturally targeted 12-week multimodal lifestyle intervention in overweight and obese black survivors. METHODS Intervention components included nutrition education, exercise groups, and survivor-led motivational interviewing phone sessions. The analytic sample included women who completed the trial (intervention n=10; control n=12). Anthropometric measures, physical activity, and VO2max were assessed at baseline and follow-up. Change scores (intervention vs. control) were assessed with Wilcoxon rank-sum tests. A process evaluation assessed intervention acceptability. RESULTS Overall adherence was 70% and overall satisfaction was high (86%). Despite the 5% weight loss target, the intervention group lost 0.8% but BMI improved. Total physical activity levels increased in the intervention vs. control arm (+3501METmin/week vs. +965METmin/week, respectively). VO2max improved in the intervention group (+0.10±1.03kg/L/min). Intervention participants reduced energy intake (-207.3±31.5kcals) and showed improvements in fat intake (-15.5±3.8g), fiber (+3.2±1.2g) and % energy from fat (-4.8±3.1%). Survivors suggested providing diet/exercise information within a cancer context. CONCLUSIONS Group and individualized intervention strategies are acceptable to black survivors. Observed differences between self-report and objective outcomes may suggest reporting bias or changes in body composition. Increasing supervised intervention components and assessment of body composition will be important for future trials.
165-GHz Transceiver in SiGe Technology
Two D-band transceivers, with and without amplifiers and static frequency divider, transmitting simultaneously in the 80-GHz and 160-GHz bands, are fabricated in SiGe HBT technology. The transceivers feature an 80-GHz quadrature Colpitts oscillator with differential outputs at 160 GHz, a double-balanced Gilbert-cell mixer, 170-GHz amplifiers and broadband 70-GHz to 180-GHz vertically stacked transformers for single-ended to differential conversion. For the transceiver with amplifiers and static frequency divider, which marks the highest level of integration above 100 GHz in silicon, the peak differential down-conversion gain is -3 dB for RF inputs at 165 GHz. The single-ended, 165-GHz transmitter output generates -3.5 dBm, while the 82.5-GHz differential output power is +2.5 dBm. This transceiver occupies 840 µm × 1365 µm, is biased from 3.3 V, and consumes 0.9 W. Two stand-alone 5-stage amplifiers, centered at 140 GHz and 170 GHz, were also fabricated showing 17 dB and 15 dB gain at 140 GHz and 170 GHz, respectively. The saturated output power of the amplifiers is +1 dBm at 130 GHz and 0 dBm at 165 GHz. All circuits were characterized over temperature up to 125 °C. These results demonstrate for the first time the feasibility of SiGe BiCMOS technology for circuits in the 100-180-GHz range.
Road marking detection using LIDAR reflective intensity data and its application to vehicle localization
A correct perception of road signalization is required for autonomous cars to follow the traffic code. Road markings are signalization present on road surfaces, commonly used to indicate the lane cars must keep. Cameras have been widely used for road marking detection; however, they are sensitive to environmental illumination. Some LIDAR sensors return infrared reflective intensity information, which is insensitive to illumination conditions. Existing road marking detectors that analyze reflective intensity data focus only on lane markings and ignore other types of signalization. We propose a road marking detector based on the Otsu thresholding method that makes it possible to segment LIDAR point clouds into asphalt and road markings. The results show the possibility of detecting any road marking (crosswalks, continuous lines, dashed lines). The road marking detector has also been integrated with a Monte Carlo localization method so that its performance could be validated. According to the results, adding road markings onto curb maps leads to a lateral localization error of 0.3119 m.
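A minimal sketch of the thresholding step is shown below; the histogram-based Otsu implementation and the synthetic intensities are illustrative assumptions rather than the paper's code. The idea is to pick the intensity threshold that maximizes the between-class variance separating dark asphalt returns from bright painted markings.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Histogram-based Otsu: choose the threshold that maximizes the
    between-class variance of the two resulting intensity classes."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w1 = np.cumsum(hist)                          # class weights below the threshold
    w2 = np.cumsum(hist[::-1])[::-1]              # class weights above the threshold
    m1 = np.cumsum(hist * centers) / np.maximum(w1, 1e-12)
    m2 = (np.cumsum((hist * centers)[::-1]) / np.maximum(np.cumsum(hist[::-1]), 1e-12))[::-1]
    between_var = w1[:-1] * w2[1:] * (m1[:-1] - m2[1:]) ** 2
    return centers[np.argmax(between_var)]

# Synthetic reflective intensities: dark asphalt plus a brighter painted-marking mode.
rng = np.random.default_rng(0)
intensities = np.concatenate([rng.normal(30, 8, 9000),    # asphalt returns
                              rng.normal(120, 15, 1000)]) # road-paint returns

t = otsu_threshold(intensities)
marking_mask = intensities > t        # points classified as road marking
print(f"threshold = {t:.1f}, marking points = {marking_mask.sum()}")
```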
Krishi Ville — Android based solution for Indian agriculture
Information and Communication Technology (ICT) in agriculture is an emerging field focusing on the enhancement of agricultural and rural development in India. It involves innovative applications using ICT in the rural domain. The advancement of ICT can be utilized to provide accurate and timely relevant information and services to farmers, thereby facilitating an environment for remunerative agriculture. This paper describes a mobile-based application for farmers which would help them in their farming activities. We propose an Android-based mobile application, Krishi Ville, which provides updates on different agricultural commodities, weather forecasts, and agricultural news. The application has been designed taking Indian farming into consideration.
The study on the epidemiology of psychological, alimentary health and nutrition (SEPAHAN): Overview of methodology
Threat and Defense : From Anxiety to Approach
The social psychological literature on threat and defense is fragmented. Groups of researchers have focused on distinct threats, such as mortality, uncertainty, uncontrollability, or meaninglessness, and have developed separate theoretical frameworks for explaining the observed reactions. In the current chapter, we attempt to integrate old and new research, proposing both a taxonomy of variation and a common motivational process underlying people’s reactions to threats. Following various kinds of threats, people often turn to abstract conceptions of reality—they invest more extremely in belief systems and worldviews, social identities, goals, and ideals. We suggest that there are common motivational processes that underlie the similar reactions to all of these diverse kinds of threats. We propose that (1) all of the threats present people with discrepancies that immediately activate basic neural processes related to anxiety. (2) Some categories of defenses are more proximal and symptom-focused, and result directly from anxious arousal and heightened attentional vigilance associated with anxious states. (3) Other kinds of defenses operate more distally and mute anxiety by activating approach-oriented states. (4) Depending on the salient dispositional and situational affordances, these distal, approach-oriented reactions vary in the extent to which they (a) resolve the original discrepancy or are merely palliative; (b) are concrete or abstract; (c) are personal or social. We present results from social neuroscience and standard social psychological experiments that converge on a general process model of threat and defense. Various “threats,” such as personal uncertainty, mortality salience, loss of control, perceptual surprises, and goal conflicts, cause people to heighten commitment to their goals, ideals, social relations, identifications, ideologies, and worldviews. Why do such seemingly unrelated threats lead to this similar set of diverse reactions? We and others have investigated phenomena such as the ones listed above for many years under different theories of threat and defense. In this chapter, we describe how our various research programs converge to provide an integrative general model of threat and defense processes. Although different approaches have offered different conceptual frameworks to understand threat and defense, a shared process model seems possible if we look at these phenomena from both social psychological and neural perspectives. Defensive reactions to threat follow a specific time course and can be mapped onto neural, experiential, and behavioral correlates. We propose that all threats involve the experience of a discrepancy. This discrepancy subsequently activates neural processes related to anxiety, driving a variety of proximal defenses related to attentional vigilance and avoidance motivation. Subsequent distal defenses then serve to activate neural processes related to approach motivation that downregulate the neural processes related to anxiety. We argue that depending on individual traits and salient associations and norms, people use an array of defensive strategies to activate these sanguine, approach-oriented states. In this chapter, we temporarily set aside the long-standing debate about the way different threats might affect different psychological needs (symbolic immortality, control, self-worth, certainty, self-integrity, meaning, etc.) and how different kinds of defenses might restore them.
Instead, we build on the simple hypothesis that discrepancies arouse anxiety and thereby motivate diverse phenomena that activate approach-related states that relieve the anxiety. 1. THEORIES EXPLAINING PEOPLE’S DEFENSIVE REACTIONS TO THREAT Social psychological research on threat and defense first proliferated with cognitive dissonance theory (CDT; Festinger, 1957), which focused on the aversive arousal arising from discrepant experiences that conflict with relevant cognitions (e.g., smoking despite knowledge of its dangers; engaging in counter-attitudinal behavior). Conflicting thoughts and actions are still considered the basis of dissonance arousal (Gawronski, 2012; Harmon-Jones & Mills, 1999). In the current threat and defense literature, cognitive dissonance themes persist across the various theoretical perspectives and form a central element in our integrative model. Specifically, we hold that any experience that is discrepant with prevailing cognitions or motivations arouses anxious vigilance and motivates efforts to reduce this arousal by means of reactive thoughts and behaviors. In the first part of this chapter, before explicating our general process model, we will provide perspective by reviewing some prominent theories that have tried to account for diverse defensive reactions to threats. 1.1. Theories focusing on need for certainty, self-esteem, and social identity A variety of social psychological theories evolved from CDT to focus on uncertainty-related threats. Like CDT, these certainty theories emphasize the need to supplant aversive, “nonfitting cognitions” with consonant ones, and focus on need for cognitive clarity and consistency. Lay epistemic theory (Kruglanski & Webster, 1996), self-verification theory (Swann & Read, 1981), and theories of uncertainty management (Van den Bos, Poortvliet, Maas, Miedema, & Van den Ham, 2005), compensatory conviction (McGregor, Zanna, Holmes, & Spencer, 2001), and uncertainty reduction (Hogg, 2007) emphasize that this need for self-relevant clarity and cognitive closure is bolstered by consensual social validation and identification. When faced with uncertainty about themselves or their environment, people defensively restore certainty, often in unrelated domains with the confidence-inducing help of social consensus and group identification (Hogg, 2007; Kruglanski, Pierro, Mannetti, & De Grada, 2006). For example, personal uncertainty threats increase in-group identification, in-group bias, defense of cultural worldviews, and exaggerated consensus estimates (Hogg, Sherman, Dierselhuis, Maitner, & Moffitt, 2007; McGregor, Nail, Marigold, & Kang, 2005; McGregor et al., 2001; Van den Bos, 2009). At around the same time as consistency theories were proliferating, another family of theories, rooted in neo-analytic ideas of ego-defense (Freud, 1967; Horney, 1945), gained popularity. These theories focus on self-worth and ego-needs. They emphasize self-esteem as the fundamental resource that people protect with compensatory defenses and include theories of egocentricity (Beauregard & Dunning, 1998; Dunning & Hayes, 1996; Tesser, 2000), self-evaluation maintenance (Sedikides, 1993; Tesser, 1988), and the totalitarian ego (Greenwald, 1980). Consensual social validation and identification was also often viewed as playing an important role in the maintenance of self-esteem through others, for example, basking in reflected glory (Cialdini et al., 1976), or being part of a winning team (Sherman & Kim, 2005). 
The close linkage (Baumgardner, 1990; Campbell, 1990) and substitutability of self-clarity and self-esteem was taken by self-affirmation theory (Steele, 1988) as evidence for a more general motive for self-integrity—a sense of the “moral and adaptive adequacy of the self.” If an experience undermines self-viability for whatever reason, then defensive compensatory efforts will be recruited in any available domain of clarity or worth, even relating to group memberships (Fein & Spencer, 1997), to restore a positive
Downregulation of δ opioid receptor by RNA interference enhances the sensitivity of BEL/FU drug-resistant human hepatocellular carcinoma cells to 5-FU
δ opioid receptor (DOR) was the first opioid receptor of the G protein-coupled receptor family to be cloned. Our previous studies demonstrated that DOR is involved in regulating the development and progression of human hepatocellular carcinoma (HCC), and is involved in the regulation of the processes of invasion and metastasis of HCC cells. However, whether DOR is involved in the development and progression of drug resistance in HCC has not been reported and requires further elucidation. The aim of the present study was to investigate the expression levels of DOR in the drug-resistant HCC BEL-7402/5-fluorouracil (BEL/FU) cell line, and its effects on drug resistance, in order to preliminarily elucidate the effects of DOR in HCC drug resistance. The results of the present study demonstrated that DOR was expressed at high levels in the BEL/FU cells, and the expression levels were higher compared with those in normal liver cells. When the expression of DOR was silenced, the proliferation of the drug-resistant HCC cells was unaffected. However, when the cells were co-treated with a therapeutic dose of 5-FU, the proliferation rate of the BEL/FU cells was significantly inhibited, a large number of cells underwent apoptosis, cell cycle progression was arrested and changes in the expression levels of drug-resistant proteins were observed. Overall, the expression of DOR was upregulated in the drug-resistant HCC cells, and its functional status was closely associated with drug resistance in HCC. Therefore, DOR may become a recognized target molecule with important roles in the clinical treatment of drug-resistant HCC.
Automatic Human Animation for Non-Humanoid 3D Characters
In this paper, we propose a system that automatically transfers human body motion captured from an ordinary video camera to an unknown 3D character mesh. In our system, no manual intervention is required for specifying the internal skeletal structure or defining how the mesh surfaces deform. A sparse graph is generated from the input polygons based on their connectivity and geometric distributions. To estimate articulated body parts in the video, a progressive particle filter is used to identify correspondences. We anticipate that our proposed system can bring animation to a new audience through a more intuitive user interface.
Improved seam carving for video retargeting
Video, like images, should support content aware resizing. We present video retargeting using an improved seam carving operator. Instead of removing 1D seams from 2D images we remove 2D seam manifolds from 3D space-time volumes. To achieve this we replace the dynamic programming method of seam carving with graph cuts that are suitable for 3D volumes. In the new formulation, a seam is given by a minimal cut in the graph and we show how to construct a graph such that the resulting cut is a valid seam. That is, the cut is monotonic and connected. In addition, we present a novel energy criterion that improves the visual quality of the retargeted images and videos. The original seam carving operator is focused on removing seams with the least amount of energy, ignoring energy that is introduced into the images and video by applying the operator. To counter this, the new criterion is looking forward in time - removing seams that introduce the least amount of energy into the retargeted result. We show how to encode the improved criterion into graph cuts (for images and video) as well as dynamic programming (for images). We apply our technique to images and videos and present results of various applications.
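For orientation, here is a compact sketch of the original dynamic-programming seam carving operator for still images that the paper improves on; the gradient-magnitude energy and other details below are illustrative assumptions. In the paper's formulation, the per-pixel energy term is replaced by a forward-looking cost (the energy introduced by removing the seam) and, for video, the DP step is replaced by a graph cut over the space-time volume.

```python
import numpy as np

def backward_energy(gray):
    # Simple gradient-magnitude energy of a grayscale image.
    gy, gx = np.gradient(gray.astype(float))
    return np.abs(gx) + np.abs(gy)

def find_vertical_seam(E):
    """Dynamic programming: M[i, j] = E[i, j] + min of the three neighbours above."""
    h, w = E.shape
    M = E.copy()
    back = np.zeros((h, w), dtype=int)
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            k = lo + int(np.argmin(M[i - 1, lo:hi]))
            back[i, j] = k
            M[i, j] += M[i - 1, k]
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(M[-1]))
    for i in range(h - 2, -1, -1):        # trace the minimal path back to the top row
        seam[i] = back[i + 1, seam[i + 1]]
    return seam

def remove_vertical_seam(gray, seam):
    h, w = gray.shape
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return gray[keep].reshape(h, w - 1)

# Shrink a synthetic grayscale image by 10 columns.
img = np.random.default_rng(0).random((60, 80))
for _ in range(10):
    img = remove_vertical_seam(img, find_vertical_seam(backward_energy(img)))
print(img.shape)   # (60, 70)
```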
Connective-Tissue Growth Factor (CTGF/CCN2) Induces Astrogenesis and Fibronectin Expression of Embryonic Neural Cells In Vitro
Connective-tissue growth factor (CTGF) is a modular secreted protein implicated in multiple cellular events such as chondrogenesis, skeletogenesis, angiogenesis and wound healing. CTGF contains four different structural modules. This modular organization is characteristic of members of the CCN family. The acronym was derived from the first three members discovered, cysteine-rich 61 (CYR61), CTGF and nephroblastoma overexpressed (NOV). CTGF is implicated as a mediator of important cell processes such as adhesion, migration, proliferation and differentiation. Extensive data have shown that CTGF interacts particularly with the TGFβ, WNT and MAPK signaling pathways. The capacity of CTGF to interact with different growth factors lends it an important role during early and late development, especially in the anterior region of the embryo. ctgf knockout mice have several cranio-facial defects, and the skeletal system is also greatly affected due to an impairment of the vascular-system development during chondrogenesis. This study, for the first time, indicated that CTGF is a potent inducer of gliogenesis during development. Our results showed that in vitro addition of recombinant CTGF protein to an embryonic mouse neural precursor cell culture increased the number of GFAP- and GFAP/Nestin-positive cells. Surprisingly, CTGF also increased the number of Sox2-positive cells. Moreover, this induction seemed not to involve cell proliferation. In addition, exogenous CTGF activated p44/42 but not p38 or JNK MAPK signaling, and increased the expression and deposition of the fibronectin extracellular matrix protein. Finally, CTGF was also able to induce GFAP as well as Nestin expression in a human malignant glioma stem cell line, suggesting a possible role in the differentiation process of gliomas. These results implicate ctgf as a key gene for astrogenesis during development, and suggest that its mechanism may involve activation of p44/42 MAPK signaling. Additionally, CTGF-induced differentiation of glioblastoma stem cells into a less-tumorigenic state could increase the chances of successful intervention, since differentiated cells are more vulnerable to cancer treatments.
Vertex reconstruction of neutrino interactions using deep learning
Deep learning offers new tools to improve our understanding of many important scientific problems. Neutrinos are the most abundant particles in existence and are hypothesized to explain the matter-antimatter asymmetry that dominates our universe. Definitive tests of this conjecture require a detailed understanding of neutrino interactions with a variety of nuclei. Many measurements of interest depend on vertex reconstruction — finding the origin of a neutrino interaction using data from the detector, which can be represented as images. Traditionally, this has been accomplished by utilizing methods that identify the tracks coming from the interaction. However, these methods are not ideal for interactions where an abundance of tracks and cascades occlude the vertex region. Manual algorithm engineering to handle these challenges is complicated and error prone. Deep learning extracts rich, semantic features directly from raw data, making it a promising solution to this problem. In this work, deep learning models are presented that classify the vertex location in regions meaningful to the domain scientists improving their ability to explore more complex interactions.
A 96-GHz Ortho-Mode Transducer for the Polatron
We describe the design, simulation, fabrication, and performance of a 96-GHz ortho-mode transducer (OMT) to be used for the Polatron—a bolometric receiver with polarization capability. The OMT has low loss, good isolation, and moderately broad bandwidth, and its performance closely resembles simulation results.
Skill Inference with Personal and Skill Connections
Personal skill information on social media is at the core of many interesting applications. In this paper, we propose a factor graph based approach to automatically infer skills from personal profiles, incorporating both personal and skill connections. We first extract personal connections between people with similar academic and business backgrounds (e.g., co-major, co-university, and co-corporation). We then extract skill connections between skills of the same person. To integrate these various kinds of connections, we propose a joint prediction factor graph (JPFG) model to collectively infer personal skills with the help of the personal connection factor and the skill connection factor, in addition to the normal textual attributes. Evaluation on a large-scale dataset from LinkedIn.com validates the effectiveness of our approach.
Low-Load Bench Press Training to Fatigue Results in Muscle Hypertrophy Similar to High-Load Bench Press Training
The purpose of this study was to determine whether the training responses observed with low-load resistance exercise to volitional fatigue translates into significant muscle hypertrophy, and compare that response to high-load resistance training. Nine previously untrained men (aged 25 [SD 3] years at the beginning of the study, standing height 1.73 [SD 0.07] m, body mass 68.9 [SD 8.1] kg) completed 6 weeks of high load-resistance training (HL-RT) (75% of one repetition maximal [1RM], 3-sets, 3x/wk) followed by 12 months of detraining. Following this, subjects completed 6 weeks of low load-resistance training (LL-RT) to volitional fatigue (30% 1 RM, 4 sets, 3x/wk). Increases (p < 0.05) in magnetic resonance imaging-measured triceps brachii and pectoralis major muscle cross-sectional areas were similar for both HL-RT (11.9% and 17.6%, respectively) and LL-RT (9.8% and 21.1%, respectively). In addition, both groups increased (p < 0.05) 1RM and maximal elbow extension strength following training; however, the percent increases in 1RM (8.6% vs. 21.0%) and elbow extension strength (6.5% vs. 13.9%) were significantly (p < 0.05) lower with LL-RT. Both protocols elicited similar increases in muscle cross-sectional area, however differences were observed in strength. An explanation of the smaller relative increases in strength may be due to the fact that detraining after HL-RT did not cause strength values to return to baseline levels thereby producing smaller changes in strength. In addition, the results may also suggest that the consistent practice of lifting a heavy load is necessary to maximize gains in muscular strength of the trained movement. These results demonstrate that significant muscle hypertrophy can occur without high-load resistance training and suggests that the focus on percentage of external load as the important deciding factor on muscle hypertrophy is too simplistic and inappropriate.
Plug-in martingales for testing exchangeability on-line
A standard assumption in machine learning is the exchangeability of data, which is equivalent to assuming that the examples are generated from the same probability distribution independently. This paper is devoted to testing the assumption of exchangeability on-line: the examples arrive one by one, and after receiving each example we would like to have a valid measure of the degree to which the assumption of exchangeability has been falsified. Such measures are provided by exchangeability martingales. We extend known techniques for constructing exchangeability martingales and show that our new method is competitive with the martingales introduced before. Finally we investigate the performance of our testing method on two benchmark datasets, USPS and Statlog Satellite data; for the former, the known techniques give satisfactory results, but for the latter our new more flexible method becomes necessary.
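For background, the sketch below shows one of the previously known constructions the paper builds on, the power martingale with a fixed betting parameter; the conformal p-values are assumed to be supplied by some transducer that is not shown here.

```python
import numpy as np

def power_martingale(p_values, epsilon=0.92):
    """Classic power martingale M_n = prod_i epsilon * p_i**(epsilon - 1).

    p_values : conformal p-values produced online by a transducer (assumed
               given here). Under exchangeability the running product stays
               small with high probability, so large values provide valid
               evidence that exchangeability has been violated.
    """
    p = np.clip(np.asarray(p_values, dtype=float), 1e-12, 1.0)
    return np.cumprod(epsilon * p ** (epsilon - 1.0))

# Example: uniform p-values (consistent with exchangeability) vs. p-values
# skewed toward zero (evidence of a distribution change).
rng = np.random.default_rng(0)
ok = power_martingale(rng.uniform(size=500))
drift = power_martingale(rng.uniform(size=500) ** 3)   # skewed toward 0
print(ok.max(), drift.max())
```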
Treating a Sex Addict Through Marital Sex Therapy
The treatment of sexual addiction can be viewed within the context of addiction and family therapy. A case of sexual addiction is presented and an intervention model is explicated. The application of the model emphasizes the importance of family participation in the treatment. This article describes the family treatment of a "sex addict." This term was popularized by Patrick Carnes in his highly-publicized book Out of the Shadows: Understanding Sexual Addiction (Carnes, 1983). Whether or not one can precisely define addictive sex, it is clear some people admit they cannot control their sexual behavior to the point it seriously interferes with their lives in terms of health, occupation, or family (Edwards, 1986). Others prefer to label these behavioral patterns as sexual compulsions (Quadland, 1985) or simply as problems of sexual control (Coleman, 1986). Some authorities are concerned the term is too value-laden, nonspecific, and assumes falsely that there is an accepted definition of normal sexual behavior (Coleman, 1986; Edwards, 1986). Carnes (1986) responds by saying people label themselves: "The fact remains that a significant number of people have identified themselves as sexual addicts: people whose sexual behavior has become 'unstoppable' despite serious consequences" (p. 4). Before describing a particular case, the author will make a few remarks about the interface of family therapy and the addiction field generally and about the etiology of sexual addiction specifically.
Interaction Networks for Learning about Objects, Relations and Physics
Reasoning about objects, relations, and physics is central to human intelligence, and a key goal of artificial intelligence. Here we introduce the interaction network, a model which can reason about how objects in complex systems interact, supporting dynamical predictions, as well as inferences about the abstract properties of the system. Our model takes graphs as input, performs object- and relation-centric reasoning in a way that is analogous to a simulation, and is implemented using deep neural networks. We evaluate its ability to reason about several challenging physical domains: n-body problems, rigid-body collision, and non-rigid dynamics. Our results show it can be trained to accurately simulate the physical trajectories of dozens of objects over thousands of time steps, estimate abstract quantities such as energy, and generalize automatically to systems with different numbers and configurations of objects and relations. Our interaction network implementation is the first general-purpose, learnable physics engine, and a powerful general framework for reasoning about objects and relations in a wide variety of complex real-world domains.
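A minimal sketch of the forward pass described in the abstract, assuming the general scheme of a relational model applied per relation followed by a per-object model over aggregated effects. The tiny randomly initialized MLPs phi_R and phi_O stand in for trained networks, and the dimensions are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    relu = lambda z: np.maximum(z, 0.0)

    def mlp(sizes):
        """Tiny randomly initialized MLP used as a stand-in for a trained phi_R / phi_O."""
        weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
        def forward(x):
            for i, w in enumerate(weights):
                x = x @ w
                if i < len(weights) - 1:
                    x = relu(x)
            return x
        return forward

    def interaction_network(objects, senders, receivers, relation_attrs, phi_R, phi_O):
        """objects: (N_obj, D_s) states; senders/receivers: (N_rel,) indices; relation_attrs: (N_rel, D_r)."""
        # Relational reasoning: one effect vector per directed relation.
        b = np.concatenate([objects[senders], objects[receivers], relation_attrs], axis=1)
        effects = phi_R(b)                                    # (N_rel, D_e)
        # Aggregate effects by summing over relations incident on each receiver.
        agg = np.zeros((objects.shape[0], effects.shape[1]))
        np.add.at(agg, receivers, effects)
        # Object-centric reasoning on [state; aggregated effect].
        return phi_O(np.concatenate([objects, agg], axis=1))  # (N_obj, D_p)

    # Toy example: 3 objects with fully connected directed relations.
    N, D_s, D_r, D_e, D_p = 3, 5, 1, 8, 2
    senders, receivers = zip(*[(i, j) for i in range(N) for j in range(N) if i != j])
    senders, receivers = np.array(senders), np.array(receivers)
    out = interaction_network(
        rng.normal(size=(N, D_s)), senders, receivers,
        rng.normal(size=(len(senders), D_r)),
        phi_R=mlp([2 * D_s + D_r, 16, D_e]),
        phi_O=mlp([D_s + D_e, 16, D_p]),
    )
    print(out.shape)  # (3, 2), e.g. a predicted next-step velocity per object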
Multisensor Data Fusion for Next Generation Distributed Intrusion Detection Systems
Next generation cyberspace intrusion detection systems will fuse data from heterogeneous distributed network sensors to create cyberspace situational awareness. This paper provides a few first steps toward developing the engineering requirements using the art and science of multisensor data fusion as the underlying model. Current generation internet-based intrusion detection systems and basic multisensor data fusion constructs are summarized. The TCP/IP model is used to develop framework sensor and database models. The SNMP ASN.1 MIB construct is recommended for the representation of context-dependent threat & vulnerabilities databases.
Microblogging after a major disaster in China: a case study of the 2010 Yushu earthquake
In this work, we conducted a case study of a popular Chinese microblogging site, Sina-Weibo, to investigate how Chinese netizens used microblogging in response to a major disaster: the 2010 Yushu Earthquake. We combined multiple analysis methods in this case study, including content analysis of microblog messages, trend analysis of different topics, and an analysis of the information spreading process. This study helped us understand the roles played by microblogging systems in response to major disasters and enabled us to gain insight into how to harness the power of microblogging to facilitate disaster response. In addition, this work supplements existing works with an exploration of a non-Western socio-cultural system: how Chinese Internet users used microblogging in disaster response.
A memetic algorithm based extreme learning machine for classification
Extreme Learning Machine (ELM) is an elegant technique for training Single-hidden Layer Feedforward Networks (SLFNs) with extremely fast speed that has attracted significant interest recently. One potential weakness of ELM is the random generation of the input weights and hidden biases, which may deteriorate the classification accuracy. In this paper, we propose a new Memetic Algorithm (MA) based Extreme Learning Machine (M-ELM) for classification problems. M-ELM uses a Memetic Algorithm, a combination of a population-based global optimization technique and an individual-based local heuristic search method, to find optimal network parameters for ELM. The optimized network parameters enhance the classification accuracy and generalization performance of ELM. Experiments and comparisons on 22 benchmark data sets demonstrate that M-ELM is able to provide highly competitive results compared with other state-of-the-art varieties of ELM algorithms.
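For context, a minimal sketch of the basic ELM being optimized: random input weights and biases, a closed-form least-squares solution for the output weights. In M-ELM the random draw of W and b would be replaced by the memetic search; that search is not shown here, and the toy data are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_elm(X, y, n_hidden=64):
        """Basic ELM: random input weights/biases, least-squares output weights.
        In M-ELM, W and b would come from the memetic search instead of a random draw."""
        n_classes = int(y.max()) + 1
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)             # hidden-layer activations
        T = np.eye(n_classes)[y]           # one-hot targets
        beta = np.linalg.pinv(H) @ T       # Moore-Penrose solution for output weights
        return W, b, beta

    def predict_elm(X, W, b, beta):
        return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

    # Toy two-class problem.
    X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4))])
    y = np.array([0] * 100 + [1] * 100)
    W, b, beta = train_elm(X, y)
    print((predict_elm(X, W, b, beta) == y).mean())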
Structuring Content in the Façade Interactive Drama Architecture
The process of building Façade, a first-person, real-time, one-act interactive drama, has involved three major research efforts: designing ways to deconstruct a dramatic narrative into a hierarchy of story and behavior pieces; engineering an AI system that responds to and integrates the player's moment-by-moment interactions to reconstruct a real-time dramatic performance from those pieces; and understanding how to write an engaging, compelling story within this new organizational framework. This paper provides an overview of the process of bringing our interactive drama to life as a coherent, engaging, high agency experience, including the design and programming of thousands of joint dialog behaviors in the reactive planning language ABL, and their higher level organization into a collection of story beats sequenced by a drama manager. The process of iteratively developing the architecture, its languages, authorial idioms, and varieties of story content structures is described. These content structures are designed to intermix to offer players a high degree of responsiveness and narrative agency. We conclude with design and implementation lessons learned and future directions for creating more generative architectures. Approaching Interactive Story Stories have rich global, temporal structures whose features can vary both in form and pleasure for audiences. Some stories feature tightly-plotted causal chains of events that may, for example, offer audiences the intrigue of an intricate, unfolding mystery, or the spectacle of an epic historical conflict. By contrast, some stories have sparse, even amorphous event structures, that can offer, for example, the quieter pleasure of following the subtle progression of emotion between two people. The histories of literature, theater, cinema and television demonstrate that many types of story structures can be pleasurable for audiences; the challenge for researchers and artists is determining how traditional story forms can be adapted for interactivity. Interactive experiences have several identifiable features of their own, such as immersion, agency, and transformation (Murray 1997), each offering particular pleasures for interactors, and varying compatibility with story. For many artists and researchers, agency is often considered the holy grail of interactive story pleasures, perhaps because it offers players the most substantial influence on the overall structure of the experience. Agency is also the most challenging to implement, exactly because it requires the system to dynamically assemble a story structure that incorporates the unpredictable actions of the player. This suggests that stories with looser, sparser event structures (plots) will be easier to implement in an interactive medium (require less generativity), and thus should be a good starting point for interactive story researchers and artists. Note that such stories can be as pleasurable as tightly-plotted ones, just in different ways. When designing an interactive story architecture and its content structures, the design choices made will influence the types of stories that can be built within it, and greatly affect the likelihood of ultimately creating pleasurable experiences for players.
With this in mind, the Façade architecture was designed with features intended for building experiences with high agency, and with enough narrative intelligence (NI) (Mateas and Sengers 2002) to construct character-rich, causally-sparse yet coherent plots. Further, we chose to implement several NI features particular to theatrical drama, a powerful story form historically shown to be compatible with sparse plots, compensated for by rich emotional expression from its characters. Additional features therefore important to support include characters with a strong sense of immediacy and presence, whose very aliveness results in the audience experiencing a sensation of danger or unpredictability, that anything is possible. This paper presents Façade's solution to the tension inherent between game and story, some organizing principles allowing us to move away from traditional branching narrative structures, and an overview of Façade's architecture, combined with how its content is structured and applied. We describe Façade's atomic unit of dramatic performance, the joint dialog behavior, the variety of its applications within the drama, their organization into story beats that afford sparse but coherent plots, and their integration with sets of forward-chaining natural language processing (NLP) rules offering players a high degree of emotional expression. We conclude with design and implementation lessons learned, and future directions for creating more generative architectures. All of this discussion is guided by our primary design goal: to create an architecture for, and working example of, high agency, emotionally expressive interactive drama. Resolving Game Versus Story Today's most pleasurable high agency interactive experiences are games, because the mechanics of game agency are well understood and reasonably straightforward to implement. Player moves such as running, jumping or shooting, playing a card, or moving a pawn directly cause scores, stats, levels or abstract game-piece configurations to change. (Simulations of physical environments and resource-bound systems have more complex state, but can still be represented numerically in understood ways.) However, to date, a high agency interactive story has yet to be built. Existing game design and technology approaches, that focus on the feedback loop between player interaction and relatively simple numeric state, seem inappropriate for modeling the player's effect on story structure, whose complex global constraints seem much richer than can be captured by a set of numeric counters or game pieces. Our solution to this long-time conundrum is to recast interactions within a story world in terms of abstract social games. At a high level, these games are organized around a numeric "score", such as the affinity between a character and the player. However, unlike traditional games in which there is a fairly direct connection between player interaction (e.g. pushing a button to fire a gun) and score state (e.g. a decrease in the health of a monster), in our social games several levels of abstraction may separate atomic player interactions from changes in social "score". Instead of jumping over obstacles or firing a gun, in Façade players fire off a variety of discourse acts, in natural language, such as praise, criticism, flirtation and provocation.
While these discourse acts will generate immediate reactions from the characters, it may take story-context-specific patterns of discourse acts to influence the social game score. Further, the score is not directly communicated to the player via numbers or sliders, but rather via enriched, theatrically dramatic performance. As a friend invited over for drinks at a make-or-break moment in the collapsing marriage of the protagonists Grace and Trip, the player unwittingly becomes an antagonist of sorts, forced by Grace and Trip into playing psychological "head games" with them (Berne 1964). During the first part of the story, Grace and Trip interpret all of the player's discourse acts in terms of a zero-sum affinity game that determines whose side Trip and Grace currently believe the player to be on. Simultaneously, the hot-button game is occurring, in which the player can trigger incendiary topics such as sex or divorce, progressing through tiers to gain more character and backstory information, and if pushed too far on a topic, affinity reversals. The second part of the story is organized around the therapy game, where the player is (purposefully or not) potentially increasing each character's degree of self-realization about their own problems, represented internally as a series of counters. Additionally, the system keeps track of the overall story tension level, which is affected by player moves in the various social games. Every change in each game's state is performed by Grace and Trip in emotionally expressive, dramatic ways. On the whole, because their attitudes, levels of self-awareness, and overall tension are regularly progressing, the experience takes on the form and aesthetic of a loosely-plotted domestic drama. (Figure 1: Grace and Trip in Façade, viewed from the player's first-person perspective.) Richness Through Coherent Intermixing Even with a design solution in hand for resolving the tension between game and story, an organizing principle is required to break away from the constraints of traditional branching narrative structures, to avoid the combinatorial explosion that occurs with complex causal event chains (Crawford 1989). Our approach to this in Façade is twofold: first, we divide the narrative into multiple fronts of progression, often causally independent, only occasionally interdependent. Second, we build a variety of narrative sequencers to sequence these multiple narrative progressions. These sequencers operate in parallel and can coherently intermix their performances with one another. Façade's architecture and content structure are two sides of the same coin, and will be described in tandem; along the way we will describe how the coherent intermixing is achieved. Architecture and Content Structure The Façade architecture consists of characters written in the reactive-planning language ABL, a drama manager that sequences dramatic beats, a forward-chaining rule system for understanding and interpreting natural language and gestural input from the player, and an animation engine that performs real-time non-photorealistic rendering, spoken dialog, music and sound, driven by and providing sensing data to the
L1000FWD: fireworks visualization of drug-induced transcriptomic signatures
Motivation As part of the NIH Library of Integrated Network-based Cellular Signatures program, hundreds of thousands of transcriptomic signatures were generated with the L1000 technology, profiling the response of human cell lines to over 20 000 small molecule compounds. This effort is a promising approach toward revealing the mechanisms-of-action (MOA) for marketed drugs and other less studied potential therapeutic compounds. Results L1000 fireworks display (L1000FWD) is a web application that provides interactive visualization of over 16 000 drug and small-molecule induced gene expression signatures. L1000FWD enables coloring of signatures by different attributes such as cell type, time point, concentration, as well as drug attributes such as MOA and clinical phase. Signature similarity search is implemented to enable the search for mimicking or opposing signatures given as input of up and down gene sets. Each point on the L1000FWD interactive map is linked to a signature landing page, which provides multifaceted knowledge from various sources about the signature and the drug. Notably such information includes most frequent diagnoses, co-prescribed drugs and age distribution of prescriptions as extracted from the Mount Sinai Health System electronic medical records. Overall, L1000FWD serves as a platform for identifying functions for novel small molecules using unsupervised clustering, as well as for exploring drug MOA. Availability and implementation L1000FWD is freely accessible at: http://amp.pharm.mssm.edu/L1000FWD. Supplementary information Supplementary data are available at Bioinformatics online.
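As an illustration of how a signature similarity search over up/down gene sets can work in principle, the sketch below computes a signed overlap score in which mimicking signatures score positive and opposing signatures score negative. This is a generic scheme, not necessarily the exact metric used by L1000FWD, and the gene symbols are toy placeholders.

    def signed_overlap_score(query_up, query_down, sig_up, sig_down):
        """Signed Jaccard-style score: mimicking signatures > 0, opposing signatures < 0."""
        up, down = set(query_up), set(query_down)
        s_up, s_down = set(sig_up), set(sig_down)
        same = len(up & s_up) + len(down & s_down)
        opposite = len(up & s_down) + len(down & s_up)
        denom = len(up | s_up) + len(down | s_down)
        return (same - opposite) / denom if denom else 0.0

    # Toy gene symbols (placeholders, not real L1000 signatures).
    query_up, query_down = ["TP53", "CDKN1A"], ["MYC", "CCND1"]
    signature = {"up": ["CDKN1A", "GADD45A"], "down": ["MYC", "E2F1"]}
    print(signed_overlap_score(query_up, query_down, signature["up"], signature["down"]))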
GRAVITY DRIVEN SHALLOW WATER MODELS FOR ARBITRARY TOPOGRAPHY
We derive new models for gravity driven shallow water flows in several space dimensions over a general topography. A first model is valid for small slope variation, i.e. small curvature, and a second model is valid for arbitrary topography. In both cases no particular assumption is made on the velocity profile in the material layer. The models are written for an arbitrary coordinate system, and several formulations are provided. A Coulomb friction term is derived within the same framework, relevant in particular for debris avalanches. All our models are invariant under rotation, admit a conservative energy equation, and preserve the steady state of a lake at rest.
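For orientation, the familiar one-dimensional Saint-Venant system over a bottom profile b(x) is the small-slope, Cartesian special case of the class of models discussed; the paper's arbitrary-topography model carries additional curvature terms not shown here.

\[
\partial_t h + \partial_x (hu) = 0,
\qquad
\partial_t (hu) + \partial_x\!\left(hu^{2} + \tfrac{1}{2} g h^{2}\right) = -\,g\,h\,\partial_x b,
\]

where h is the flow depth, u the depth-averaged velocity, and g gravity. The steady state of a lake at rest corresponds to u = 0 and h + b = const, which is the property the models are required to preserve; a Coulomb friction law adds a source term of magnitude μ g h opposing the flow direction wherever the material is moving.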
Duality for differential games and optimal control
A class of linear differential games is shown to be equivalent to some optimal-control problems; the body of available concepts and results for the latter, involving time-optimality, uniqueness of optimal controls, feedback controls, can thus be brought to bear on the former. In this paper, the emphasis is on the reduction rather than on the consequences. In particular, we can include rather general forms of exterior state constraints, even though the interesting results now known treat control systems constrained to coset spaces [1], [5]. The trend of this paper is the exact opposite of Isaacs' natural interpretation of differential games as extension of the optimal control problem. A further implication is that the games treated are valueless, in that problems of value (existence and its value) do not intrude themselves.
Non-audit fees, disclosure and audit quality
This paper investigates the effect of non-audit services on audit quality. Following the announcement of the requirement to disclose non-audit fees, approximately one-third of UK quoted companies disclosed before the requirement became effective. Whilst distressed companies were more likely to disclose early, auditor size, directors' shareholdings and non-audit fees were not significantly correlated with early disclosure. These results cast doubt on the view that voluntary disclosure of non-audit fees was used to signal audit quality. The evidence also indicates a positive, weakly significant relationship between disclosed non-audit fees and audit qualifications. This suggests that when non-audit fees are disclosed, the provision of non-audit services does not reduce audit quality.
Lambda-calculus models of programming languages.
Two aspects of programming languages, recursive definitions and type declarations, are analyzed in detail. Church's λ-calculus is used as a model of a programming language for purposes of the analysis. The main result on recursion is an analogue to Kleene's first recursion theorem: If A = FA for any λ-expressions A and F, then A is an extension of YF in the sense that if E[YF], any expression containing YF, has a normal form then E[YF] = E[A]. Y is Curry's paradoxical combinator. The result is shown to be invariant for many different versions of Y. A system of types and type declarations is developed for the λ-calculus and its semantic assumptions are identified. The system is shown to be adequate in the sense that it permits a preprocessor to check formulae prior to evaluation to prevent type errors. It is shown that any formula with a valid assignment of types to all its subexpressions must have a normal form. Thesis Supervisor: John M. Wozencraft Title: Professor of Electrical Engineering
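The fixed-point property behind the recursion result can be checked in two β-reduction steps using Curry's combinator:

\[
Y \;=\; \lambda f.\,(\lambda x.\, f\,(x\,x))\,(\lambda x.\, f\,(x\,x)),
\qquad
Y F \;\to_\beta\; (\lambda x.\, F\,(x\,x))\,(\lambda x.\, F\,(x\,x)) \;\to_\beta\; F\big((\lambda x.\, F\,(x\,x))\,(\lambda x.\, F\,(x\,x))\big) \;=_\beta\; F\,(Y F),
\]

so YF is a fixed point of F up to β-conversion, which is what makes it the natural object to compare with any A satisfying A = FA.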
The radial temperature profile of the solar wind
The Voyager data show a decrease in temperature in the inner heliosphere, an increase in temperature from 30–50 AU, a decrease from 50–63 AU, followed by another increase from 63–68 AU. Models of pickup proton heating predict a monotonic temperature rise beyond about 30 AU but do not account for the smaller scale (few AU) temperature variations. At 1 AU, the solar wind temperature is a strong function of the solar wind speed. We find that incorporating a temperature dependence on speed into the pickup proton heating results can reproduce much of the smaller-scale temperature variation observed out to 68 AU. The same speed-temperature dependence provides good fits to data from both the outer heliosphere and from near Earth. Since a large fraction of the proton energy results from heating, this work implies that the heating rate is a function of the speed.
A study of big data processing constraints on a low-power Hadoop cluster
Big Data processing with Hadoop has been emerging recently, both on the computing cloud and in enterprise deployments. However, wide-spread security exploits may hurt the reputation of public clouds. If Hadoop on the cloud is not an option, an organization has to build its own Hadoop clusters. But maintaining a data center is not worthwhile for a small organization, in terms of both building and operating costs. Another viable solution is to build a cluster with low-cost ARM system-on-chip boards. This paper presents a study of a Hadoop cluster for processing Big Data built atop 22 ARM boards. Hadoop's MapReduce was replaced by Spark, and experiments on three different hardware configurations were conducted to understand the limitations and constraints of the cluster. From the experimental results, it can be concluded that processing Big Data on an ARM cluster is highly feasible. The cluster could process a 34 GB Wikipedia article file in acceptable time, while generally consuming 0.061-0.322 kWh of power across all benchmarks. It was found that the I/O of the hardware is fast enough, but the CPU power is inadequate because it is largely spent on Hadoop's I/O.
Review of vision-based steel surface inspection systems
Steel is the material of choice for a large number of very diverse industrial applications. Surface quality, along with other properties, is among the most important quality parameters, particularly for flat-rolled steel products. Traditional manual surface inspection procedures are woefully inadequate to guarantee a defect-free surface. To meet the stringent requirements of customers, automated vision-based steel surface inspection techniques have been found to be very effective and popular during the last two decades. Considering its importance, this paper attempts to make the first formal review of the state of the art of vision-based defect detection and classification of steel surfaces as they are produced in steel mills. It is observed that the majority of research work has been undertaken on cold steel strip surfaces, which are the most sensitive to customers' requirements. Work on surface defect detection of hot strips and bars/rods has also shown signs of increase during the last 10 years. The review covers the overall aspects of automatic steel surface defect detection and classification systems using vision-based techniques. Attention has also been drawn to reported success rates along with issues related to real-time operational aspects.
Web-based statistical fact checking of textual documents
User-generated content has been growing tremendously in recent years. This content reflects the interests and the diversity of online users. In turn, the diversity among internet users is also reflected in the quality of the content being published online. This increases the need to develop means of gauging the support available for content posted online. In this work, we aim to make use of web content to calculate a statistical support score for textual documents. In the proposed algorithm, phrases representing key facts are extracted to construct the basic elements of the document. Web search is then used to validate the support available for these elements online, and an overall score is assigned to each document. Experimental results have shown a difference between the score distributions of factual news data and false-fact data. This indicates that the approach is a promising starting point for distinguishing articles based on their content.
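A minimal sketch of the pipeline shape described (phrase extraction → web search → aggregated support score). The regular-expression phrase extraction is a simplification of the paper's key-fact extraction, and search_hit_count is a hypothetical stand-in for whatever search backend is used; nothing here is the authors' implementation.

    import math
    import re

    def extract_key_phrases(text, max_phrases=10):
        """Crude key-fact extraction: capitalized multi-word spans (a simplification)."""
        phrases = re.findall(r"(?:[A-Z][\w-]+(?:\s+[A-Z][\w-]+)+)", text)
        return list(dict.fromkeys(phrases))[:max_phrases]

    def search_hit_count(phrase):
        """Hypothetical search backend; replace with a real web-search API call."""
        raise NotImplementedError

    def support_score(text, baseline_hits=1_000):
        """Average log-scaled hit count over extracted phrases; higher means better support."""
        phrases = extract_key_phrases(text)
        if not phrases:
            return 0.0
        scores = [math.log1p(search_hit_count(p)) / math.log1p(baseline_hits) for p in phrases]
        return sum(scores) / len(scores)

Comparing the distribution of such scores for known-factual versus known-false documents is then what allows a threshold or classifier to be fit on top.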
Perceptions and Practices regarding Breastfeeding among Postnatal Women at a District Tertiary Referral Government Hospital in Southern India
Background. Breastfeeding is the optimal method for achieving normal growth and development of the baby. This study aimed to assess mothers' perceptions and practices regarding breastfeeding in Mangalore, India. Methodology. A cross-sectional study of 188 mothers was conducted using a structured proforma. Results. The importance of breastfeeding was known to most mothers. While the majority of mothers initiated breastfeeding within one hour of birth, a few had discarded colostrum and adopted prelacteal feeding. Mothers opined that breastfeeding is healthy for their babies (96.3%) and easier than infant feeding (79.8%), does not affect the marital relationship (51%), and decreases family expenditure (61.1%). However, there were poor perceptions regarding the advantages of breast milk with respect to nutritive value, immune effect, and disease protection. A few respondents reported discontinuation of breastfeeding in a previous child if the baby had fever/cold (6%), diarrhea (18%), or vomiting (26%). There was a statistically significant association between the mother's educational level and the perceived importance of breastfeeding, and also between the mode of delivery and initiation of breastfeeding (p < 0.05). Conclusion. The importance of breastfeeding was known to most mothers. Misconceptions, myths, and disbeliefs related to breast milk and feeding should be rectified through health education.
Object Detection in Images: Run-Time Complexity and Parameter Selection of Support Vector Machines
In this paper we address two aspects related to the exploitation of Support Vector Machines (SVM) for classification in real application domains, such as the detection of objects in images. The first concerns the reduction of the run-time complexity of a reference classifier without increasing its generalization error. In fact, we show that the complexity in the test phase can be reduced by training SVM classifiers on a new set of features obtained by using Principal Component Analysis (PCA). Moreover, due to the small number of features involved, we explicitly map the new input space into the feature space induced by the adopted kernel function. Since the classifier is simply a hyperplane in the feature space, the classification of a new pattern involves only the computation of a dot product between the normal to the hyperplane and the pattern. The second issue concerns the problem of parameter selection. In particular, we show that Receiver Operating Characteristic (ROC) curves, measured on a suitable validation set, are effective for selecting, among the classifiers the machine implements, the one having performance similar to the reference classifier. We address these two issues for the particular application of detecting goals during a football match. 1. Object detection and classification. The problem of object detection in images is the problem of detecting three-dimensional objects in the scene by using the image projected by the object on the sensing plane of a standard camera. Face detection is one of the most interesting applications of object detection in images [10]. Another interesting application, which is getting particular attention from referee associations, the sports press and supporters [9, 7, 5], concerns the problem of detecting goals during a football match by using images acquired by a standard TV camera. For an appropriate position of the camera with respect to the football ground, the problem of goal detection can be reduced to the problem of detecting the ball in images of the goalmouth [2]. The problem of object detection can be seen as a learning-from-examples problem in which the examples are particular views of the object we are interested in detecting. In particular, object detection can be seen as a classification problem, because our ultimate goal is to determine a separating surface, optimal under certain conditions, which is able to separate object views from image patterns that are not instances of the object. So, in this perspective, the data to classify are image patches represented by vectors which, in general, live in spaces with a very high number of dimensions, for example equal to the number of pixels in the patch. Moreover, the detection of objects in images requires an exhaustive search of the current image, that is, all the patches of a given dimension have to be classified in the image. The size of the data to classify and the need to scan the image exhaustively when looking for the candidate object show that object detection in images is a time-consuming task, and so some strategy for reducing the complexity of the classifier has to be adopted to handle real contexts.
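A hedged sketch of the two ideas with scikit-learn: dimensionality reduction with PCA before training the SVM, and parameter selection driven by ROC performance on a held-out validation set. The paper's explicit mapping into the kernel-induced feature space is replaced here by a plain linear SVM on the PCA features, and the random patches are placeholders for real image data.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import LinearSVC
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Toy stand-in for flattened image patches: 1000 patches of 32x32 pixels.
    X = rng.normal(size=(1000, 32 * 32))
    y = rng.integers(0, 2, size=1000)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    # Reduce run-time cost: classify in a low-dimensional PCA space.
    pca = PCA(n_components=20).fit(X_train)
    Z_train, Z_val = pca.transform(X_train), pca.transform(X_val)

    # Parameter selection guided by ROC performance on the validation set.
    best_auc, best_clf = -np.inf, None
    for C in [0.01, 0.1, 1.0, 10.0]:
        clf = LinearSVC(C=C).fit(Z_train, y_train)
        auc = roc_auc_score(y_val, clf.decision_function(Z_val))
        if auc > best_auc:
            best_auc, best_clf = auc, clf

    # At test time a patch costs one projection plus one dot product with the hyperplane normal.
    w, b = best_clf.coef_.ravel(), best_clf.intercept_[0]
    decision = pca.transform(X_val[:5]) @ w + b
    print(best_auc, np.sign(decision))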
Helping Users Bootstrap Ontologies: An Empirical Investigation
An ontology is a machine-processable artifact that captures knowledge about some domain of interest. Ontologies are used in various domains including healthcare, science, and commerce. In this paper we examine the ontology bootstrapping problem. Specifically, we look at an approach that uses both competency questions and knowledge-source reuse via recommendations to address the "cold start problem", that is, the task of creating an ontology from scratch. We describe this approach and an implementation of it, and we present an evaluation in the form of a controlled user study. We find that the approach leads users to create significantly more detailed initial ontologies that have greater domain coverage than ontologies produced without this support. Furthermore, in spite of a more involved workflow, the usability and user satisfaction of the bootstrapping approach are as good as those of a state-of-the-art ontology editor with no additional support.
Training a text classifier with a single word using Twitter Lists and domain adaptation
Annotating data is a common bottleneck in building text classifiers. This is particularly problematic in social media domains, where data drift requires frequent retraining to maintain high accuracy. In this paper, we propose and evaluate a text classification method for Twitter data whose only required human input is a single keyword per class. The algorithm proceeds by identifying exemplar Twitter accounts that are representative of each class by analyzing Twitter Lists (human-curated collections of related Twitter accounts). A classifier is then fit to the exemplar accounts and used to predict labels of new tweets and users. We develop domain adaptation methods to address the noise and selection bias inherent to this approach, which we find to be critical to classification accuracy. Across a diverse set of tasks (topic, gender, and political affiliation classification), we find that the resulting classifier is competitive with a fully supervised baseline, achieving superior accuracy on four of six datasets despite using no manually labeled data.
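A minimal sketch of the exemplar-account idea with a bag-of-words classifier: tweets from exemplar accounts serve as the only labeled data, and the fitted model is then applied to new tweets. The List-based discovery of exemplar accounts and the paper's domain-adaptation (noise and selection-bias correction) step are outside this sketch, and the tweets below are invented placeholders.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tweets collected from exemplar accounts found via Twitter Lists whose names
    # contain the per-class keyword (placeholder data; the full pipeline would also
    # re-weight instances to correct the exemplar/general-user distribution shift).
    exemplar_tweets = [
        "new poll numbers ahead of the primary debate tonight",
        "the senate vote on the budget bill is scheduled for friday",
        "match report: late winner seals the derby for the home side",
        "injury update before this weekend's fixture list",
    ]
    labels = ["politics", "politics", "sports", "sports"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
    clf.fit(exemplar_tweets, labels)

    # Predict labels for new, unlabeled tweets from ordinary users.
    print(clf.predict(["who scored in the cup final last night?"]))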
Tofacitinib or adalimumab versus placebo in rheumatoid arthritis.
BACKGROUND Tofacitinib (CP-690,550) is a novel oral Janus kinase inhibitor that is being investigated for the treatment of rheumatoid arthritis. METHODS In this 12-month, phase 3 trial, 717 patients who were receiving stable doses of methotrexate were randomly assigned to 5 mg of tofacitinib twice daily, 10 mg of tofacitinib twice daily, 40 mg of adalimumab once every 2 weeks, or placebo. At month 3, patients in the placebo group who did not have a 20% reduction from baseline in the number of swollen and tender joints were switched in a blinded fashion to either 5 mg or 10 mg of tofacitinib twice daily; at month 6, all patients still receiving placebo were switched to tofacitinib in a blinded fashion. The three primary outcome measures were a 20% improvement at month 6 in the American College of Rheumatology scale (ACR 20); the change from baseline to month 3 in the score on the Health Assessment Questionnaire-Disability Index (HAQ-DI) (which ranges from 0 to 3, with higher scores indicating greater disability); and the percentage of patients at month 6 who had a Disease Activity Score for 28-joint counts based on the erythrocyte sedimentation rate (DAS28-4[ESR]) of less than 2.6 (with scores ranging from 0 to 9.4 and higher scores indicating greater disease activity). RESULTS At month 6, ACR 20 response rates were higher among patients receiving 5 mg or 10 mg of tofacitinib (51.5% and 52.6%, respectively) and among those receiving adalimumab (47.2%) than among those receiving placebo (28.3%) (P<0.001 for all comparisons). There were also greater reductions in the HAQ-DI score at month 3 and higher percentages of patients with a DAS28-4(ESR) below 2.6 at month 6 in the active-treatment groups than in the placebo group. Adverse events occurred more frequently with tofacitinib than with placebo, and pulmonary tuberculosis developed in two patients in the 10-mg tofacitinib group. Tofacitinib was associated with an increase in both low-density and high-density lipoprotein cholesterol levels and with reductions in neutrophil counts. CONCLUSIONS In patients with rheumatoid arthritis receiving background methotrexate, tofacitinib was significantly superior to placebo and was numerically similar to adalimumab in efficacy. (Funded by Pfizer; ORAL Standard ClinicalTrials.gov number, NCT00853385.).
The reliability and validity of the Face, Legs, Activity, Cry, Consolability observational tool as a measure of pain in children with cognitive impairment.
UNLABELLED Pain assessment remains difficult in children with cognitive impairment (CI). In this study, we evaluated the validity and reliability of the Face, Legs, Activity, Cry, Consolability (FLACC) tool for assessing pain in children with CI. Each child's developmental level and ability to self-report pain were evaluated. The child's nurse observed and scored pain with the FLACC tool before and after analgesic administration. Simultaneously, parents scored pain with a visual analog scale, and scores were obtained from children who were able to self-report pain. Observations were videotaped and later viewed by nurses blinded to analgesics and pain scores. One-hundred-forty observations were recorded from 79 children. FLACC scores correlated with parent scores (P < 0.001) and decreased after analgesics (P = 0.001), suggesting good validity. Correlations of total scores (r = 0.5-0.8; P < 0.001) and of each category (r = 0.3-0.8; P < 0.001), as well as measures of exact agreement (kappa = 0.2-0.65), suggest good reliability. Test-retest reliability was supported by excellent correlations (r = 0.8-0.883; P < 0.001) and categorical agreement (r = 0.617-0.935; kappa = 0.400-0.881; P < 0.001). These data suggest that the FLACC tool may be useful as an objective measure of postoperative pain in children with CI. IMPLICATIONS The FLACC pain assessment tool may facilitate reliable and valid observational pain assessment in children with cognitive impairment who cannot self-report their pain. Objective pain assessment is important to facilitate effective postoperative pain management in these vulnerable children.