title | abstract |
---|---|
Can color changes alter the neural correlates of recognition memory? Manipulation of processing affects an electrophysiological indicator of conceptual implicit memory. | It has been widely shown that recognition memory includes two distinct retrieval processes: familiarity and recollection. Many studies have shown that recognition memory can be facilitated when there is a perceptual match between the studied and the tested items. Most event-related potential studies have explored the perceptual match effect on familiarity on the basis of the hypothesis that the specific event-related potential component associated with familiarity is the FN400 (300-500 ms mid-frontal effect). However, it is currently unclear whether the FN400 indexes familiarity or conceptual implicit memory. In addition, on the basis of the findings of a previous study, the so-called perceptual manipulations in previous studies may also involve some conceptual alterations. Therefore, we sought to determine the influence of perceptual manipulation by color changes on recognition memory when the perceptual or the conceptual processes were emphasized. Specifically, different instructions (perceptually or conceptually oriented) were provided to the participants. The results showed that color changes may significantly affect overall recognition memory behaviorally and that congruent items were recognized with a higher accuracy rate than incongruent items in both tasks, but no corresponding neural changes were found. Despite the evident familiarity shown in the two tasks (the behavioral performance of recognition memory was much higher than at the chance level), the FN400 effect was found in conceptually oriented tasks, but not perceptually oriented tasks. It is thus highly interesting that the FN400 effect was not induced, although color manipulation of recognition memory was behaviorally shown, as seen in previous studies. 
Our findings of the FN400 effect for the conceptual but not perceptual condition support the explanation that the FN400 effect indexes conceptual implicit memory. |
A Component-Based Simplex Architecture for High-Assurance Cyber-Physical Systems | We present Component-Based Simplex Architecture (CBSA), a new framework for assuring the runtime safety of component-based cyber-physical systems (CPSs). CBSA integrates Assume-Guarantee (A-G) reasoning with the core principles of the Simplex control architecture to allow component-based CPSs to run advanced, uncertified controllers while still providing runtime assurance that A-G contracts and global properties are satisfied. In CBSA, multiple Simplex instances, which can be composed in a nested, serial or parallel manner, coordinate to assure system-wide properties. Combining A-G reasoning and the Simplex architecture is a challenging problem that yields significant benefits. By utilizing A-G contracts, we are able to compositionally determine the switching logic for CBSAs, thereby alleviating the state explosion encountered by other approaches. Another benefit is that we can use A-G proof rules to decompose the proof of system-wide safety assurance into sub-proofs corresponding to the component-based structure of the system architecture. We also introduce the notion of coordinated switching between Simplex instances, a key component of our compositional approach to reasoning about CBSA switching logic. We illustrate our framework with a component-based control system for a ground rover. We formally prove that the CBSA for this system guarantees energy safety (the rover never runs out of power), and collision freedom (the rover never collides with a stationary obstacle). We also consider a CBSA for the rover that guarantees mission completion: all target destinations visited within a prescribed amount of time. |
How to Achieve Operational Business-IT Alignment: Insights from a Global Aerospace Firm | A common challenge facing firms is how to effectively embed strategic business-IT alignment into daily routines at the operational level. Based on our findings from following an alignment project in a global aerospace industry leader for almost two years, we put forward a framework, which we call OperA, for establishing operational-level business-IT alignment. This framework has three dimensions (knowledge, communication, and trust) and identifies alignment paths for three strategic situations faced by firms: major planned changes, regular operations, and major unplanned changes. Each path is anchored in a different dimension of the framework. The global aerospace case shows how the different mechanisms used for each path improve business processes and enable successful change. The case also revealed frequent pitfalls and dependencies between the dimensions and associated mechanisms that top managers should be aware of as they strive to achieve operational business-IT alignment. |
Word segmentation in monolingual and bilingual infant learners of English and French | Word segmentation skills emerge during infancy, but it is unclear to what extent this ability is shaped by experience listening to a specific language or language type. This issue was explored by comparing segmentation of bi-syllabic words in monolingual and bilingual 7.5-month-old learners of French and English. In a native-language condition, monolingual infants segmented bi-syllabic words with the predominant stress pattern of their native language. Monolingual French infants also segmented in a different dialect of French, whereas both monolingual groups failed in a cross-language test, i.e. English infants failed to segment in French and vice versa. These findings support the hypothesis that word segmentation is shaped by infant sensitivity to the rhythmic structure of their native language. Our finding that bilingual infants segment bi-syllabic words in two native languages at the same age as their monolingual peers shows that dual language exposure does not delay the emergence of this skill. |
Postoperative neutrophil-to-lymphocyte ratio plus platelet-to-lymphocyte ratio predicts the outcomes of hepatocellular carcinoma. | BACKGROUND
There is limited information regarding NLR-PLR (the combination of the neutrophil-to-lymphocyte ratio [NLR] and platelet-to-lymphocyte ratio [PLR]) in hepatocellular carcinoma (HCC). This study aimed to assess the predictive ability of NLR-PLR in patients with resectable hepatitis B virus-related HCC within Milan criteria after hepatectomy.
METHODS
Two hundred thirty-six consecutive HCC patients were included in the study. The postoperative NLR-PLR was calculated based on the data obtained on the first postoperative month after liver resection as follows: patients with both an elevated PLR and an elevated NLR, which were detected by receiver operating characteristic curve analysis, were allocated a score of 2, and patients showing one or neither of these elevations were allocated a score of 1 or 0, respectively.
RESULTS
During the follow-up period, 113 patients experienced recurrence and 41 patients died. Multivariate analyses suggested that tumor-node-metastasis stage, preoperative alpha-fetoprotein, and postoperative NLR-PLR were independently associated with recurrence, whereas microvascular invasion and postoperative NLR-PLR adversely impacted overall survival. The 5-y recurrence-free and overall survival rates of the patients with a postoperative NLR-PLR of 0, 1, or 2 were 43.6%, 35.6%, and 8.3% (P < 0.001) and 82.1%, 73.0%, and 10.5% (P < 0.001), respectively.
CONCLUSIONS
The postoperative NLR-PLR predicted outcomes of hepatitis B virus-related HCC patients within Milan criteria after liver resection. |
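The composite scoring rule described in the abstract above (one point for each elevated ratio) is simple enough to sketch directly. The cutoff values below are hypothetical placeholders; the study derived its actual cutoffs from receiver operating characteristic curve analysis.

```python
# Sketch of the NLR-PLR composite score: 2 if both the
# neutrophil-to-lymphocyte ratio (NLR) and the platelet-to-lymphocyte
# ratio (PLR) are elevated, 1 if only one is, 0 if neither is.
# The cutoffs here are hypothetical placeholders, not the study's values.

def nlr_plr_score(neutrophils, lymphocytes, platelets,
                  nlr_cutoff=2.5, plr_cutoff=120.0):
    """Return the 0/1/2 composite score from blood cell counts."""
    nlr = neutrophils / lymphocytes
    plr = platelets / lymphocytes
    return int(nlr > nlr_cutoff) + int(plr > plr_cutoff)

print(nlr_plr_score(4.0, 1.0, 250))  # both ratios elevated -> 2
print(nlr_plr_score(2.0, 1.0, 100))  # neither elevated -> 0
```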
Social cognition: thinking categorically about others. | In attempting to make sense of other people, perceivers regularly construct and use categorical representations to simplify and streamline the person perception process. Noting the importance of categorical thinking in everyday life, our emphasis in this chapter is on the cognitive dynamics of categorical social perception. In reviewing current research on this topic, three specific issues are addressed: (a) When are social categories activated by perceivers, (b) what are the typical consequences of category activation, and (c) can perceivers control the influence and expression of categorical thinking? Throughout the chapter, we consider how integrative models of cognitive functioning may inform our understanding of categorical social perception. |
Depth of Presence in Virtual Environments | This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. 
These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects. |
A comparative study of traditional and newly proposed features for recognition of speech under stress | It is well known that the performance of speech recognition algorithms degrades in the presence of adverse environments where a speaker is under stress, emotion, or Lombard effect. This study evaluates the effectiveness of traditional features in recognition of speech under stress and formulates new features which are shown to improve stressed speech recognition. The focus is on formulating robust features which are less dependent on the speaking conditions rather than applying compensation or adaptation techniques. The stressed speaking styles considered are simulated angry and loud, Lombard effect speech, and noisy actual stressed speech from the SUSAS database, which is available on CD-ROM through the NATO IST/TG-01 research group and the LDC. In addition, this study investigates the immunity of the linear prediction power spectrum and the fast Fourier transform power spectrum to the presence of stress. Our results show that, unlike the fast Fourier transform's (FFT) immunity to noise, the linear prediction power spectrum is more immune than the FFT to stress as well as to a combination of a noisy and stressful environment. Finally, the effects of various parameter processing choices, such as fixed versus variable preemphasis, liftering, and fixed versus cepstral mean normalization, are studied. Two alternative frequency partitioning methods are proposed and compared with traditional mel-frequency cepstral coefficient (MFCC) features for stressed speech recognition. It is shown that the alternate filterbank frequency partitions are more effective for recognition of speech under both simulated and actual stressed conditions. |
A comparative pH-dissolution profile study of selected commercial levothyroxine products using inductively coupled plasma mass spectrometry. | Levothyroxine (T4) is a narrow-therapeutic-index drug with classic bioequivalence problems between various available products. Dissolution of a drug is a crucial step in its oral absorption and bioavailability. The dissolution of T4 from three commercial solid oral dosage forms, Synthroid (SYN), generic levothyroxine sodium by Sandoz Inc. (GEN), and Tirosint (TIR), was studied using a sensitive ICP-MS assay. All three products showed variable and pH-dependent dissolution behavior. The absence of surfactant from the dissolution media decreased the percent T4 dissolved for all three products by 26-95% (at 30 min). SYN dissolution showed the most pH dependency, whereas GEN and TIR showed the fastest and highest dissolution, respectively. TIR was the most consistent, and was minimally affected by pH and/or the presence of surfactant. Furthermore, dissolution of T4 decreased considerably with increasing pH, which suggests a possible physical interaction in patients concurrently on T4 and gastric pH-altering drugs, such as proton pump inhibitors. Variable dissolution of T4 products can, therefore, impact the oral absorption and bioavailability of T4 and may result in bioequivalence problems between the various available products. |
Interventions for cough in cancer. | BACKGROUND
Cough is a common symptom in patients with malignancies, especially in patients with lung cancer. Cough is not well controlled in clinical practice and clinicians have few management options to treat it.
OBJECTIVES
The primary objective of this review was to determine the effectiveness of interventions, both pharmacological and non-pharmacological, (other than chemotherapy and external beam radiotherapy) in the management of cough in malignant disease (especially in lung cancer).
SEARCH STRATEGY
Databases searched included: The Cochrane Central Register of Controlled Trials (CENTRAL) and the Database of Abstracts of Reviews of Effectiveness (DARE) (The Cochrane Library issue 4, 2009); MEDLINE (1966 to May 2010); EMBASE (1980 to May 2010); CINAHL (1980 to May 2010); PSYCHINFO (1980 to May 2010); AMED (1985 to May 2010); SIGLE (1980 to May 2010); British Nursing Index (1985 to May 2010); CancerLit (1975 to May 2010). We searched for cough suppressants, antitussives and other drugs with antitussive activity as well as non-pharmacological interventions (see Appendices 1-4 for search terms).
SELECTION CRITERIA
We selected randomised controlled trials (RCTs) and clinical trials (quasi-experimental trials, and trials where there is a comparison group but no mention of randomisation) in participants with primary or metastatic lung cancer or other cancers.
DATA COLLECTION AND ANALYSIS
Two review authors independently assessed titles and abstracts of all studies, and extracted data from all selected studies before reaching consensus. A third review author arbitrated any disagreement. Meta-analysis was not attempted due to the heterogeneity of studies.
MAIN RESULTS
Seventeen studies met inclusion criteria and examined either brachytherapy, laser or photodynamic therapy (eight studies) or a variety of pharmacological therapies (nine studies). Overall, there was an absence of credible evidence, and the majority of studies were of low methodological quality and high risk of bias. Brachytherapy seemed to improve cough in a variety of doses in selected participants, suggesting that possibly the lowest effective dose should be used to minimise side effects. Photodynamic therapy was examined in one study, and while improvements in cough were observed, its role over other therapies for cough is unclear. Some indication of effect was observed with morphine, codeine, dihydrocodeine, levodropropizine, sodium cromoglycate and butamirate citrate linctus (cough syrup), although all studies had significant risk of bias.
AUTHORS' CONCLUSIONS
No practice recommendations could be drawn from this review. There is an urgent need to increase the number and quality of studies evaluating the effects of interventions in the management of cough in cancer. |
On Measure Theoretic definitions of Generalized Information Measures and Maximum Entropy Prescriptions | Though the Shannon entropy of a probability measure P, defined as −∫_X (dP/dμ) ln(dP/dμ) dμ on a measure space (X, M, μ), does not qualify itself as an information measure (it is not a natural extension of the discrete case), maximum entropy (ME) prescriptions in the measure-theoretic case are consistent with those of the discrete case. In this paper, we study the measure-theoretic definitions of generalized information measures and discuss the ME prescriptions. We present two results in this regard: (i) we prove that, as in the case of classical relative-entropy, the measure-theoretic definitions of the generalized relative-entropies, Rényi and Tsallis, are natural extensions of their respective discrete cases; (ii) we show that the ME prescriptions of measure-theoretic Tsallis entropy are consistent with the discrete case. |
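For concreteness, the generalized relative entropies discussed above can be written in measure-theoretic form. The display below is a sketch consistent with the standard definitions (the notation I(·‖·) and the parameters α, q are ours, not necessarily the paper's); the discrete cases are recovered when the reference measure is the counting measure.

```latex
% Sketch of the measure-theoretic relative entropies; P is a
% probability measure absolutely continuous w.r.t. a measure R.
\begin{align*}
  I(P\|R) &= \int_X \ln\frac{dP}{dR}\, dP
    && \text{(Kullback--Leibler)}\\
  I_\alpha(P\|R) &= \frac{1}{\alpha-1}
    \ln \int_X \left(\frac{dP}{dR}\right)^{\alpha-1} dP
    && \text{(R\'enyi, } \alpha \neq 1\text{)}\\
  I_q(P\|R) &= \frac{1}{q-1}
    \left(\int_X \left(\frac{dP}{dR}\right)^{q-1} dP - 1\right)
    && \text{(Tsallis, } q \neq 1\text{)}
\end{align*}
```

Both generalized quantities reduce to the Kullback-Leibler form in the limit α → 1 (respectively q → 1).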
Latent Tree Learning with Differentiable Parsers: Shift-Reduce Parsing and Chart Parsing | Latent tree learning models represent sentences by composing their words according to an induced parse tree, all based on a downstream task. These models often outperform baselines which use (externally provided) syntax trees to drive the composition order. This work contributes (a) a new latent tree learning model based on shift-reduce parsing, with competitive downstream performance and non-trivial induced trees, and (b) an analysis of the trees learned by our shift-reduce model and by a chart-based model. |
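As a toy illustration of the shift-reduce formulation mentioned above (not the authors' model: the learned decision policy, the vector composition function, and the downstream objective are all omitted), a fixed SHIFT/REDUCE sequence deterministically builds a binary tree over a sentence:

```python
# Toy shift-reduce tree builder. In latent tree learning, the
# SHIFT/REDUCE decisions would come from a learned policy and REDUCE
# would compose vector representations; here REDUCE just pairs the
# top two stack elements into a tuple.

def shift_reduce(words, actions):
    stack, buf = [], list(words)
    for act in actions:
        if act == "SHIFT":
            stack.append(buf.pop(0))      # move next word onto the stack
        else:                             # REDUCE: merge top two elements
            right, left = stack.pop(), stack.pop()
            stack.append((left, right))
    assert len(stack) == 1 and not buf    # a full parse consumes everything
    return stack[0]

tree = shift_reduce(["the", "cat", "sat"],
                    ["SHIFT", "SHIFT", "REDUCE", "SHIFT", "REDUCE"])
print(tree)  # (('the', 'cat'), 'sat')
```

For a sentence of n words, any full binary tree corresponds to exactly one sequence of n SHIFTs and n−1 REDUCEs, which is what makes the tree latent in the decision sequence.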
Development and validation of a cancer awareness questionnaire for Malaysian undergraduate students of Chinese ethnicity. | OBJECTIVES
To describe the development and validation of a cancer awareness questionnaire (CAQ) based on a literature review of previous studies, focusing on cancer awareness and prevention.
MATERIALS AND METHODS
A total of 388 Chinese undergraduate students in a private university in Kuala Lumpur, Malaysia, were recruited to evaluate the developed self-administered questionnaire. The CAQ consisted of four sections: awareness of cancer warning signs and screening tests; knowledge of cancer risk factors; barriers in seeking medical advice; and attitudes towards cancer and cancer prevention. The questionnaire was evaluated for construct validity using principal component analysis and internal consistency using Cronbach's alpha (α) coefficient. Test-retest reliability was assessed with a 10-14 days interval and measured using Pearson product-moment correlation.
RESULTS
The initial 77-item CAQ was reduced to 63 items, with satisfactory construct validity, and a high total internal consistency (Cronbach's α=0.77). A total of 143 students completed the questionnaire for the test-retest reliability obtaining a correlation of 0.72 (p<0.001) overall.
CONCLUSIONS
The CAQ could provide a reliable and valid measure that can be used to assess cancer awareness among local Chinese undergraduate students. However, further studies among students from different backgrounds (e.g. ethnicity) are required in order to facilitate the use of the cancer awareness questionnaire among all university students. |
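The reliability statistic reported above has a simple closed form; a minimal sketch follows, with invented item scores rather than the study's data.

```python
# Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance
# of respondent totals), where k is the number of items.
# The scores below are invented for illustration only.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one inner list of respondent scores per questionnaire item."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items)
                            / variance(totals))

scores = [[3, 4, 5, 4], [2, 4, 5, 3], [3, 5, 4, 4]]  # 3 items, 4 respondents
print(round(cronbach_alpha(scores), 2))  # -> 0.86
```

Test-retest reliability, the other statistic in the abstract, would instead correlate each respondent's total score across the two administrations (e.g. a Pearson correlation).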
The Evolving Tree—A Novel Self-Organizing Network for Data Analysis | The Self-Organizing Map (SOM) is one of the best known and most popular neural network-based data analysis tools. Many variants of the SOM have been proposed, like the Neural Gas by Martinetz and Schulten, the Growing Cell Structures by Fritzke, and the Tree-Structured SOM by Koikkalainen and Oja. The purpose of such variants is either to make a more flexible topology, suitable for complex data analysis problems, or to reduce the computational requirements of the SOM, especially the time-consuming search for the best-matching unit in large maps. We propose here a new variant called the Evolving Tree, which tries to combine both of these advantages. The nodes are arranged in a tree topology that is allowed to grow when any given branch receives a lot of hits from the training vectors. The search for the best-matching unit and its neighbors is conducted along the tree and is therefore very efficient. A comparison experiment with high-dimensional real-world data shows that the performance of the proposed method is better than that of some classical variants of the SOM. |
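The tree-descent search described above can be sketched minimally. This is not the Evolving Tree algorithm itself (the growth rule, node weights' training, and neighborhood update are omitted); it only shows why searching along a tree is cheap compared to scanning all units.

```python
# Best-matching-unit (BMU) search along a tree, as in tree-structured
# SOM variants: at each level, descend into the child whose weight
# vector is closest to the input, so the cost is O(branching * depth)
# instead of O(number of leaves).

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

class Node:
    def __init__(self, weight, children=()):
        self.weight, self.children = weight, list(children)

def find_bmu(node, x):
    while node.children:
        node = min(node.children, key=lambda c: sq_dist(c.weight, x))
    return node  # a leaf: the best-matching unit along this descent

leaf_a, leaf_b = Node([0.0, 0.0]), Node([1.0, 1.0])
root = Node([0.5, 0.5], [leaf_a, leaf_b])
print(find_bmu(root, [0.9, 0.8]) is leaf_b)  # True
```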
Achieving concentrated graphene dispersions in water/acetone mixtures by the strategy of tailoring Hansen solubility parameters | Although exfoliating graphite to give graphene paves a new way for graphene preparation, a general strategy combining low-boiling-point solvents with high graphene concentration is still highly desirable. In this study, using the strategy of tailoring Hansen solubility parameters (HSP), a method based on exfoliation of graphite in water/acetone mixtures is demonstrated to achieve concentrated graphene dispersions. It is found that, in the scope of blending two mediocre solvents, tailoring the HSP of water/acetone mixtures to approach the HSP of graphene could yield graphene dispersions at a high concentration of up to 0.21 mg ml−1. The experimentally determined optimum composition of the mixtures occurs at an acetone mass fraction of ∼75%. The trend of concentration varying with mixture composition is well predicted by the model, which relates the concentration to the mixing enthalpy within the scope of HSP theory. The resultant dispersion is highly stabilized. Atomic force microscopic statistical analysis shows that up to ∼50% of the prepared nanosheets are less than 1 nm thick after 4 h sonication and 114g centrifugation. Analyses based on diverse characterizations indicate the graphene sheets to be largely free of basal plane defects and oxidation. The filtered films are also investigated in terms of their electrical and optical properties to show reasonable conductivity and transparency. The strategy of tailoring HSP, which can be easily extended to various solvent systems, and water/acetone mixtures here, extends the scope for large-scale production of graphene in low-boiling-point solutions. |
Model compression via distillation and quantization | Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger networks, called "teachers," into compressed "student" networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher network, into the training of a smaller student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to state-of-the-art full-precision teacher models, while providing up to an order of magnitude compression, and inference speedup that is almost linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices. |
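A rough sketch of the two ingredients named above: uniform quantization of weights to a limited set of levels, and a distillation-style loss (cross-entropy of the student against the teacher's softened softmax). The paper's methods additionally backpropagate through these quantities during training, which is omitted here; all numeric choices below are illustrative.

```python
import math

# Uniform quantization to a fixed number of levels spanning the weight
# range, plus a distillation loss on softened logits. Standalone
# sketches only; the actual methods train the student through these.

def quantize(weights, num_levels=4):
    """Snap each weight to the nearest of num_levels evenly spaced values."""
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / (num_levels - 1)
    return [lo + round((w - lo) / step) * step for w in weights]

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of student soft predictions against teacher soft targets."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

print(quantize([0.0, 0.3, 0.55, 1.0], num_levels=3))  # [0.0, 0.5, 0.5, 1.0]
```

The loss is smallest when the student's soft predictions match the teacher's, which is what drives the quantized student toward the teacher's behavior.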
Collaborative Deep Reinforcement Learning for Multi-object Tracking | In this paper, we propose a collaborative deep reinforcement learning (C-DRL) method for multi-object tracking. Most existing multi-object tracking methods employ the tracking-by-detection strategy, which first detects objects in each frame and then associates them across different frames. However, the performance of these methods relies heavily on the detection results, which are usually unsatisfactory in many real applications, especially in crowded scenes. To address this, we develop a deep prediction-decision network in our C-DRL, which simultaneously detects and predicts objects under a unified network via deep reinforcement learning. Specifically, we consider each object as an agent and track it via the prediction network, and seek the optimal tracked results by exploiting the collaborative interactions of different agents and environments via the decision network. Experimental results on the challenging MOT15 and MOT16 benchmarks are presented to show the effectiveness of our approach. |
Efficient AUV navigation fusing acoustic ranging and side-scan sonar | This paper presents an on-line nonlinear least squares algorithm for multi-sensor autonomous underwater vehicle (AUV) navigation. The approach integrates the global constraints of range to and GPS position of a surface vehicle or buoy communicated via acoustic modems and relative pose constraints arising from targets detected in side-scan sonar images. The approach utilizes an efficient optimization algorithm, iSAM, which allows for consistent on-line estimation of the entire set of trajectory constraints. The optimized trajectory can then be used to more accurately navigate the AUV, to extend mission duration, and to avoid GPS surfacing. As iSAM provides efficient access to the marginal covariances of previously observed features, automatic data association is greatly simplified — particularly in sparse marine environments. A key feature of our approach is its intended scalability to single surface sensor (a vehicle or buoy) broadcasting its GPS position and simultaneous one-way travel time range (OWTT) to multiple AUVs. We discuss why our approach is scalable as well as robust to modem transmission failure. Results are provided for an ocean experiment using a Hydroid REMUS 100 AUV co-operating with one of two craft: an autonomous surface vehicle (ASV) and a manned support vessel. During these experiments the ranging portion of the algorithm ran online on-board the AUV. Extension of the paradigm to multiple missions via the optimization of successive survey missions (and the resultant sonar mosaics) is also demonstrated. |
Deep Supervised Hashing for Multi-Label and Large-Scale Image Retrieval | One of the most challenging tasks in large-scale multi-label image retrieval is to map images into binary codes while preserving multilevel semantic similarity. Recently, several deep supervised hashing methods have been proposed to learn hash functions that preserve multilevel semantic similarity with deep convolutional neural networks. However, these triplet label based methods try to preserve the ranking order of images according to their similarity degrees to the queries while not putting direct constraints on the distance between the codes of very similar images. Besides, the current evaluation criteria are not able to measure the performance of existing hashing methods on preserving fine-grained multilevel semantic similarity. To tackle these issues, we propose a novel Deep Multilevel Semantic Similarity Preserving Hashing (DMSSPH) method to learn compact similarity-preserving binary codes for the huge body of multi-label image data with deep convolutional neural networks. In our approach, we make the best of the supervised information in the form of pairwise labels to maximize the discriminability of output binary codes. Extensive evaluations conducted on several benchmark datasets demonstrate that the proposed method significantly outperforms the state-of-the-art supervised and unsupervised hashing methods at the accuracies of top returned images, especially for shorter binary codes. Meanwhile, the proposed method shows better performance on preserving fine-grained multilevel semantic similarity according to the results under the Jaccard coefficient based evaluation criteria we propose. |
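The Jaccard coefficient underlying the proposed evaluation criteria is simply the label-set overlap ratio; a minimal sketch of how it grades multilevel semantic similarity between two multi-label images (the label sets below are made up):

```python
# Jaccard coefficient between the label sets of two images: size of
# the intersection over size of the union. A graded (multilevel)
# notion of semantic similarity for multi-label data, rather than a
# binary similar/dissimilar decision.

def jaccard(labels_a, labels_b):
    a, b = set(labels_a), set(labels_b)
    return len(a & b) / len(a | b) if a | b else 0.0

print(jaccard({"cat", "grass"}, {"cat", "dog", "grass"}))  # 2/3 ~ 0.667
print(jaccard({"cat"}, {"airplane"}))                      # 0.0
```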
Treating fibromyalgia with mindfulness-based stress reduction: Results from a 3-armed randomized controlled trial | Mindfulness-based stress reduction (MBSR) is a structured 8-week group program teaching mindfulness meditation and mindful yoga exercises. MBSR aims to help participants develop nonjudgmental awareness of moment-to-moment experience. Fibromyalgia is a clinical syndrome with chronic pain, fatigue, and insomnia as major symptoms. Efficacy of MBSR for enhanced well-being of fibromyalgia patients was investigated in a 3-armed trial, which was a follow-up to an earlier quasi-randomized investigation. A total of 177 female patients were randomized to one of the following: (1) MBSR, (2) an active control procedure controlling for nonspecific effects of MBSR, or (3) a wait list. The major outcome was health-related quality of life (HRQoL) 2 months post-treatment. Secondary outcomes were disorder-specific quality of life, depression, pain, anxiety, somatic complaints, and a proposed index of mindfulness. Of the patients, 82% completed the study. There were no significant differences between groups on primary outcome, but patients overall improved in HRQoL at short-term follow-up (P=0.004). Post hoc analyses showed that only MBSR manifested a significant pre-to-post-intervention improvement in HRQoL (P=0.02). Furthermore, multivariate analysis of secondary measures indicated modest benefits for MBSR patients. MBSR yielded significant pre-to-post-intervention improvements in 6 of 8 secondary outcome variables, the active control in 3, and the wait list in 2. In conclusion, primary outcome analyses did not support the efficacy of MBSR in fibromyalgia, although patients in the MBSR arm appeared to benefit most. Effect sizes were small compared to the earlier, quasi-randomized investigation. Several methodological aspects are discussed, e.g., patient burden, treatment preference and motivation, that may provide explanations for differences. 
In a 3-armed randomized controlled trial in female patients suffering from fibromyalgia, patients benefited modestly from a mindfulness-based stress reduction intervention. |
Robust Matrix Elastic Net based Canonical Correlation Analysis: An Effective Algorithm for Multi-View Unsupervised Learning | This paper presents a robust matrix elastic net based canonical correlation analysis (RMEN-CCA) for multi-view unsupervised learning problems, which combines CCA with the robust matrix elastic net (RMEN) used for coupled feature selection. The RMEN-CCA leverages the strength of the RMEN to distill naturally meaningful features without any prior assumption and to effectively measure correlations between different 'views'. The RMEN-CCA can further be extended directly to the kernel scenario with theoretical guarantees, taking advantage of the kernel trick for highly complicated nonlinear feature learning. Rather than simply incorporating existing regularization terms into CCA, this paper provides a new learning paradigm for CCA and is the first to derive a coupled feature selection based CCA algorithm that guarantees convergence. More significantly, the newly derived RMEN-CCA bridges the gap between measurement of relevance and coupled feature selection. Moreover, it is nontrivial to tackle the RMEN-CCA directly with previous optimization approaches, owing to its sophisticated model architecture. Therefore, this paper further offers a bridge between the new optimization problem and an existing efficient iterative approach. As a consequence, the RMEN-CCA can overcome the limitations of CCA and address large-scale and streaming data problems. Experimental results on four popular competing datasets illustrate that the RMEN-CCA performs more effectively and efficiently than state-of-the-art approaches. |
Pneumatic muscle actuators within robotic and mechatronic systems | Pneumatic artificial muscles (PAMs) as soft, lightweight and compliant actuators have great potential in applications for the actuation of new types of robots and manipulators. The favourable characteristics of fluidic muscles, such as a high power-to-weight ratio and safe interaction with humans, are also very suitable for the musculoskeletal rehabilitation of patients and are often used in making artificial orthoses. This technology, despite the control problems relating to nonlinear phenomena, may also have wide future applications within industrial and mechatronic systems. This paper presents several experimental systems actuated by PAMs, which have been designed as test models within the fields of mobile robots, mechatronics, fluid power systems and the feedback control education of mechanical engineering students. The paper first presents the design and construction of a four-legged walking robot actuated by pneumatic muscles. The robot is a fully autonomous system with a wireless application platform and can be controlled using a cell phone. The paper then describes the design and construction of a prototype actively powered ankle-foot orthosis. This orthosis device, actuated by a single PAM, is able to provide the functions required during the rehabilitation of patients with loss of mobility. The paper then focuses on the design and control of a ball-and-beam system with an antagonistic muscle pair generating the torque needed for beam rotation. This mechatronic balancing mechanism falls into the category of unstable, under-actuated, multivariable systems with highly nonlinear dynamics. The final section of the article presents the design and control of a single-joint manipulator arm with pneumatic muscle actuators that enable some new features for the controlled systems. Fluid Power 2015 176
Coordination and Geometric Optimization via Distributed Dynamical Systems | Emerging applications for networked and cooperative robots motivate the study of motion coordination for groups of agents. For example, it is envisioned that groups of agents will perform a variety of useful tasks including surveillance, exploration, and environmental monitoring. This paper deals with basic interactions among mobile agents such as “move away from the closest other agent” or “move toward the furthest vertex of your own Voronoi polygon.” These simple interactions amount to distributed dynamical systems because their implementation requires only minimal information about neighboring agents. We characterize the close relationship between these distributed dynamical systems and the disk-covering and sphere-packing cost functions from geometric optimization. Our main results are: (i) we characterize the smoothness properties of these geometric cost functions, (ii) we show that the interaction laws are variations of the nonsmooth gradient of the cost functions, and (iii) we establish various asymptotic convergence properties of the laws. The technical approach relies on concepts from computational geometry, nonsmooth analysis, and nonsmooth stability theory. |
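The "move away from the closest other agent" interaction described above can be sketched as a discrete-time update in a few lines of Python. This is an illustrative simplification (synchronous rounds, a fixed step size, no Voronoi computation), not the paper's nonsmooth-gradient flow:

```python
import math

def step_away_from_closest(positions, step=0.05):
    """One synchronous round: each agent moves a small fixed step directly
    away from its nearest neighbour, using only local information."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Find the closest other agent.
        nx, ny = min((p for j, p in enumerate(positions) if j != i),
                     key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
        d = math.hypot(x - nx, y - ny)
        if d == 0:
            new_positions.append((x, y))  # coincident agents stay put
        else:
            new_positions.append((x + step * (x - nx) / d,
                                  y + step * (y - ny) / d))
    return new_positions

def min_dist(ps):
    """Minimum pairwise distance (the sphere-packing quantity of interest)."""
    return min(math.hypot(a[0] - b[0], a[1] - b[1])
               for i, a in enumerate(ps) for b in ps[i + 1:])

# In open space, a few rounds should grow the minimum inter-agent distance.
pts = [(0.0, 0.0), (0.1, 0.0), (0.5, 0.4)]
for _ in range(10):
    pts = step_away_from_closest(pts)
```

In the paper the analogous continuous-time laws are tied to nonsmooth gradients of the sphere-packing cost; this sketch only conveys the locality of the interaction.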
Effects of adenosine on human coronary arterial circulation. | Adenosine is a potent vasodilator used extensively to study the coronary circulation of animals. Its use in humans, however, has been hampered by lack of knowledge about its effects on the human coronary circulation and by concern about its safety. We investigated in humans the effects of adenosine, administered by intracoronary bolus (2-16 micrograms), intracoronary infusion (10-240 micrograms/min), or intravenous infusion (35-140 micrograms/kg/min) on coronary and systemic hemodynamics and the electrocardiogram. Coronary blood flow velocity (CBFV) was measured with a 3F coronary Doppler catheter. The maximal CBFV was determined with intracoronary papaverine (4.5 ± 0.2 × resting CBFV). In normal left coronary arteries (n = 20), 16-micrograms boluses of adenosine caused coronary hyperemia similar to that caused by papaverine (4.6 ± 0.7 × resting CBFV). In the right coronary artery (n = 5), 12-micrograms boluses caused maximal hyperemia (4.4 ± 1.0 × resting CBFV). Intracoronary boluses caused a small, brief decrease in arterial pressure (similar to that caused by papaverine) and no changes in heart rate or in the electrocardiogram. The duration of hyperemia was much shorter after adenosine than after papaverine administration. Intracoronary infusions of 80 micrograms/min or more into the left coronary artery (n = 6) also caused maximal hyperemia (4.4 ± 0.1 × resting CBFV), and doses up to 240 micrograms/min caused a minimal decrease in arterial pressure (-6 ± 2 mm Hg) and no significant change in heart rate or in electrocardiographic variables. Intravenous infusions in normal patients (n = 25) at 140 micrograms/kg/min caused coronary vasodilation similar to that caused by papaverine in 84% of patients (4.4 ± 0.9 × resting CBFV). At submaximal infusion rates, however, CBFV often fluctuated widely.
During the 140-micrograms/kg/min infusion, arterial pressure decreased 6 +/- 7 mm Hg, and heart rate increased 24 +/- 14 beats/min. One patient developed 1 cycle of 2:1 atrioventricular block, but otherwise, the electrocardiogram did not change. In eight patients with microvascular vasodilator dysfunction (delta CBFV, less than 3.5 peak/resting velocity after a maximally vasodilating dose of intracoronary papaverine), the dose-response characteristics to intracoronary boluses and intravenous infusions of adenosine were similar to those found in normal patients.(ABSTRACT TRUNCATED AT 400 WORDS) |
A Survey of Text Classification Algorithms | The problem of classification has been widely studied in the data mining, machine learning, database, and information retrieval communities, with applications in a number of diverse domains, such as target marketing, medical diagnosis, news group filtering, and document organization. In this paper we provide a survey of a wide variety of text classification algorithms.
Deriving common malware behavior through graph clustering | Detection of malicious software (malware) continues to be a problem as hackers devise new ways to evade available methods. The proliferation of malware and malware variants requires methods that are both powerful and fast to execute. This paper proposes a method to derive the common execution behavior of a family of malware instances. For each instance, a graph is constructed that represents kernel objects and their attributes, based on system call traces. The method combines these graphs to develop a supergraph for the family. This supergraph contains a subgraph, called the HotPath, which is observed during the execution of all the malware instances. The proposed method is scalable, identifies previously unseen malware instances, and achieves high malware detection rates with false-positive rates close to 0%.
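The supergraph idea can be illustrated with a toy sketch in Python. Here each instance's behavior graph is reduced to a plain set of (object, action, object) edges; the union of all sets stands in for the family supergraph, and the edges observed in every instance stand in for the HotPath. The real method builds and clusters weighted graphs from system-call traces, so this is only a minimal stand-in under those simplifying assumptions:

```python
def family_supergraph(instance_graphs):
    """Combine per-instance behavior graphs (sets of directed edges
    between kernel-object nodes) into a family supergraph, and extract
    the 'HotPath' stand-in: edges present in *every* instance."""
    supergraph = set().union(*instance_graphs)   # union of all observed edges
    hotpath = set(instance_graphs[0]).intersection(*instance_graphs[1:])
    return supergraph, hotpath

# Hypothetical traces: each edge is (source object, action, target object).
g1 = {("proc", "create", "fileA"), ("proc", "write", "reg1"), ("proc", "open", "net")}
g2 = {("proc", "create", "fileA"), ("proc", "write", "reg1"), ("proc", "sleep", "timer")}
g3 = {("proc", "create", "fileA"), ("proc", "write", "reg1")}

sg, hp = family_supergraph([g1, g2, g3])
# hp now holds the behavior shared by all three instances; an unseen
# instance whose trace contains hp would be flagged as a family member.
```

A new trace can then be matched against the family simply by testing whether the HotPath edges are a subset of its edge set.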
High-efficiency solution-processed perovskite solar cells with millimeter-scale grains | State-of-the-art photovoltaics use high-purity, large-area, wafer-scale single-crystalline semiconductors grown by sophisticated, high-temperature crystal growth processes. We demonstrate a solution-based hot-casting technique to grow continuous, pinhole-free thin films of organometallic perovskites with millimeter-scale crystalline grains. We fabricated planar solar cells with efficiencies approaching 18%, with little cell-to-cell variability. The devices show hysteresis-free photovoltaic response, which had been a fundamental bottleneck for the stable operation of perovskite devices. Characterization and modeling attribute the improved performance to reduced bulk defects and improved charge carrier mobility in large-grain devices. We anticipate that this technique will lead the field toward synthesis of wafer-scale crystalline perovskites, necessary for the fabrication of high-efficiency solar cells, and will be applicable to several other material systems plagued by polydispersity, defects, and grain boundary recombination in solution-processed thin films. |
Towards noncommutative gravity | In this short article, accessible to non-experts, I discuss possible ways of constructing a noncommutative gravity, paying special attention to possibilities of realizing the full diffeomorphism symmetry and to relations with 2D gravities.
Chemically Aware Model Builder (camb): an R package for property and bioactivity modelling of small molecules | BACKGROUND
In silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process.
RESULTS
camb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint-type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole-protein sequence descriptors, filtering methods for feature selection, generation of predictive models (via an interface to the R package caret), as well as the creation of model ensembles (via the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2).
CONCLUSIONS
Overall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed up model generation and to provide reproducibility and robustness tests. QSPR and proteochemometric case studies are included which demonstrate camb's application. Graphical abstract: "From compounds and data to models: a complete model-building workflow in one package." |
QCD Constraints on Form Factor Shapes | This talk presents an introduction to the use of dispersion relations to constrain the shapes of hadronic form factors consistent with QCD. The applications described include methods for studying |V_{cb}| and |V_{ub}|, the strange quark mass, and the pion charge radius. |
The Influences of Advertising Endorser, Brand Image, Brand Equity, and Price Promotion on Purchase Intention: The Mediating Effect of Advertising Endorser | Advertising endorsers play a key role in information transmission between manufacturers and consumers. Their purpose is to draw consumers' attention and interest in order to achieve the object of communication with consumers. This research mainly discusses the influences of endorser, brand image, brand equity, and price promotion on purchase intention, and the results of the study are as follows: (1) brand equity has a significant influence on endorser, (2) brand image has a significant influence on endorser, (3) endorsers have a significant influence on purchase intention, (4) price promotion has a significant influence on brand equity, (5) price promotion has a significant influence on purchase intention, (6) advertising endorser mediates the relationship between brand image and purchase intention, and (7) advertising endorser mediates the relationship between brand equity and purchase intention.
RemoTouch: A system for remote touch experience | This paper presents some preliminary results on RemoTouch, a system that enables experiences of remote touch. The system consists of an avatar equipped with an instrumented glove and a user wearing tactile displays that allow the user to feel the remote tactile interaction. The main features of RemoTouch are that it is a wearable system and that a human avatar is used to collect remote tactile interaction data. New paradigms of tactile communication can be designed around the RemoTouch system. Two simple experiences are reported to show the potential of the proposed remote touch architecture.
Style Transfer Via Texture Synthesis | Style transfer is a process of migrating a style from a given image to the content of another, synthesizing a new image, which is an artistic mixture of the two. Recent work on this problem adopting convolutional neural-networks (CNN) ignited a renewed interest in this field, due to the very impressive results obtained. There exists an alternative path toward handling the style transfer task, via the generalization of texture synthesis algorithms. This approach has been proposed over the years, but its results are typically less impressive compared with the CNN ones. In this paper, we propose a novel style transfer algorithm that extends the texture synthesis work of Kwatra et al. (2005), while aiming to get stylized images that are closer in quality to the CNN ones. We modify Kwatra’s algorithm in several key ways in order to achieve the desired transfer, with emphasis on a consistent way for keeping the content intact in selected regions, while producing hallucinated and rich style in others. The results obtained are visually pleasing and diverse, shown to be competitive with the recent CNN style transfer algorithms. The proposed algorithm is fast and flexible, being able to process any pair of content + style images. |
Depolarization of the Client's Moral Position as a Method of Psychological Consultation | In an earlier article we attempted to construct a universal psychotherapeutic theory. On the basis of an analysis of the works of Freud (1923), Carl Rogers (1965), and V. Frankel (1990), we distinguished some universal principles in explaining the mechanisms of formation of neurotic symptoms and formulated, in a more general way, the principal goal of psychotherapy of neuroses (Kapustin, 1993). We also showed that these principles can be used to explain the mechanisms of a broader range of clients' problems in psychological consultation (Kapustin, 1994). We regarded the purpose of a psychological consultation for this range of problems as a somewhat universal one, independent of the content of the problems or of the symptoms accompanying their mental disorders. Further development of our theory will inevitably require the development of a technique of consultation that fits the language and the concepts of the given theory. |
Realizing Autonomous Valet Parking with automotive grade sensors | The availability of several Advanced Driver Assistance Systems has put a correspondingly large number of inexpensive, yet capable sensors on production vehicles. By combining this reality with expertise from the DARPA Grand and Urban Challenges in building autonomous driving platforms, we were able to design and develop an Autonomous Valet Parking (AVP) system on a 2006 Volkswagen Passat Wagon TDI using automotive grade sensors. AVP provides the driver with both convenience and safety benefits - the driver can leave the vehicle at the entrance of a parking garage, allowing the vehicle to navigate the structure, find a spot, and park itself. By leveraging existing software modules from the DARPA Urban Challenge, our efforts focused on developing a parking spot detector, a localization system that did not use GPS, and a back-in parking planner. This paper focuses on describing the design and development of the last two modules.
A new visual search interface for web browsing | We introduce a new visual search interface for search engines. The interface is a user-friendly and informative graphical front-end for organizing and presenting search results in the form of topic groups. Such a semantics-oriented presentation of search results contrasts with conventional search interfaces, which present results according to the physical structure of the information. Given a user query, our interface first retrieves relevant online materials via a third-party search engine. We then analyze the semantics of the search results to detect latent topics in the result set. Once the topics are detected, we map the search result pages into topic clusters. According to the topic clustering result, we divide the available screen space of our visual interface into multiple topic-displaying regions, one for each topic. For each topic's displaying region, we summarize the information contained in the search results under the corresponding topic so that only key messages are displayed. With this new visual search interface, the key information in the search results is conveyed to users expediently. With the key information, users can navigate to the final, desired results with less effort and time than with conventional searching. Supplementary materials for this paper are available at http://www.cs.hku.hk/~songhua/visualsearch/.
Neurodegenerative Diseases Target Large-Scale Human Brain Networks | During development, the healthy human brain constructs a host of large-scale, distributed, function-critical neural networks. Neurodegenerative diseases have been thought to target these systems, but this hypothesis has not been systematically tested in living humans. We used network-sensitive neuroimaging methods to show that five different neurodegenerative syndromes cause circumscribed atrophy within five distinct, healthy, human intrinsic functional connectivity networks. We further discovered a direct link between intrinsic connectivity and gray matter structure. Across healthy individuals, nodes within each functional network exhibited tightly correlated gray matter volumes. The findings suggest that human neural networks can be defined by synchronous baseline activity, a unified corticotrophic fate, and selective vulnerability to neurodegenerative illness. Future studies may clarify how these complex systems are assembled during development and undermined by disease. |
Towards Understanding the Influence of a Virtual Agent's Emotional Expression on Personal Space | The concept of personal space is a key element of social interactions. As such, it is a recurring subject of investigations in the context of research on proxemics. Using virtual-reality-based experiments, we contribute to this area by evaluating the direct effects of emotional expressions of an approaching virtual agent on an individual's behavioral and physiological responses. As a pilot study focusing on the emotion expressed solely by facial expressions gave promising results, we now present a study design to gain more insight.
Multistrategy ensemble learning: reducing error by combining ensemble learning techniques | Ensemble learning strategies, especially boosting and bagging decision trees, have demonstrated impressive capacities to improve the prediction accuracy of base learning algorithms. Further gains have been demonstrated by strategies that combine simple ensemble formation approaches. We investigate the hypothesis that the improvement in accuracy of multistrategy approaches to ensemble learning is due to an increase in the diversity of ensemble members that are formed. In addition, guided by this hypothesis, we develop three new multistrategy ensemble learning techniques. Experimental results in a wide variety of natural domains suggest that these multistrategy ensemble learning techniques are, on average, more accurate than their component ensemble learning techniques. |
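The diversity hypothesis above can be made concrete with a common (though not necessarily the paper's own) diversity measure: the mean pairwise disagreement among ensemble members' predictions. A minimal pure-Python sketch:

```python
from itertools import combinations

def pairwise_disagreement(member_predictions):
    """Mean pairwise disagreement among ensemble members: the fraction of
    examples on which two members predict differently, averaged over all
    member pairs. Higher values indicate a more diverse ensemble."""
    pairs = list(combinations(member_predictions, 2))
    total = 0.0
    for a, b in pairs:
        total += sum(x != y for x, y in zip(a, b)) / len(a)
    return total / len(pairs)

# Three hypothetical members voting on five examples.
preds = [
    [1, 0, 1, 1, 0],   # member 1
    [1, 1, 1, 0, 0],   # member 2
    [0, 0, 1, 1, 0],   # member 3
]
diversity = pairwise_disagreement(preds)   # 0.4 for this toy example
```

Under the hypothesis investigated in the paper, a multistrategy ensemble (e.g., bagging applied on top of boosting) would score higher on such a measure than either component strategy alone, which is what would drive its accuracy gains.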
Efficient Publish/Subscribe Through a Self-Organizing Broker Overlay and its Application to SIENA | Recently many scalable and efficient solutions for event dissemination in publish/subscribe (pub/sub) systems have appeared in the literature. This dissemination is usually done over an overlay network of brokers, and its cost can be measured as the number of messages sent over the overlay to allow an event to reach all intended subscribers. Efficient solutions to this problem are often obtained through smart dissemination algorithms that avoid flooding events on the overlay. In this paper we propose a complementary approach, namely obtaining efficient event dissemination by reorganizing the overlay network topology. More specifically, this reorganization is done through a self-organizing algorithm executed by brokers, whose aim is to directly connect, through overlay links, pairs of brokers matching the same events. In this way the number of brokers involved in an event dissemination decreases on average, thus reducing its cost. Even though the paradigm of the self-organizing algorithm is general, and thus applicable to any overlay-based pub/sub system, its concrete implementation depends on the specific system. As a consequence, we studied the effect of introducing the self-organizing algorithm in the context of a specific system implementing a tree-based routing strategy, namely SIENA, showing the actual performance benefits through an extensive simulation study. In particular, the performance results point out the capacity of the algorithm to converge to an overlay topology accommodating efficient event dissemination with respect to a specific scenario. Moreover, the algorithm shows a significant capacity to adapt the overlay network topology to continuously changing scenarios while keeping an efficient behavior with respect to event dissemination.
Precision Dairy Farming: Advanced Analysis Solutions for Future Profitability | Precision Dairy Farming is the use of technologies to measure physiological, behavioral, and production indicators on individual animals to improve management strategies and farm performance. Many Precision Dairy Farming technologies, including daily milk yield recording, milk component monitoring, pedometers, automatic temperature recording devices, milk conductivity indicators, automatic estrus detection monitors, and daily body weight measurements, are already being utilized by dairy producers. Other theoretical Precision Dairy Farming technologies have been proposed to measure jaw movements, ruminal pH, reticular contractions, heart rate, animal positioning and activity, vaginal mucus electrical resistance, feeding behavior, lying behavior, odor, glucose, acoustics, progesterone, individual milk components, color (as an indicator of cleanliness), infrared udder surface temperatures, and respiration rates. The main objectives of Precision Dairy Farming are maximizing individual animal potential, early detection of disease, and minimizing the use of medication through preventive health measures. Perceived benefits of Precision Dairy Farming technologies include increased efficiency, reduced costs, improved product quality, minimized adverse environmental impacts, and improved animal health and well-being. Real time data used for monitoring animals may be incorporated into decision support systems designed to facilitate decision making for issues that require compilation of multiple sources of data. Technologies for physiological monitoring of dairy cows have great potential to supplement the observational activities of skilled herdspersons, which is especially critical as more cows are managed by fewer skilled workers. 
Moreover, data provided by these technologies may be incorporated into genetic evaluations for non-production traits aimed at improving animal health, well-being, and longevity. The economic implications of technology adoption must be explored further to increase adoption rates of Precision Dairy Farming technologies. Precision Dairy Farming may prove to be the next important technological breakthrough for the dairy industry. |
A survey of point-based POMDP solvers | The past decade has seen a significant breakthrough in research on solving partially observable Markov decision processes (POMDPs). Where past solvers could not scale beyond perhaps a dozen states, modern solvers can handle complex domains with many thousands of states. This breakthrough was mainly due to the idea of restricting value function computations to a finite subset of the belief space, permitting only local value updates for this subset. This approach, known as point-based value iteration, avoids the exponential growth of the value function, and is thus applicable for domains with longer horizons, even with relatively large state spaces. Many extensions were suggested to this basic idea, focusing on various aspects of the algorithm—mainly the selection of the belief space subset, and the order of value function updates. In this survey, we walk the reader through the fundamentals of point-based value iteration, explaining the main concepts and ideas. Then, we survey the major extensions to the basic algorithm, discussing their merits. Finally, we include an extensive empirical analysis using well known benchmarks, in order to shed light on the strengths and limitations of the various approaches. |
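The core operation surveyed above is the point-based Bellman backup, which computes the best alpha-vector at a given belief point and restricts all value updates to a finite set of beliefs. The following pure-Python sketch runs that backup on a small two-state POMDP; the model numbers are made up for illustration and do not come from any benchmark:

```python
# Toy two-state POMDP: states, actions, observations, discount.
S = [0, 1]
A = ["stay", "switch"]
O = ["obs0", "obs1"]
gamma = 0.9
R = {"stay": [1.0, 0.0], "switch": [0.0, 1.0]}          # R[a][s]
T = {"stay":   [[0.9, 0.1], [0.1, 0.9]],                # T[a][s][s']
     "switch": [[0.1, 0.9], [0.9, 0.1]]}
Z = {"stay":   [[0.8, 0.2], [0.2, 0.8]],                # Z[a][s'][o]
     "switch": [[0.8, 0.2], [0.2, 0.8]]}

def backup(b, Gamma):
    """Point-based Bellman backup at belief b against the current
    alpha-vector set Gamma; returns the best new alpha-vector at b."""
    best_alpha, best_val = None, float("-inf")
    for a in A:
        alpha_a = list(R[a])                     # start from immediate reward
        for oi, _ in enumerate(O):
            # Back-project each alpha-vector through (a, o), keep the one
            # that is best at this particular belief point.
            def g(alpha):
                return [sum(T[a][s][sp] * Z[a][sp][oi] * alpha[sp]
                            for sp in S) for s in S]
            branch = max((g(al) for al in Gamma),
                         key=lambda v: sum(b[s] * v[s] for s in S))
            for s in S:
                alpha_a[s] += gamma * branch[s]
        val = sum(b[s] * alpha_a[s] for s in S)
        if val > best_val:
            best_alpha, best_val = alpha_a, val
    return best_alpha

Gamma = [[0.0, 0.0]]                              # zero initial value function
beliefs = [[0.5, 0.5], [0.9, 0.1], [0.1, 0.9]]    # the finite belief subset
for _ in range(20):                               # local updates only
    Gamma = [backup(b, Gamma) for b in beliefs]
```

The extensions discussed in the survey differ mainly in how `beliefs` is collected and in what order the backups are applied; the backup itself stays as above.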
Reliability and validity of the Evaluation Tool of Children's Handwriting-Cursive (ETCH-C) using the general scoring criteria. | OBJECTIVES
To determine the reliability and aspects of validity of the Evaluation Tool of Children's Handwriting-Cursive (ETCH-C; Amundson, 1995), using the general scoring criteria, when assessing children who use alternative writing scripts.
METHOD
Children in Years 5 and 6 with handwriting problems and a group of matched control participants from their respective classrooms were assessed with the ETCH-C twice, 4 weeks apart.
RESULTS
Total Letter scores were most reliable; more variability should be expected for Total Word scores. Total Numeral scores showed unacceptable reliability levels and are not recommended. We found good discriminant validity for Letter and Word scores and established cutoff scores to distinguish children with and without handwriting dysfunction (Total Letter <90%, Total Word <85%).
CONCLUSION
The ETCH-C, using the general scoring criteria, is a reliable and valid test of handwriting for children using alternative scripts. |
Computerized analysis of shadowing on breast ultrasound for improved lesion detection. | Sonography is being considered for the screening of women at high risk for breast cancer. We are developing computerized detection methods to aid in the localization of lesions on breast ultrasound images. The detection scheme presented here is based on the analysis of posterior acoustic shadowing, since posterior acoustic shadowing is observed for many malignant lesions. The method uses a nonlinear filtering technique based on the skewness of the gray level distribution within a kernel of image data. The database used in this study included 400 breast ultrasound cases (757 images) consisting of complicated cysts, solid benign lesions, and malignant lesions. At a false-positive rate of 0.25 false positives per image, a detection sensitivity of 80% by case (66% by image) was achieved for malignant lesions. The performance for the overall database (at 0.25 false positives per image) was less at 42% sensitivity by case (30% by image) due to the more limited presence of posterior acoustic shadowing for benign solid lesions and the presence of posterior acoustic enhancement for cysts. Our computerized method for the detection of lesion shadows alerts radiologists to lesions that exhibit posterior acoustic shadowing. While this is not a characterization method, its performance is best for lesions that exhibit posterior acoustic shadowing such as malignant and, to a lesser extent, benign solid lesions. This method, in combination with other computerized sonographic detection methods, may ultimately help facilitate the use of ultrasound for breast cancer screening. |
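The skewness-based filtering idea can be sketched in plain Python: each pixel is replaced by the sample skewness of the gray levels in a surrounding kernel, so that strongly asymmetric local distributions (as at the edge of a posterior shadow) stand out. The kernel size, normalization, and border handling here are illustrative assumptions, not the paper's exact choices:

```python
def skewness(values):
    """Sample skewness of the gray levels in a kernel window; returns
    0.0 for (near-)constant windows to avoid dividing by zero."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    if var < 1e-12:
        return 0.0
    m3 = sum((v - mean) ** 3 for v in values) / n
    return m3 / var ** 1.5

def skewness_filter(image, k=3):
    """Nonlinear filter: replace each pixel (away from the border) with
    the skewness of the k-by-k window of gray levels around it."""
    h, w, r = len(image), len(image[0]), k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = [image[i + di][j + dj]
                      for di in range(-r, r + 1) for dj in range(-r, r + 1)]
            out[i][j] = skewness(window)
    return out
```

A window dominated by bright pixels with a few dark (shadowed) ones yields negative skewness, and vice versa; thresholding the filtered output gives candidate shadow regions.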
Measuring Enjoyment of Computer Game Play | This paper reports on the development of an instrument designed to measure the enjoyment of computer game play. Despite the enormous technological progress in the field of computer games, enjoyment of computer game play is still not a well-defined construct. Based on Nabi and Krcmar's (2004) tripartite model of media enjoyment, a survey questionnaire was developed to measure computer game players' affective, behavioral, and cognitive reactions. Expert consultation and exploratory and confirmatory card sorting sessions were used to refine the instrument. A survey of computer game players was subsequently conducted to test the instrument. Reliabilities and construct validities were analyzed. Findings and their implications are discussed.
Development of piezoelectric micromachined ultrasonic transducers | Piezoelectric micromachined ultrasonic transducers (pMUTs) are an example of the application of MEMS technology to ultrasound generation and detection, which is expected to offer many advantages over conventional transducers. In this work, we investigate pMUTs through novel design and fabrication methods. A finite element (FE) model, with original tools to measure device performance, was developed to design and optimize pMUTs. A pMUT for the operating range of 2–10 MHz in water and having maximized energy coupling coefficient was modeled, designed, fabricated, and tested for its resonance frequency and coupling coefficient. The model predictions for the resonance frequency were in excellent agreement with the measured values, but not as good for the coupling coefficient due to the variability in the measured coupling coefficient. Compared to conventional ultrasonic transducers, pMUTs exhibit superior bandwidth, in excess of 100%, and offer considerable design flexibility, which allows their operation frequency and acoustic impedance to be tailored for numerous applications. © 2003 Elsevier B.V. All rights reserved. |
Applications of artificial neural networks for ECG signal detection and classification. | The authors have investigated potential applications of artificial neural networks for electrocardiographic QRS detection and beat classification. For the task of QRS detection, the authors used an adaptive multilayer perceptron structure to model the nonlinear background noise so as to enhance the QRS complex. This provided more reliable detection of QRS complexes even in a noisy environment. For electrocardiographic QRS complex pattern classification, an artificial neural network adaptive multilayer perceptron was used as a pattern classifier to distinguish between normal and abnormal beat patterns, as well as to classify 12 different abnormal beat morphologies. Preliminary results using the MIT/BIH (Massachusetts Institute of Technology/Beth Israel Hospital, Cambridge, MA) arrhythmia database are encouraging. |
Exosomal miR-21-5p derived from gastric cancer promotes peritoneal metastasis via mesothelial-to-mesenchymal transition | Peritoneal metastasis is a primary metastatic route for gastric cancers, and the mechanisms underlying this process are still unclear. Peritoneal mesothelial cells (PMCs) undergo mesothelial-to-mesenchymal transition (MMT) to provide a favorable environment for metastatic cancer cells. In this study, we investigated how exosomal miR-21-5p induces MMT and promotes peritoneal metastasis. Gastric cancer (GC)-derived exosomes were identified by transmission electron microscopy and western blot analysis, and the uptake of exosomes was confirmed by PKH-67 staining. The expression of miR-21-5p and SMAD7 was measured by quantitative real-time polymerase chain reaction (qRT-PCR) and western blot, and the interaction between miR-21-5p and its target gene SMAD7 was confirmed by luciferase reporter assays. The MMT of PMCs was determined by invasion assays, adhesion assays, immunofluorescence assay, and western blot. Meanwhile, a mouse model of tumor peritoneal dissemination was used to investigate the role of exosomal miR-21-5p in peritoneal metastasis in vivo. We found that PMCs could internalize GC-derived exosomal miR-21-5p, leading to increased levels of miR-21-5p in PMCs. Through various in vitro and in vivo assays, we confirmed that exosomal miR-21-5p was able to induce MMT of PMCs and promote tumor peritoneal metastasis. Moreover, our study revealed that this process was promoted by exosomal miR-21-5p through activation of the TGF-β/Smad pathway via targeting of SMAD7. Altogether, our data suggest that exosomal miR-21-5p induces MMT of PMCs and promotes cancer peritoneal dissemination by targeting SMAD7. Exosomal miR-21-5p may be a novel therapeutic target for GC peritoneal metastasis.
Deep 3D face identification | We propose a novel 3D face recognition algorithm using a deep convolutional neural network (DCNN) and a 3D face expression augmentation technique. The performance of 2D face recognition algorithms has significantly increased by leveraging the representational power of deep neural networks and the use of large-scale labeled training data. In this paper, we show that transfer learning from a CNN trained on 2D face images can effectively work for 3D face recognition by fine-tuning the CNN with an extremely small number of 3D facial scans. We also propose a 3D face expression augmentation technique which synthesizes a number of different facial expressions from a single 3D face scan. Our proposed method shows excellent recognition results on Bosphorus, BU-3DFE, and 3D-TEC datasets without using hand-crafted features. The 3D face identification using our deep features also scales well for large databases. |
Single-tuned filter design for harmonic mitigation and optimization with capacitor banks | Ideally, an electricity supply should show a perfectly sinusoidal voltage signal at every customer location. However, for a number of reasons, utilities often find it hard to preserve such desirable conditions. The deviation of the voltage and current waveforms from sinusoidal is expressed as harmonic distortion. Harmonic distortion in the power system is increasing with the wide use of nonlinear loads. Thus, it is important to analyze and evaluate the various harmonic problems in the power system and introduce appropriate solution techniques. A single-tuned filter design is illustrated and implemented in a model created in the Electromagnetic Transient Analysis Program (ETAP). The system was constructed for this simulation and does not represent one particular real-life system. Furthermore, the model was used to test the effects of harmonic current injected by a variable speed drive (VSD) in three situations: first, a scheme with capacitor banks only; then, a single-tuned filter; and finally, both schemes together. |
Relating Chronic Pelvic Pain and Endometriosis to Signs of Sensitization and Myofascial Pain and Dysfunction. | Chronic pelvic pain is a frustrating symptom for patients with endometriosis and is frequently refractory to hormonal and surgical management. While these therapies target ectopic endometrial lesions, they do not directly address pain due to central sensitization of the nervous system and myofascial dysfunction, which can continue to generate pain from myofascial trigger points even after traditional treatments are optimized. This article provides a background for understanding how endometriosis facilitates remodeling of neural networks, contributing to sensitization and generation of myofascial trigger points. A framework for evaluating such sensitization and myofascial trigger points in a clinical setting is presented. Treatments that specifically address myofascial pain secondary to spontaneously painful myofascial trigger points and their putative mechanisms of action are also reviewed, including physical therapy, dry needling, anesthetic injections, and botulinum toxin injections. |
Hematological recovery and peripheral blood progenitor cell mobilization after induction chemotherapy and GM-CSF plus G-CSF in breast cancer | In order to determine the effect of GM-CSF plus G-CSF in combination in breast cancer patients receiving an effective induction regimen, we compared hematological recovery and peripheral blood progenitor cell (PBPC) mobilization according to colony-stimulating factor (CSF) support. Forty-three breast cancer patients were treated by TNCF (THP-doxorubicin, vinorelbine, cyclophosphamide, fluorouracil, D1 to D4) with CSF support: 11 patients received GM-CSF (D5 to D14); 16 patients G-CSF (D5 to D14) and 16 patients GM-CSF (D5–D14) plus G-CSF (D10–D14). Between two subsequent cycles, progenitor cells were assessed daily, from D13 to D17. The WBC count was similar for patients receiving G-CSF alone or GM-CSF plus G-CSF, but significantly greater than that of patients receiving GM-CSF alone (P < 0.001). The GM-CSF plus G-CSF combination led to better PBPC mobilization, with significantly different kinetics (P < 0.001) and optimal mean values of CFU-GM, CD34+ cells and cells in cycle, at D15 compared to those obtained with G-CSF or GM-CSF alone. The significantly greater PBPC mobilization obtained with a CSF combination by D15 could be of value for PBPC collection and therapeutic reinjection after high-dose chemotherapies. Bone Marrow Transplantation (2000) 25, 705–710. |
Early effects of raloxifene on clinical vertebral fractures at 12 months in postmenopausal women with osteoporosis. | BACKGROUND
Raloxifene hydrochloride therapy reduces the risk for vertebral fractures at 3 years, but the effects on clinical vertebral fractures in the first year are not known.
METHODS
The Multiple Outcomes of Raloxifene Evaluation (MORE) Trial enrolled 7705 women with osteoporosis, defined by prevalent vertebral fractures and/or a bone mineral density (BMD) T score at or below -2.5, who were treated with placebo or raloxifene at a dosage of 60 or 120 mg/d for 3 years. New clinical vertebral fractures were defined as incident vertebral fractures associated with signs and symptoms suggestive of vertebral fractures, such as back pain, and were diagnosed by means of postbaseline adjudicated spinal radiographs. Scheduled spinal radiographs were obtained at baseline and at 2 and 3 years. In addition, unscheduled spinal radiographs were obtained in women who reported signs or symptoms suggestive of vertebral fracture, and these radiographs subsequently underwent adjudication. If an adjudicated fracture was identified, this was also considered a clinical fracture.
RESULTS
At 1 year, raloxifene, 60 mg/d, decreased the risk for new clinical vertebral fractures by 68% (95% confidence interval [CI], 20%-87%) compared with placebo in the overall study population, and by 66% (95% CI, 23%-89%) in women with prevalent vertebral fractures, who are at greater risk for subsequent fracture. The risk for clinical vertebral fractures in the raloxifene, 60 mg/d, group was decreased by 46% (95% CI, 14%-66%) at 2 years and by 41% (95% CI, 17%-59%) at 3 years. The cumulative incidence of new clinical vertebral fractures was lower in the group receiving raloxifene, 60 mg/d, compared with placebo (P<.001). We found no significant differences in the risk reductions for clinical vertebral fractures between the raloxifene groups at 1, 2, or 3 years.
CONCLUSION
The early risk reduction for new clinical vertebral fractures with 1 year of raloxifene treatment was similar to that reported with other antiresorptive agents. |
Author Name Disambiguation for PubMed | Log analysis shows that PubMed users frequently use author names in queries for retrieving scientific literature. However, author name ambiguity may lead to irrelevant retrieval results. To improve the PubMed user experience with author name queries, we designed an author name disambiguation system consisting of similarity estimation and agglomerative clustering. A machine-learning method was employed to score the features for disambiguating a pair of papers with ambiguous names. These features enable the computation of pairwise similarity scores to estimate the probability of a pair of papers belonging to the same author, which drives an agglomerative clustering algorithm regulated by two factors: name compatibility and probability level. With transitivity violation correction, high-precision author clustering is achieved by focusing on minimizing false-positive pairing. Disambiguation performance is evaluated with manual verification of random samples of pairs from clustering results. When compared with a state-of-the-art system, our evaluation shows that among all the pairs the lumping error rate drops from 10.1% to 2.2% for our system, while the splitting error rises from 1.8% to 7.7%. This results in an overall error rate of 9.9%, compared with 11.9% for the state-of-the-art method. Other evaluations based on gold-standard data also show the increase in accuracy of our clustering. We attribute the performance improvement to the machine-learning method driven by a large-scale training set and the clustering algorithm regulated by a name compatibility scheme preferring precision. With integration of the author name disambiguation system into the PubMed search engine, the overall click-through rate of PubMed users on author name query results improved from 34.9% to 36.9%. |
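The pipeline described in this abstract can be sketched as pairwise similarity feeding a constrained agglomerative clustering step. The following is a minimal illustration, not PubMed's actual code: the function names, the 0.8 threshold, and the stand-in `compatible` predicate (the paper's name-compatibility scheme) are all assumptions made for the example.

```python
def cluster_papers(papers, similarity, threshold=0.8, compatible=lambda a, b: True):
    """Greedy single-link agglomerative clustering over papers that share an
    ambiguous author name. `similarity(p, q)` returns an estimated probability
    that two papers belong to the same author; `compatible` stands in for the
    name-compatibility check described in the abstract."""
    clusters = [[p] for p in papers]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                pairs = [(a, b) for a in clusters[i] for b in clusters[j]]
                if all(compatible(a, b) for a, b in pairs) and \
                   max(similarity(a, b) for a, b in pairs) >= threshold:
                    clusters[i] += clusters.pop(j)  # merge j into i
                    merged = True
                    break
            if merged:
                break
    return clusters

# Toy data: the middle token plays the role of a discriminating feature.
papers = ["smith_a_2001", "smith_a_2003", "smith_b_1999"]

def toy_similarity(p, q):
    # Illustrative stand-in for the learned pairwise model.
    return 0.9 if p.split("_")[1] == q.split("_")[1] else 0.1

clusters = cluster_papers(papers, toy_similarity)
```

Restricting merges to compatible, high-probability pairs mirrors the abstract's precision-first design: leaving two papers split is preferred over lumping two authors together.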
Neural activation during response inhibition differentiates blast from mechanical causes of mild to moderate traumatic brain injury. | Military personnel involved in Operations Enduring Freedom and Iraqi Freedom (OEF/OIF) commonly experience blast-induced mild to moderate traumatic brain injury (TBI). In this study, we used task-activated functional MRI (fMRI) to determine if blast-related TBI has a differential impact on brain activation in comparison with TBI caused primarily by mechanical forces in civilian settings. Four groups participated: (1) blast-related military TBI (milTBI; n=21); (2) military controls (milCON; n=22); (3) non-blast civilian TBI (civTBI; n=21); and (4) civilian controls (civCON; n=23) with orthopedic injuries. Mild to moderate TBI (MTBI) occurred 1 to 6 years before enrollment. Participants completed the Stop Signal Task (SST), a measure of inhibitory control, while undergoing fMRI. Brain activation was evaluated with 2 (mil, civ)×2 (TBI, CON) analyses of variance, corrected for multiple comparisons. During correct inhibitions, fMRI activation was lower in the TBI than CON subjects in regions commonly associated with inhibitory control and the default mode network. In contrast, inhibitory failures showed significant interaction effects in the bilateral inferior temporal, left superior temporal, caudate, and cerebellar regions. Specifically, the milTBI group demonstrated more activation than the milCON group when failing to inhibit; in contrast, the civTBI group exhibited less activation than the civCON group. Covariance analyses controlling for the effects of education and self-reported psychological symptoms did not alter the brain activation findings. These results indicate that the chronic effects of TBI are associated with abnormal brain activation during successful response inhibition. 
During failed inhibition, the pattern of activation distinguished military from civilian TBI, suggesting that blast-related TBI has a unique effect on brain function that can be distinguished from TBI resulting from mechanical forces associated with sports or motor vehicle accidents. The implications of these findings for diagnosis and treatment of TBI are discussed. |
Memory Prefetching Using Adaptive Stream Detection | We present Adaptive Stream Detection, a simple technique for modulating the aggressiveness of a stream prefetcher to match a workload's observed spatial locality. We use this concept to design a prefetcher that resides on an on-chip memory controller. The result is a prefetcher with small hardware costs that can exploit workloads with low amounts of spatial locality. Using highly accurate simulators for the IBM Power5+, we show that this prefetcher improves performance of the SPEC2006fp benchmarks by an average of 32.7% when compared against a Power5+ that performs no prefetching. On a set of 5 commercial benchmarks that have low spatial locality, this prefetcher improves performance by an average of 15.1%. When compared against a typical Power5+ that does perform processor-side prefetching, the average performance improvement of these benchmark suites is 10.2% and 8.4%. We also evaluate the power and energy impact of our technique. For the same benchmark suites, DRAM power consumption increases by less than 3%, while energy usage decreases by 9.8% and 8.2%, respectively. Moreover, the power consumption of the prefetcher itself is low; it is estimated to increase the power consumption of the Power5+ chip by 0.06%. |
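The core idea above, scaling prefetch aggressiveness with measured spatial locality, can be sketched in a few lines. This is a toy software model under stated assumptions: the window size, locality thresholds, and depths are invented for illustration, and the actual on-chip Power5+ controller design in the paper is considerably more elaborate.

```python
from collections import deque

class AdaptiveStreamPrefetcher:
    """Toy model of adaptive stream detection: measure how often recently
    accessed cache lines directly follow another recent line, and scale
    prefetch depth with that observed spatial locality."""

    def __init__(self, window=32):
        self.recent = deque(maxlen=window)  # sliding window of line addresses

    def access(self, line):
        self.recent.append(line)
        seen = set(self.recent)
        # Fraction of recent accesses that extend a stream (line-1 also seen).
        locality = sum(1 for l in self.recent if l - 1 in seen) / len(self.recent)
        depth = 0 if locality < 0.25 else (2 if locality < 0.75 else 8)
        # Issue prefetches for `depth` lines ahead of the current access.
        return [line + i for i in range(1, depth + 1)]

p = AdaptiveStreamPrefetcher()
for line in range(21):          # a streaming workload...
    issued = p.access(line)     # ...ramps prefetch depth up
```

A workload with low spatial locality drives `locality` toward zero, so the model issues no prefetches at all, which is the behavior the paper exploits to keep useless prefetch traffic and DRAM power low.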
Guidelines are advantageous, though not essential for improved survival among breast cancer patients | The purpose of this retrospective multicenter study was to resolve the pseudo-paradox that the clinical outcome of women affected by breast cancer has improved during the last 20 years irrespective of whether they were treated in accordance with clinical guidelines or not. This retrospective German multicenter study included 9061 patients with primary breast cancer recruited from 1991 to 2009. We formed subgroups for the time intervals 1991–2000 (TI1) and 2001–2009 (TI2). In these subgroups, the risk of recurrence (RFS) and overall survival (OS) were compared between patients whose treatment was either 100 % guideline-conforming or, respectively, non-guideline-conforming. The clinical outcome of all patients significantly improved in TI2 compared to TI1 [RFS: p < 0.001, HR = 0.57, 95 % CI (0.49–0.67); OS: p < 0.001, HR = 0.76, 95 % CI (0.66–0.87)]. OS and RFS of guideline non-adherent patients also improved in TI2 compared to TI1. Comparing risk profiles, determined by the Nottingham Prognostic Score, reveals a significant (p = 0.001) enhancement in the time cohort TI2. Furthermore, the percentage of guideline-conforming systemic therapy (endocrine therapy and chemotherapy) significantly increased (p < 0.001) from TI1 to TI2 in the non-adherent group. The general improvement of clinical outcome of patients during the last 20 years is also valid in the subgroup of women who received treatments which deviated from the guidelines. The shift in risk profiles as well as medical advances are major reasons for this improvement. Nevertheless, patients with 100 % guideline-conforming therapy always had a better outcome compared to patients with guideline non-adherent therapy. |
Lifted Probabilistic Inference with Counting Formulas | Lifted inference algorithms exploit repeated structure in probabilistic models to answer queries efficiently. Previous work such as de Salvo Braz et al.’s first-order variable elimination (FOVE) has focused on the sharing of potentials across interchangeable random variables. In this paper, we also exploit interchangeability within individual potentials by introducing counting formulas, which indicate how many of the random variables in a set have each possible value. We present a new lifted inference algorithm, C-FOVE, that not only handles counting formulas in its input, but also creates counting formulas for use in intermediate potentials. C-FOVE can be described succinctly in terms of six operators, along with heuristics for when to apply them. Because counting formulas capture dependencies among large numbers of variables compactly, C-FOVE achieves asymptotic speed improvements compared to FOVE. |
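The payoff of counting formulas is that a potential over n interchangeable Boolean variables depends on the assignment only through how many variables are true, so a sum over 2^n assignments collapses to a sum over n+1 counts weighted by their multiplicities. The sketch below is a toy instance of that counting idea, not the C-FOVE algorithm itself; the function name is invented for illustration.

```python
from math import comb

def lifted_sum(n, phi):
    """Sum a potential over all 2**n assignments of n interchangeable
    Boolean variables, assuming phi depends only on the count k of True
    values. Each count k occurs in comb(n, k) assignments, so the sum
    needs only n + 1 terms instead of 2**n."""
    return sum(comb(n, k) * phi(k) for k in range(n + 1))

# A product potential 0.7**k * 0.3**(n-k): by the binomial theorem the
# lifted sum recovers (0.7 + 0.3)**20 = 1.0 without enumerating 2**20 states.
total = lifted_sum(20, lambda k: 0.7 ** k * 0.3 ** (20 - k))
```

This is exactly the source of the asymptotic speedups claimed over FOVE: the grounded computation is exponential in the number of interchangeable variables, while the counting computation is linear.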
Gaussian random number generators | Rapid generation of high quality Gaussian random numbers is a key capability for simulations across a wide range of disciplines. Advances in computing have brought the power to conduct simulations with very large numbers of random numbers and with it, the challenge of meeting increasingly stringent requirements on the quality of Gaussian random number generators (GRNG). This article describes the algorithms underlying various GRNGs, compares their computational requirements, and examines the quality of the random numbers with emphasis on the behaviour in the tail region of the Gaussian probability density function. |
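As a concrete example of one algorithm family such a survey covers, the Box-Muller transform converts two uniform variates into two independent standard-normal variates. This is a minimal sketch for illustration, not production-quality code; hardware GRNGs and methods like the ziggurat trade this simplicity for speed and better tail behaviour.

```python
import math
import random

def gaussian_pair(rng=random):
    """Return two independent standard-normal samples via the
    Box-Muller transform."""
    u1 = rng.random()
    while u1 == 0.0:          # guard against log(0); random() is in [0, 1)
        u1 = rng.random()
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

# Sanity check: sample mean near 0 and variance near 1.
samples = [x for _ in range(50_000) for x in gaussian_pair()]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

Note that the quality concern the article emphasises, accuracy in the tail region, is not visible in moment checks like these; it requires dedicated tail tests.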
The effect of a pre- and postoperative orthogeriatric service on cognitive function in patients with hip fracture: randomized controlled trial (Oslo Orthogeriatric Trial) | BACKGROUND
Delirium is a common complication in patients with hip fractures and is associated with an increased risk of subsequent dementia. The aim of this trial was to evaluate the effect of a pre- and postoperative orthogeriatric service on the prevention of delirium and longer-term cognitive decline.
METHODS
This was a single-center, prospective, randomized controlled trial in which patients with hip fracture were randomized to treatment in an acute geriatric ward or a standard orthopedic ward. Inclusion and randomization took place in the Emergency Department at Oslo University Hospital. The key intervention in the acute geriatric ward was Comprehensive Geriatric Assessment including daily interdisciplinary meetings. The primary outcome was cognitive function four months after surgery, measured using a composite outcome incorporating the Clinical Dementia Rating Scale (CDR) and the 10-word learning and recall tasks from the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) battery. Secondary outcomes were pre- and postoperative delirium, delirium severity and duration, mortality, and mobility (measured by the Short Physical Performance Battery (SPPB)). Patients were assessed four and twelve months after surgery by evaluators blind to allocation.
RESULTS
A total of 329 patients were included. There was no significant difference in cognitive function four months after surgery between patients treated in the acute geriatric and the orthopedic wards (mean 54.7 versus 52.9, 95% confidence interval for the difference -5.9 to 9.5; P = 0.65). There was also no significant difference in delirium rates (49% versus 53%, P = 0.51) or four month mortality (17% versus 15%, P = 0.50) between the intervention and the control group. In a pre-planned sub-group analysis, participants living in their own home at baseline who were randomized to orthogeriatric care had better mobility four months after surgery compared with patients randomized to the orthopedic ward, measured with SPPB (median 6 versus 4, 95% confidence interval for the median difference 0 to 2; P = 0.04).
CONCLUSIONS
Pre- and postoperative orthogeriatric care given in an acute geriatric ward was not effective in reducing delirium or long-term cognitive impairment in patients with hip fracture. The intervention had, however, a positive effect on mobility in patients not admitted from nursing homes.
TRIAL REGISTRATION
ClinicalTrials.gov NCT01009268 Registered November 5, 2009. |
Silver sulfadiazine loaded chitosan/chondroitin sulfate films for a potential wound dressing application. | Silver sulfadiazine (AgSD) loaded chitosan/chondroitin sulfate (CHI/CS) films were formed to be applied as a potential wound dressing material. The liquid uptake capacity of both CHI/CS and CHI/CS/AgSD films exhibited a pH-dependent behavior. Tensile tests showed that the amount of CS used to form the films and the further incorporation of AgSD affect the mechanical properties of the films. In vitro AgSD-release assays showed that the CHI/CS mass ratio influences the AgSD release rate. All the investigated CHI/CS/AgSD films sustain the AgSD release for up to 96 h at physiological pH. Antibacterial activity and cell viability assays showed that all the CHI/CS/AgSD films have activity against Pseudomonas aeruginosa and Staphylococcus aureus but are not toxic to Vero cells. The results presented in this work indicate that CHI/CS/AgSD films exhibit potential to be applied as a wound dressing material. |
Semantic representation of scientific literature: bringing claims, contributions and named entities onto the Linked Open Data cloud | Sateli, B., and R. Witte. Refereed journal article, edited by T. Sumner. PeerJ Computer Science, volume 1, article e37, December 2015. |
Shift: A Zero FLOP, Zero Parameter Alternative to Spatial Convolutions | Neural networks rely on convolutions to aggregate spatial information. However, spatial convolutions are expensive in terms of model size and computation, both of which grow quadratically with respect to kernel size. In this paper, we present a parameter-free, FLOP-free "shift" operation as an alternative to spatial convolutions. We fuse shifts and point-wise convolutions to construct end-to-end trainable shift-based modules, with a hyperparameter characterizing the tradeoff between accuracy and efficiency. To demonstrate the operation's efficacy, we replace ResNet's 3x3 convolutions with shift-based modules for improved CIFAR10 and CIFAR100 accuracy using 60% fewer parameters; we additionally demonstrate the operation's resilience to parameter reduction on ImageNet, outperforming ResNet family members. We finally show the shift operation's applicability across domains, achieving strong performance with fewer parameters on image classification, face verification and style transfer. |
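The shift operation itself is just data movement, which is why it costs zero FLOPs and zero parameters. The sketch below illustrates the idea on a single CHW feature map, assuming a simple assignment of channel groups to the four cardinal directions plus identity; the paper's actual modules use a learned-free but more general set of displacements and fuse the shift with point-wise convolutions.

```python
import numpy as np

def shift(x):
    """Parameter-free spatial shift over a (C, H, W) array: each of five
    channel groups is displaced by one pixel in a fixed direction
    (up, down, left, right, or not at all). Vacated positions are zero."""
    out = np.zeros_like(x)
    g = x.shape[0] // 5                         # channels per group
    out[0*g:1*g, :-1, :] = x[0*g:1*g, 1:, :]    # shift up
    out[1*g:2*g, 1:, :]  = x[1*g:2*g, :-1, :]   # shift down
    out[2*g:3*g, :, :-1] = x[2*g:3*g, :, 1:]    # shift left
    out[3*g:4*g, :, 1:]  = x[3*g:4*g, :, :-1]   # shift right
    out[4*g:, :, :]      = x[4*g:, :, :]        # identity
    return out

x = np.arange(5 * 4 * 4, dtype=float).reshape(5, 4, 4)
y = shift(x)
```

A subsequent 1x1 convolution then mixes information across the shifted channels, so the shift plus point-wise convolution together see a spatial neighborhood, just as a 3x3 convolution would, without the quadratic kernel cost.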
Creating Constructivist Learning Environments on the Web: The Challenge in Higher Education | Australian universities have traditionally relied on government funding to support undergraduate teaching. As the government has adopted the 'user-pays' principle, universities have been forced to look outside their traditional market to expand their undergraduate, post-graduate and international offerings. Alternate delivery methods in many universities have utilised web-based instruction as a basis for this move because of three perceptions: access by the target market is reasonably significant, it is a cost-effective method of delivery, and it provides global access. Since the mid-sixties, the trend for both on-campus teaching and teaching at a distance has been to use behaviourist instructional strategies for subject development, which rely on the development of a set of instructional sequences with predetermined outcomes. These models, whilst applicable in a behaviourist environment, are not serving instructional designers well when the theoretical foundation for the subject outcomes is based on a constructivist approach to learning, since the constructivist group of theories places less emphasis on the sequence of instruction and more emphasis on the design of the learning environment (Jonassen, 1994, p. 35). In a web-based environment this proves to be even more challenging. This paper reviews current research on design goals for web-based constructivist learning environments and a move towards the development of models. The design of two web-based subjects is explored in the context of the design goals developed by Duffy and Cunningham (1996, p. 177), who have produced some basic assumptions that they call "metaphors we teach by". The author examines the seven goals for their relevance to the instructional designer in the context of these two web-based subjects, both of which were framed in constructivist theory. |
Characteristics associated with lower activity involvement in long-term care residents with dementia. | This article describes the characteristics associated with activity involvement in 400 residents with dementia in 45 assisted living facilities and nursing homes. Activity involvement was related to family involvement in care and staff encouragement, after adjusting for resident age, gender, race, cognitive and functional status, and comorbidity. |
Learning to detect natural image boundaries using local brightness, color, and texture cues | The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images. |
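The paper's first main result, that linear cue combination is adequate, amounts to passing a weighted sum of the brightness, color, and texture gradient responses through a logistic function to get a boundary posterior. The sketch below illustrates that model shape only; the weights and bias here are made up for the example, whereas in the paper they are fit to human-labeled ground truth.

```python
import math

def boundary_posterior(cues, weights, bias):
    """Posterior probability of a boundary at one pixel/orientation from a
    linear combination of local cue responses passed through a logistic."""
    z = bias + sum(w * c for w, c in zip(weights, cues))
    return 1.0 / (1.0 + math.exp(-z))

# Brightness, color, and texture gradient responses at one location,
# with illustrative (not fitted) weights:
p = boundary_posterior([0.8, 0.6, 0.9], weights=[2.0, 1.5, 3.0], bias=-3.0)
```

The paper's second result is reflected in the third cue: without an explicit texture gradient, textured regions generate spurious high-contrast responses that no reweighting of the remaining cues can suppress.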
Chip and Skim: Cloning EMV Cards with the Pre-play Attack | EMV, also known as "Chip and PIN", is the leading system for card payments worldwide. It is used throughout Europe and much of Asia, and is starting to be introduced in North America too. Payment cards contain a chip so they can execute an authentication protocol. This protocol requires point-of-sale (POS) terminals or ATMs to generate a nonce, called the unpredictable number, for each transaction to ensure it is fresh. We have discovered two serious problems: a widespread implementation flaw and a deeper, more difficult to fix flaw with the EMV protocol itself. The first flaw is that some EMV implementers have merely used counters, timestamps or home-grown algorithms to supply this nonce. This exposes them to a "pre-play" attack which is indistinguishable from card cloning from the standpoint of the logs available to the card-issuing bank, and can be carried out even if it is impossible to clone a card physically. Card cloning is the very type of fraud that EMV was supposed to prevent. We describe how we detected the vulnerability, a survey methodology we developed to chart the scope of the weakness, evidence from ATM and terminal experiments in the field, and our implementation of proof-of-concept attacks. We found flaws in widely-used ATMs from the largest manufacturers. We can now explain at least some of the increasing number of frauds in which victims are refused refunds by banks which claim that EMV cards cannot be cloned and that a customer involved in a dispute must therefore be mistaken or complicit. The second problem was exposed by the above work. Independent of the random number quality, there is a protocol failure: the actual random number generated by the terminal can simply be replaced by one the attacker used earlier when capturing an authentication code from the card. 
This variant of the pre-play attack may be carried out by malware in an ATM or POS terminal, or by a man-in-the-middle between the terminal and the acquirer. We explore the design and implementation mistakes that enabled these flaws to evade detection until now: shortcomings of the EMV specification, of the EMV kernel certification process, of implementation testing, formal analysis, and monitoring customer complaints. Finally we discuss countermeasures. More than a year after our initial responsible disclosure of these flaws to the banks, action has only been taken to mitigate the first of them, while we have seen a likely case of the second in the wild, and the spread of ATM and POS malware is making it ever more of a threat. |
A randomized clinical trial of eye movement desensitization and reprocessing (EMDR), fluoxetine, and pill placebo in the treatment of posttraumatic stress disorder: treatment effects and long-term maintenance. | OBJECTIVE
The relative short-term efficacy and long-term benefits of pharmacologic versus psychotherapeutic interventions have not been studied for posttraumatic stress disorder (PTSD). This study compared the efficacy of a selective serotonin reuptake inhibitor (SSRI), fluoxetine, with a psychotherapeutic treatment, eye movement desensitization and reprocessing (EMDR), and pill placebo and measured maintenance of treatment gains at 6-month follow-up.
METHOD
Eighty-eight PTSD subjects diagnosed according to DSM-IV criteria were randomly assigned to EMDR, fluoxetine, or pill placebo. They received 8 weeks of treatment and were assessed by blind raters posttreatment and at 6-month follow-up. The primary outcome measure was the Clinician-Administered PTSD Scale, DSM-IV version, and the secondary outcome measure was the Beck Depression Inventory-II. The study ran from July 2000 through July 2003.
RESULTS
The psychotherapy intervention was more successful than pharmacotherapy in achieving sustained reductions in PTSD and depression symptoms, but this benefit accrued primarily for adult-onset trauma survivors. At 6-month follow-up, 75.0% of adult-onset versus 33.3% of child-onset trauma subjects receiving EMDR achieved asymptomatic end-state functioning compared with none in the fluoxetine group. For most childhood-onset trauma patients, neither treatment produced complete symptom remission.
CONCLUSIONS
This study supports the efficacy of brief EMDR treatment to produce substantial and sustained reduction of PTSD and depression in most victims of adult-onset trauma. It suggests a role for SSRIs as a reliable first-line intervention to achieve moderate symptom relief for adult victims of childhood-onset trauma. Future research should assess the impact of lengthier intervention, combination treatments, and treatment sequencing on the resolution of PTSD in adults with childhood-onset trauma. |
A Search for Multiple Equilibria in Urban Industrial Structure | Theories featuring multiple equilibria are now widespread across many fields of economics. Yet little empirical work has asked if such multiple equilibria are salient features of real economies. We examine this in the context of the Allied bombing of Japanese cities and industries in WWII. We develop a new empirical test for multiple equilibria and apply it to data for 114 Japanese cities in eight manufacturing industries. The data provide no support for the existence of multiple equilibria. In the aftermath even of immense shocks, a city typically recovers not only its population and its share of aggregate manufacturing, but even the specific industries it had before. The concept of multiple equilibria is a hallmark of modern economics, one whose influence crosses broad swathes of the profession. In macroeconomics, it is offered as an underpinning for the business cycle (Russell Cooper and Andrew John 1988). In development economics it rationalizes a theory of the "big push" (Kevin M. Murphy, Andrei Shleifer, and Robert W. Vishny 1988). In urban and regional economics, it provides a foundation for understanding variation in the density of economic activity across cities and regions (Paul R. Krugman 1991). In the field of international economics, it has even been offered as a candidate explanation for the division of the global economy into an industrial North and a non-industrial South, as well as the possible future collapse of such a world regime (Krugman and Anthony J. Venables 1995). The theoretical literature has now firmly established the analytic foundations for the existence of multiple equilibria. However, theory has far outpaced empirics. The most important empirical question arising from this intellectual current has almost not been touched: Are multiple equilibria a salient feature of real economies? This is inherently a difficult question. At any moment in time, one observes only the actual equilibrium, not alternative equilibria that exist only potentially. If the researcher observes a change over time, it is difficult to know if this change reflects a shift between equilibria due to temporary shocks or a change in fundamentals that are perhaps not yet well understood by the researcher. If a cross section reveals heterogeneity that seems hard to explain by the observed variation in fundamentals, it is hard to know if this may be taken to confirm theories of multiple equilibria or if it suggests only that our empirical identification of fundamentals falls short. Testing for multiple equilibria is also difficult for other reasons. The theory of multiple equilibria relies on the existence of thresholds that separate distinct equilibria. In any real context, it is difficult to identify such thresholds or the location of unobserved equilibria. In addition, a researcher may look for exogenous shocks, but these need to be of sufficient magnitude to shift the economy to the other side of the relevant threshold, and they need to be clearly temporary so that we can see that the economy fails to return to the status quo ante. A researcher is rarely so blessed. Donald R. Davis and David E. Weinstein (2002) initiated work that addresses the practical salience of multiple equilibria in the context of city sizes. The experiment considered was the Allied bombing of Japanese cities during World War II. This disturbance was exogenous, temporary, and one of the most powerful shocks to relative city sizes in the history of the world. Hence it is an ideal laboratory for identifying multiple equilibria. That paper examined city population data and, in the context of the present paper, may be viewed as having answered two questions. Do the data reject a null that city population shares have a unique stable equilibrium? Do the data support a stated condition that would be sufficient to establish multiple equilibria in city population shares? In both cases, our answer was "no": we could not reject a unique stable equilibrium, nor could we establish the sufficient condition for multiple equilibria in city population shares. |
Isoliquiritigenin, a flavonoid from licorice, plays a dual role in regulating gastrointestinal motility in vitro and in vivo. | Licorice root has been used for years to regulate gastrointestinal function in traditional Chinese medicine. This study reveals the gastrointestinal effects of isoliquiritigenin, a flavonoid isolated from the roots of Glycyrrhiza glabra (a kind of licorice). In vivo, isoliquiritigenin produced a dual dose-related effect on charcoal meal travel: inhibitory at low doses and prokinetic at high doses. In vitro, isoliquiritigenin showed an atropine-sensitive, concentration-dependent spasmogenic effect in isolated rat stomach fundus. However, a spasmolytic effect was observed in isolated rabbit jejunum, guinea pig ileum, and atropinized rat stomach fundus preparations, manifested either as noncompetitive inhibition of agonist concentration-response curves, inhibition of high-K(+) (80 mM)-induced contractions, or rightward displacement of Ca(2+) concentration-response curves, indicating a calcium antagonist effect. Pretreatment with N(omega)-nitro-L-arginine methyl ester (L-NAME; 30 microM), indomethacin (10 microM), methylene blue (10 microM), tetraethylammonium chloride (0.5 mM), glibenclamide (1 microM), 4-aminopyridine (0.1 mM), or clotrimazole (1 microM) did not inhibit the spasmolytic effect. These results indicate that isoliquiritigenin plays a dual role in regulating gastrointestinal motility, both spasmogenic and spasmolytic. The spasmogenic effect may involve activation of muscarinic receptors, while the spasmolytic effect is predominantly due to blockade of calcium channels. |
Smartphone gaming and frequent use pattern associated with smartphone addiction | The aim of this study was to investigate the risk factors of smartphone addiction in high school students. A total of 880 adolescents were recruited from a vocational high school in Taiwan in January 2014 to complete a set of questionnaires, including the 10-item Smartphone Addiction Inventory, the Chen Internet Addiction Scale, and a survey of the content and patterns of personal smartphone use. Of those recruited, 689 students (646 male) aged 14 to 21 who owned a smartphone completed the questionnaire. Multiple linear regression models were used to determine the variables associated with smartphone addiction. Smartphone gaming and frequent smartphone use were associated with smartphone addiction. Furthermore, both the smartphone gaming-predominant and the gaming with multiple-applications groups showed a similar association with smartphone addiction. Gender, duration of smartphone ownership, and substance use were not associated with smartphone addiction. Our findings suggest that smartphone use patterns should be part of specific measures to prevent and intervene in cases of excessive smartphone use. |
Learning to Compute Word Embeddings On the Fly | Words in natural language follow a Zipfian distribution whereby some words are frequent but most are rare. Learning representations for words in the “long tail” of this distribution requires enormous amounts of data. Representations of rare words trained directly on end-tasks are usually poor, requiring us to pre-train embeddings on external data, or to treat all rare words as out-of-vocabulary words with a single shared representation. We provide a method for predicting embeddings of rare words on the fly from small amounts of auxiliary data, using a network trained against the end task. We show that this improves results over baselines in which embeddings are trained on the end task alone, across a reading comprehension task, a recognizing textual entailment task, and language modelling. |
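The mechanism the abstract describes, computing a rare word's embedding from auxiliary data rather than looking it up, can be caricatured in a few lines. The sketch below is ours, not the paper's implementation: mean pooling over definition-word embeddings stands in for the trained auxiliary encoder, and all names (`embed_on_the_fly`, the fallback vector) are hypothetical.

```python
def embed_on_the_fly(word, emb_table, definitions, dim=4):
    """Return an embedding for `word`.

    Frequent words use their trained row in `emb_table`; rare/OOV words
    get an embedding computed on the fly by pooling the embeddings of
    the words in their auxiliary definition (mean pooling stands in for
    a learned definition encoder trained against the end task).
    """
    if word in emb_table:
        return emb_table[word]
    # keep only definition words we actually have embeddings for
    def_words = [w for w in definitions.get(word, "").split() if w in emb_table]
    if not def_words:
        return [0.0] * dim               # shared fallback OOV vector
    return [sum(emb_table[w][i] for w in def_words) / len(def_words)
            for i in range(dim)]
```

In the real model the pooling function would be a trainable network whose gradients flow from the end-task loss, so the definition encoder learns what matters for the task.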
Conceptualization and Measurement of Team Workload: A Critical Need | OBJECTIVE
The purpose of this article is to present and expand on current theories and measurement techniques for assessing team workload.
BACKGROUND
To date, little research has been conducted on the workload experienced by teams. A validated theory describing team workload, which includes an account of its relation to individual workload, has not been articulated.
METHOD
The authors review several theoretical approaches to team workload. Within the team research literature, attempts to evaluate team workload have typically relied on measures of individual workload. This assumes that such measures retain their validity at the team level of measurement, but empirical research suggests that this method may lack sensitivity to the drivers of team workload.
RESULTS
On the basis of these reviews, the authors advance suggestions concerning a comprehensive theory of team workload and methods for assessing it in team settings. The approaches reviewed include subjective, performance, physiological, and strategy shift measures. Theoretical and statistical difficulties associated with aggregating individual-level workload responses to a team-level measure are discussed.
CONCLUSION
Conception and measurement of team workload have not significantly matured alongside developments in individual workload.
APPLICATION
Team workload remains a complex research area without simple measurement solutions, but as a research domain it remains open for contributions from interested and enterprising researchers. |
Wikidata Vandalism Detection - The Loganberry Vandalism Detector at WSDM Cup 2017 | Wikidata is the new, large-scale knowledge base of the Wikimedia Foundation. As it can be edited by anyone, entries frequently get vandalized, creating the risk that falsified information spreads if such edits are not detected. The WSDM 2017 Wiki Vandalism Detection Challenge requires us to solve this problem by computing a vandalism score denoting the likelihood that a revision corresponds to an act of vandalism; performance is measured using the ROC-AUC obtained on a held-out test set. This paper provides the details of our submission, which obtained an ROC-AUC score of 0.91976 in the final evaluation. |
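Since the challenge scores submissions by ROC-AUC, it is worth recalling what that number means: the probability that a randomly chosen vandalism revision receives a higher score than a randomly chosen benign one. A minimal sketch (ours, using this rank formulation rather than trapezoidal integration of the ROC curve; ties count half):

```python
def roc_auc(scores, labels):
    """ROC-AUC as P(score of a random positive > score of a random negative).

    scores: model vandalism scores; labels: 1 = vandalism, 0 = benign.
    Equivalent to the area under the ROC curve; ties contribute 0.5.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(P·N) form is fine for illustration; production evaluators sort once and use rank sums instead.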
A gene expression signature for high-risk multiple myeloma | There is a strong need to better predict the survival of patients with newly diagnosed multiple myeloma (MM). As gene expression profiles (GEPs) reflect the biology of MM in individual patients, we built a prognostic signature based on GEPs. GEPs obtained from newly diagnosed MM patients included in the HOVON65/GMMG-HD4 trial (n=290) were used as training data. Using this set, a prognostic signature of 92 genes (EMC-92-gene signature) was generated by supervised principal component analysis combined with simulated annealing. Performance of the EMC-92-gene signature was confirmed in independent validation sets of newly diagnosed (total therapy (TT)2, n=351; TT3, n=142; MRC-IX, n=247) and relapsed patients (APEX, n=264). In all the sets, patients defined as high-risk by the EMC-92-gene signature show a clearly reduced overall survival (OS) with a hazard ratio (HR) of 3.40 (95% confidence interval (CI): 2.19–5.29) for the TT2 study, 5.23 (95% CI: 2.46–11.13) for the TT3 study, 2.38 (95% CI: 1.65–3.43) for the MRC-IX study and 3.01 (95% CI: 2.06–4.39) for the APEX study (P<0.0001 in all studies). In multivariate analyses this signature was proven to be independent of the currently used prognostic factors. The EMC-92-gene signature is better or comparable to previously published signatures. This signature contributes to risk assessment in clinical trials and could provide a tool for treatment choices in high-risk MM patients. |
Comment to the paper "The energy conservation law for electromagnetic field in application to problems of radiation of moving particles" | In the paper [B.M. Bolotovskii, S.N. Stoliarov, Uspekhi Fizicheskich Nauk, v. 162, No. 3, p. 195, 1992] the energy conservation law was applied to the problem of radiation of a charged particle in an external electromagnetic field. The authors solved the problem consistently and with mathematical rigor, yet obtained an incorrect result. They derived an expression that includes a change in the energies of the electromagnetic fields accompanying the moving particle at its initial and final velocities. The energy of the field accompanying the particle is energy of the particle of electromagnetic origin; it should not enter the solution of the problem. The origin of the authors' mistake is discussed in this comment. |
Changes in Food Choices of Participants in the Special Diabetes Program for Indians–Diabetes Prevention Demonstration Project, 2006–2010 | INTRODUCTION
American Indians/Alaska Natives (AI/ANs) have a disproportionately high rate of type 2 diabetes. Changing food choices plays a key role in preventing diabetes. This study documented changes in the food choices of AI/ANs with diagnosed prediabetes who participated in a diabetes prevention program.
METHODS
The Special Diabetes Program for Indians-Diabetes Prevention Demonstration Project implemented the evidence-based Diabetes Prevention Program (DPP) lifestyle intervention in 36 health care programs nationwide, engaging 80 AI/AN communities. At baseline, at 30 days post-curriculum, and at the first annual assessment, participants completed a sociodemographic survey and a 27-item food frequency questionnaire and underwent a medical examination assessing fasting blood glucose (FBG), blood pressure, body mass index (BMI), low-density lipoprotein (LDL), high-density lipoprotein (HDL), and triglycerides. Multiple linear regressions were used to assess the relationship between temporal changes in food choice and other diabetes risk factors.
RESULTS
From January 2006 to July 2010, baseline, post-curriculum, and first annual assessments were completed by 3,135 (100%), 2,046 (65%), and 1,480 (47%) participants, respectively. An increase in healthy food choices was associated initially with reduced bodyweight, BMI, FBG, and LDL and increased physical activity. At first annual assessment, the associations persisted between healthy food choices and bodyweight, BMI, and physical activity.
CONCLUSION
AI/AN adults from various tribal and urban communities participating in this preventive intervention made sustained changes in food choices and had reductions in diabetes risk factors. The outcomes demonstrate the feasibility and effectiveness of translating the DPP lifestyle intervention to community-based settings. |
Mild Steel Corrosion Inhibition by Various Plant Extracts in 0.5 M Sulphuric acid | Extracts of the leaves of various plants (Wrightia tinctoria, Clerodendrum phlomidis, Ipomoea triloba) were investigated as corrosion inhibitors of mild steel in 0.5 M H2SO4 using conventional weight loss, electrochemical polarization, electrochemical impedance spectroscopy, and scanning electron microscopic studies. The weight loss results showed that all the plant extracts are excellent corrosion inhibitors, and the electrochemical polarization data revealed a mixed mode of inhibition. The results of electrochemical impedance spectroscopy showed that the change in the impedance parameters, charge transfer resistance and double layer capacitance, with the change in concentration of the extract is due to the adsorption of active molecules, leading to the formation of a protective layer on the surface of mild steel. Scanning electron microscopic studies provided confirmatory evidence of the improved surface condition, due to this adsorption, for the corrosion protection. |
Reconstruction of the high urogenital sinus: early perineal prone approach without division of the rectum. | PURPOSE
Reconstruction of the vagina and external genitalia in the infant is quite challenging, particularly when a urogenital sinus is associated with high confluence of the vagina and urethra. Many surgeons believe that children with such a malformation should undergo staged or delayed reconstruction, so that vaginoplasty is done when the child is older and larger. Vaginoplasty early in life is thought to be difficult due to patient size and poor visualization. The posterior sagittal approach has been beneficial for acquiring exposure to high urogenital sinus anomalies but it has been thought to require splitting of the rectum and temporary colostomy. We report a modification of this technique.
MATERIALS AND METHODS
In the last 5 years all patients with urogenital sinus anomalies underwent reconstruction using a single stage approach regardless of the level of confluence. In 8 patients with a high level of confluence reconstruction was performed using a perineal prone approach. Exposure was achieved without division of the rectum. The operative technique is presented in detail.
RESULTS
This midline perineal prone approach has allowed excellent exposure of the high vagina even in infants. In all 8 patients reconstruction was done without difficulty and no patient required incision of the rectum or colostomy. This procedure did not preclude the use of a posteriorly based flap for vaginal reconstruction.
CONCLUSIONS
While patients with low confluence can be treated with single posteriorly based flap vaginoplasty, those with higher confluence may benefit from a perineal prone approach to achieve adequate exposure for pull-through vaginoplasty. This prone approach to the high urogenital sinus anomaly can be performed without division of the rectum, provides excellent exposure of the high confluence even in small children and does not preclude the use of posterior flaps for vaginal reconstruction. |
Effectiveness of non-pharmaceutical measures in preventing pediatric influenza: a case–control study | BACKGROUND
Hygiene behavior plays a relevant role in infectious disease transmission. The aim of this study was to evaluate non-pharmaceutical interventions (NPI) in preventing pediatric influenza infections.
METHODS
Laboratory-confirmed influenza cases that occurred during the 2009-10 and 2010-11 seasons were matched with controls by age and date of consultation. NPI use (frequency of hand washing, alcohol-based hand sanitizer use, and hand washing after touching contaminated surfaces) during the seven days prior to onset of symptoms was obtained from parents of cases and controls.
RESULTS
Cases presented a higher prevalence of underlying conditions such as pneumonia [OR = 3.23; 95% CI: 1.38-7.58, p = 0.007], asthma [OR = 2.45; 95% CI: 1.17-5.14, p = 0.02], and having more than one risk factor [OR = 1.67; 95% CI: 0.99-2.82, p = 0.05]. Hand washing more than 5 times per day [aOR = 0.62; 95% CI: 0.39-0.99, p = 0.04] was the only statistically significant protective factor. When considering two age groups (pre-school age, 0-4 yrs, and school age, 5-17 yrs), only the school age group showed a negative association with influenza infection, both for washing hands more than 5 times per day [aOR = 0.47; 95% CI: 0.22-0.99, p = 0.04] and for hand washing after touching contaminated surfaces [aOR = 0.19; 95% CI: 0.04-0.86, p = 0.03].
CONCLUSION
Frequent hand washing should be recommended to prevent influenza infection in the community setting, especially in the school age group. |
Ankle-Foot-Orthosis Control in Inclinations and Stairs | A control procedure is proposed for an ankle-foot-orthosis (AFO) for different gait situations, such as inclinations and stairs. This paper presents a novel AFO control of the ankle angle. A magneto-rheological damper was used to achieve ankle damping during foot down and locking at swing, thereby avoiding foot slap as well as foot drop. The controller used feedback from the ankle angle only. Still it was capable of not only adjusting damping within a gait step but also changing control behavior depending on level walking, ascending and descending stairs. As a consequence, toe strike was possible in stair gait as opposed to heel strike in level walking. Tests verified the expected behavior in stair gait and in level walking where gait speed and ground inclinations varied. The self-adjusted AFO is believed to improve gait comfort in slopes and stairs. |
Gay/Lesbian/Bisexual/Transgender Public Policy Issues. A Citizen's and Administrator's Guide to the New Cultural Struggle. | In response to the cultural war declared upon gay, lesbian, bisexual, and transgendered people by the radical right, this book gives you an inside look at the gay community's perspectives on the four major issues of school curricula, workplace protections, legitimization of same-sex relationships, and protections against discrimination. As Gay/Lesbian/Bisexual/Transgender Public Policy Issues examines the role of radical religious thought in American politics, it looks at the development of conflicts on the national level over the four main issues. It also looks at specific examples of the conflicts, such as the "Rainbow Curriculum" battle in New York City and the controversial school board elections in Des Moines. This insightful and comprehensive book discusses recent successes of the gay and lesbian movement in the development of local public policy as well as setbacks it has experienced. From election year gay-bashing to the persecution of gay and lesbian youth in school environments to lack of protection for gay and lesbian employees in the workplace, it is clear that much work remains to be done before the rights of gay, lesbian, bisexual, and transgendered persons are protected to the same degree as those of heterosexual persons.
Gay or straight, you will find your eyes truly opened as you read about: discrimination against gay and lesbian people by the American system of justice; congressional indifference to workplace oppression; the origins and aims of the religious right; the development of services to meet the needs of gay youths in the school environment; the major confrontations administrators face when dealing with sexual orientation issues; homelessness from a gay/lesbian/bisexual/transgendered point of view; and gay and lesbian role models and multicultural activities for adolescents. Citizens, public administrators, policymakers, educational administrators, social service providers, and teachers need to read Gay/Lesbian/Bisexual/Transgender Public Policy Issues to understand how members of the gay community remain disenfranchised in American society. You may think you are sensitive to the issues, but are you helping to find solutions? Get this book and find out! |
A 1.5-V, 10-bit, 14.3-MS/s CMOS Pipeline Analog-to-Digital Converter | A 1.5-V, 10-bit, 14.3-MS/s pipeline analog-to-digital converter was implemented in a 0.6-μm CMOS technology. Emphasis was placed on observing device reliability constraints at low voltage. MOS switches were implemented without low-threshold devices by using a bootstrapping technique that does not subject the devices to large terminal voltages. The converter achieved a peak signal-to-noise-and-distortion ratio of 58.5 dB, maximum differential nonlinearity of 0.5 least significant bit (LSB), maximum integral nonlinearity of 0.7 LSB, and a power consumption of 36 mW. |
A signed binary multiplication technique | Booth's multiplication algorithm multiplies two signed binary numbers in two's complement notation. A bit is added to the left of each partial product using sign extension, and the number of partial products is reduced to roughly half that required by a simple add-and-shift method, which adds the multiplicand X to itself Y times. The basic technique is inadequate when the multiplicand is the most negative number that can be represented at the given width, since its negation overflows; modified forms address this case. The method also extends to signed-unsigned multipliers and to serial multipliers that accept unsigned binary inputs one bit at a time, least significant bit first. Later work describes how to apply the method to familiar multipliers such as Booth's. |
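The record above is about Booth's signed-multiplication technique. As an illustrative sketch (our own, not from the source), the classic algorithm can be written in Python; the multiplicand field is widened by one bit so that even the most negative representable value, the failure case the text mentions, is handled correctly:

```python
def booth_multiply(m, r, bits=8):
    """Multiply two signed integers with Booth's algorithm.

    m and r are treated as `bits`-wide two's-complement values. The
    multiplicand is sign-extended by one extra bit so that negating the
    most negative representable value does not overflow its field.
    """
    ext = bits + 1                       # widened multiplicand field
    total = ext + bits + 1               # register width incl. Booth bit
    full = (1 << total) - 1

    m_se = m & ((1 << ext) - 1)          # multiplicand, sign-extended
    A = m_se << (bits + 1)               #  m in the high field
    S = ((-m) & ((1 << ext) - 1)) << (bits + 1)   # -m in the high field
    P = (r & ((1 << bits) - 1)) << 1     # multiplier + trailing Booth bit

    for _ in range(bits):
        pair = P & 0b11
        if pair == 0b01:                 # 01 -> add multiplicand
            P = (P + A) & full
        elif pair == 0b10:               # 10 -> subtract multiplicand
            P = (P + S) & full
        sign = P >> (total - 1)          # arithmetic right shift
        P = (P >> 1) | (sign << (total - 1))

    prod = (P >> 1) & ((1 << 2 * bits) - 1)   # drop Booth bit
    if prod >> (2 * bits - 1):                # reinterpret as signed
        prod -= 1 << (2 * bits)
    return prod
```

Runs of 0s and 1s in the multiplier trigger no additions, which is where the roughly-halved partial-product count comes from.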
Similarity-Based Unsupervised Band Selection for Hyperspectral Image Analysis | Band selection is a common approach to reduce the data dimensionality of hyperspectral imagery. It extracts several bands of importance in some sense by taking advantage of high spectral correlation. Driven by detection or classification accuracy, one would expect that, using a subset of original bands, the accuracy is unchanged or tolerably degraded, whereas computational burden is significantly relaxed. When the desired object information is known, this task can be achieved by finding the bands that contain the most information about these objects. When the desired object information is unknown, i.e., unsupervised band selection, the objective is to select the most distinctive and informative bands. It is expected that these bands can provide an overall satisfactory detection and classification performance. In this letter, we propose unsupervised band selection algorithms based on band similarity measurement. The experimental result shows that our approach can yield a better result in terms of information conservation and class separability than other widely used techniques. |
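The letter's specific similarity measures are not reproduced in its abstract; as a hedged illustration of the general idea, the sketch below greedily selects bands that are mutually dissimilar, using absolute Pearson correlation as a stand-in similarity metric (function names and the pair-seeding heuristic are ours, not the authors'):

```python
from math import sqrt

def corr(x, y):
    """Pearson correlation of two equal-length band vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy) if vx and vy else 0.0

def select_bands(bands, k):
    """Unsupervised band selection: seed with the least-similar band
    pair, then repeatedly add the band whose worst-case similarity to
    the already-selected set is smallest."""
    n = len(bands)
    _, i, j = min((abs(corr(bands[i], bands[j])), i, j)
                  for i in range(n) for j in range(i + 1, n))
    chosen = [i, j]
    while len(chosen) < k:
        _, nxt = min((max(abs(corr(bands[i], bands[j])) for j in chosen), i)
                     for i in range(n) if i not in chosen)
        chosen.append(nxt)
    return sorted(chosen)
```

Because no class labels are used, this is unsupervised in the sense of the letter: distinctiveness is judged purely from inter-band similarity.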
Text Detection and Localization in Complex Scene Images using Constrained AdaBoost Algorithm | We have proposed a complete system for text detection and localization in gray scale scene images. A boosting framework integrating feature and weak classifier selection based on computational complexity is proposed to construct efficient text detectors. The proposed scheme uses a small set of heterogeneous features which are spatially combined to build a large set of features. A neural network based localizer learns necessary rules for localization. The evaluation is done on the challenging ICDAR 2003 robust reading and text locating database. The results are encouraging and our system can localize text of various font sizes and styles in complex background. |
The hippocampus and visual perception | In this review, we will discuss the idea that the hippocampus may be involved in both memory and perception, contrary to theories that posit functional and neuroanatomical segregation of these processes. This suggestion is based on a number of recent neuropsychological and functional neuroimaging studies that have demonstrated that the hippocampus is involved in the visual discrimination of complex spatial scene stimuli. We argue that these findings cannot be explained by long-term memory or working memory processing or, in the case of patient findings, dysfunction beyond the medial temporal lobe (MTL). Instead, these studies point toward a role for the hippocampus in higher-order spatial perception. We suggest that the hippocampus processes complex conjunctions of spatial features, and that it may be more appropriate to consider the representations for which this structure is critical, rather than the cognitive processes that it mediates. |
The promise and limitations of partnered governance: the case of sustainable palm oil | Purpose – This paper sets out to report on a study of the Roundtable on Sustainable Palm Oil (RSPO) as an instance of “partnered governance” oriented to advance sustainable development in a supply chain. After briefly discussing the conceptualization of partnered governance, its social organizational features, and its drivers, the paper aims to outline the history and structure of RSPO and then to assess the effectiveness, efficiency and level of legitimation of this innovative governance structure. The paper points out several of the limitations as well as potentialities of partnered governance arrangements such as that of RSPO. Design/methodology/approach – The paper shows through a focused multi-method case study how the RSPO developed as consumer-oriented businesses partnered with civil society organizations and palm oil producers to address what was seen as a long-term threat to rain forests, on the one hand, and to financial interests, on the other. Findings – In the case of deforestation caused by oil palm expansion, national government intervention was absent and international regulation could not be mobilized. While the RSPO’s system of partnered governance may have many shortcomings, the paper stresses that there are few real alternatives that have been as successful in addressing this type of sustainability issue. A major structural problem with such partnerships for sustainability is that their emergence and development typically depend on powerful players. Originality/value – The originality/value of the paper lies in its identification of several of the strengths and weaknesses of partnered governance based on a focused case study, and suggests ways in which partnered governance can be developed and optimized in addressing sustainability issues. |
The romantic imagination | A critical guide to the poets of the romantic period, this collection of lectures reassesses the literary value of Blake, Coleridge, Wordsworth, Shelley, Keats, Byron, Poe, Christina and Dante Gabriel Rossetti, and Swinburne. |
Clustering Semantic Spaces of Suicide Notes and Newsgroups Articles | Historically, suicide risk assessment has relied on question-and-answer type tools. These tools, built on psychometric advances, are widely used because of their availability. Yet there is no known tool based on biologic and cognitive evidence. This absence often causes a vexing clinical problem for clinicians, who question the value of the result as time passes. The purpose of this paper is to describe one experiment in a series of experiments to develop a tool that combines Biological Markers (Bm) with Thought Markers (Tm), and uses machine learning to compute a real-time index for assessing the likelihood of a repeated suicide attempt in the next six months. For this study we focus on using unsupervised machine learning to distinguish between actual suicide notes and newsgroup articles. This is important because it gives us insight into how well these methods discriminate between real notes and general conversation. |
Systems Biology Analysis of Gene Expression during In Vivo Mycobacterium avium paratuberculosis Enteric Colonization Reveals Role for Immune Tolerance | Survival and persistence of Mycobacterium avium subsp. paratuberculosis (MAP) in the intestinal mucosa is associated with host immune tolerance. However, the initial events during MAP interaction with its host that lead to pathogen survival, granulomatous inflammation, and clinical disease progression are poorly defined. We hypothesize that immune tolerance is initiated upon initial contact of MAP with the intestinal Peyer's patch. To test our hypothesis, ligated ileal loops in neonatal calves were infected with MAP. Intestinal tissue RNAs were collected (0.5, 1, 2, 4, 8 and 12 hrs post-infection), processed, and hybridized to bovine gene expression microarrays. By comparing the gene transcription responses of MAP-infected calves, we observed informative, complex patterns of expression. To interpret these complex data, changes in gene expression were further analyzed by dynamic Bayesian analysis, and genes were grouped into specific pathways and gene ontology categories to create a holistic model. This model revealed three different phases of responses: i) early (30 min and 1 hr post-infection), ii) intermediate (2, 4 and 8 hrs post-infection), and iii) late (12 hrs post-infection). We describe here data that include expression profiles for perturbed pathways, as well as mechanistic genes (genes predicted to have regulatory influence) that are associated with immune tolerance. In the Early Phase of MAP infection, multiple pathways were initiated in response to MAP invasion via receptor mediated endocytosis and changes in intestinal permeability. During the Intermediate Phase, perturbed pathways involved the inflammatory responses, cytokine-cytokine receptor interaction, and cell-cell signaling.
During the Late Phase of infection, gene responses associated with immune tolerance were initiated at the level of T-cell signaling. Our study provides evidence that MAP infection resulted in differentially regulated genes, perturbed pathways and specifically modified mechanistic genes contributing to the colonization of Peyer's patch. |
Analyzing deception, evolvability, and behavioral rarity in evolutionary robotics | A common aim across evolutionary search is to skillfully navigate complex search spaces, which requires search algorithms that exploit search space structure. This paper focuses on evolutionary robotics (ER) in particular, wherein controllers for robots are evolved to produce complex behavior. One productive approach for probing search space structure is to analyze properties of fitness landscapes; however, this paper argues that ER may require a fresh perspective for landscape analysis, because ER often goes beyond the black-box setting, i.e. evaluations provide useful information about how robots behave, beyond scalar performance heuristics. Indeed, some ER algorithms explicitly exploit such behavioral information, e.g. to follow gradients of behavioral novelty rather than to climb gradients of increasing performance. Thus well-motivated behavior-aware metrics may aid probing search-space structure in ER. In particular, this paper argues that behavioral conceptions of deception, evolvability, and rarity may help to understand ER landscapes, and seeks to quantify and explore them within a common ER benchmark task. To help this investigation, an expressive but limited encoding is designed, such that the behavior of all possible individuals in the domain can be precomputed. The result is an efficient platform for experimentation that facilitates (1) probing exact quantifications of deception, evolvability, and rarity in the chosen domain, and (2) the ability to efficiently drive search through idealistic ground-truth measures. The results help develop intuitions and suggest possible new ER algorithms. The hope is that the extensible open-source framework enables quick experimentation and idea generation, aiding brainstorming of new search algorithms and measures. |
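The abstract does not spell out how behavioral rarity is quantified; one simple way such a score could be computed over a precomputed space of all individuals' behaviors is sketched below (our assumption, not the paper's metric: rarity as one minus a behavior's relative frequency across the whole encoding space):

```python
from collections import Counter

def behavioral_rarity(all_behaviors):
    """Score each distinct behavior by how rare it is in the population.

    all_behaviors: the behavior exhibited by every individual in the
    (exhaustively precomputed) encoding space. Returns a map from
    behavior to a rarity score in [0, 1): common behaviors score near 0,
    behaviors exhibited by few individuals score near 1.
    """
    counts = Counter(all_behaviors)
    total = len(all_behaviors)
    return {b: 1.0 - c / total for b, c in counts.items()}
```

With the paper's exhaustive precomputation, such ground-truth scores can be queried exactly rather than estimated from a sampled population, which is what makes the benchmark an efficient experimentation platform.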
Prognostic Indicators of Persistent Post-Concussive Symptoms after Deployment-Related Mild Traumatic Brain Injury: A Prospective Longitudinal Study in U.S. Army Soldiers. | Mild traumatic brain injury (mTBI), or concussion, is prevalent in the military. The course of recovery can be highly variable. This study investigates whether deployment-acquired mTBI is associated with subsequent presence and severity of post-concussive symptoms (PCS) and identifies predictors of persistent PCS among U.S. Army personnel who sustained mTBI while deployed to Afghanistan. We used data from a prospective longitudinal survey of soldiers assessed 1-2 months before a 10-month deployment to Afghanistan (T0), on redeployment to the United States (T1), approximately 3 months later (T2), and approximately 9 months later (T3). Outcomes of interest were PCS at T2 and T3. Predictors considered were: sociodemographic factors, number of previous deployments, pre-deployment mental health and TBI history, and mTBI and other military-related stress during the index deployment. The study sample comprised 4518 soldiers, 822 (18.2%) of whom experienced mTBI during the index deployment. After adjusting for demographic, clinical, and deployment-related factors, deployment-acquired mTBI was associated with nearly triple the risk of reporting any PCS and with increased severity of PCS when symptoms were present. Among those who sustained mTBI, severity of PCS at follow-up was associated with history of pre-deployment TBI(s), pre-deployment psychological distress, more severe deployment stress, and loss of consciousness or lapse of memory (versus being "dazed" only) as a result of deployment-acquired mTBI. In summary, we found that sustaining mTBI increases risk for persistent PCS. Previous TBI(s), pre-deployment psychological distress, severe deployment stress, and loss of consciousness or lapse of memory resulting from mTBI(s) are prognostic indicators of persistent PCS after an index mTBI. These observations may have actionable implications for prevention of chronic sequelae of mTBI in the military and other settings. |
Maitland: Lighter-Weight VM Introspection to Support Cyber-security in the Cloud | Despite defensive advances, malicious software (malware) remains an ever-present cyber-security threat. Cloud environments are far from malware immune, in that: i) they innately support the execution of remotely supplied code, and ii) escaping their virtual machine (VM) confines has proven relatively easy to achieve in practice. The growing interest in clouds by industries and governments is also creating a core need to be able to formally address cloud security and privacy issues. VM introspection provides one of the core cyber-security tools for analyzing the run-time behaviors of code. Traditionally, introspection approaches have required close integration with the underlying hypervisors and substantial re-engineering when OS updates and patches are applied. Such heavy-weight introspection techniques, therefore, are too invasive to fit well within modern commercial clouds. Instead, lighter-weight introspection techniques are required that provide the same levels of within-VM observability but without the tight hypervisor and OS patch-level integration. This work introduces Maitland as a prototype proof-of-concept implementation of a lighter-weight introspection tool, which exploits paravirtualization to meet these end-goals. The work assesses Maitland's performance, highlights its use to perform packer-independent malware detection, and assesses whether, with further optimizations, Maitland could provide a viable approach for introspection in commercial clouds. |
Derivatives of Regular Expressions | Kleene's regular expressions, which can be used for describing sequential circuits, were defined using three operators (union, concatenation, and iterate) on sets of sequences. Word descriptions of problems can be more easily put in the regular expression language if the language is enriched by the inclusion of other logical operations. However, in the problem of converting the regular expression description to a state diagram, the existing methods either cannot handle expressions with additional operators, or are made quite complicated by the presence of such operators. In this paper the notion of a derivative of a regular expression is introduced and the properties of derivatives are discussed. This leads, in a very natural way, to the construction of a state diagram from a regular expression containing any number of logical operators. |
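The derivative construction described in the abstract above (Brzozowski derivatives) can be sketched directly: the derivative of an expression r with respect to a symbol c is an expression matching the suffixes of words in r that begin with c, and a word matches r iff the expression left after taking derivatives for each symbol accepts the empty string. This is a minimal sketch over a tiny hypothetical AST, covering only the three basic operators, not the extended logical operators the paper handles.

```python
from dataclasses import dataclass

class Re: pass

@dataclass(frozen=True)
class Empty(Re): pass            # matches nothing
@dataclass(frozen=True)
class Eps(Re): pass              # matches only the empty string
@dataclass(frozen=True)
class Sym(Re):
    c: str                       # matches a single symbol
@dataclass(frozen=True)
class Alt(Re):
    l: Re; r: Re                 # union
@dataclass(frozen=True)
class Cat(Re):
    l: Re; r: Re                 # concatenation
@dataclass(frozen=True)
class Star(Re):
    r: Re                        # iterate (Kleene star)

def nullable(r: Re) -> bool:
    """True iff r accepts the empty string."""
    if isinstance(r, (Eps, Star)): return True
    if isinstance(r, (Empty, Sym)): return False
    if isinstance(r, Alt): return nullable(r.l) or nullable(r.r)
    return nullable(r.l) and nullable(r.r)   # Cat

def deriv(r: Re, c: str) -> Re:
    """Derivative of r with respect to the symbol c."""
    if isinstance(r, (Empty, Eps)): return Empty()
    if isinstance(r, Sym): return Eps() if r.c == c else Empty()
    if isinstance(r, Alt): return Alt(deriv(r.l, c), deriv(r.r, c))
    if isinstance(r, Star): return Cat(deriv(r.r, c), r)
    left = Cat(deriv(r.l, c), r.r)           # Cat
    return Alt(left, deriv(r.r, c)) if nullable(r.l) else left

def matches(r: Re, s: str) -> bool:
    for c in s:
        r = deriv(r, c)
    return nullable(r)
```

For example, `matches(Cat(Sym('a'), Star(Sym('b'))), "abb")` holds while `"ba"` does not. The paper's state-diagram construction follows by treating each distinct derivative as a state, with transitions on each symbol to the corresponding derivative.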
Introduction to Artificial Neural Network (ANN) Methods: What They Are and How to Use Them | Basic concepts of ANNs, together with the three most widely used ANN learning strategies (error back-propagation, Kohonen, and counterpropagation), are explained and discussed. To show how the explained methods can be applied to chemical problems, one simple example, the classification and prediction of the origin of different olive oil samples, each represented by eight fatty acid concentrations, is worked out in detail. |
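Of the three learning strategies the tutorial above covers, error back-propagation is the most common and can be shown in a few lines. This is a minimal numpy sketch of a one-hidden-layer sigmoid network trained by batch gradient descent; the XOR toy problem stands in for the olive-oil classification example, and the layer sizes, learning rate, and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)       # XOR targets

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)       # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)       # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)       # error propagated back to hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

The key step is the `d_h` line: the output error is pushed backward through the weights `W2` and scaled by the sigmoid derivative `h * (1 - h)`, which is what gives back-propagation its name.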
The Model of Intellectual Capital Approach on the Human Capital Vision | The management of human capital has long fascinated scholars, and managers have long sought the best way to derive value from their human assets. In the era of knowledge, it is essential to identify the vectors of intellectual capital associated with organizational productivity. This article presents empirical research aimed at producing an explanatory model. These studies belong to a major line of research initiated by Antonio Martins in 2000, which has been the subject of numerous publications. Following the seminal studies of Edvinsson and Nonaka, the authors propose a model of intellectual capital grounded in the theoretical conceptions of the human capital literature. The article first presents a literature review, followed by an explanation of the method, and then the results and a discussion of them. Finally, it revisits the model originally presented by Antonio Martins, now validated and supported. In the knowledge society, mastery of intellectual capital management increasingly makes the difference. |
Transmission electron microscopy for the diagnosis of epidermolysis bullosa. | Transmission electron microscopy (TEM) has long been the best available method for the diagnosis of epidermolysis bullosa. Today, TEM is largely superseded by immunofluorescence microscopy mapping, which is generally more available. This article discusses its continuing role in confirming or refining results obtained by other methods, or in establishing the diagnosis where other techniques have been unsuitable or have failed. It covers key steps for optimizing tissue preparation, features of analysis, recently classified epidermolysis bullosa disorders, and strengths and weaknesses of TEM. |
Radio Frequency Identification: Supply Chain Impact and Implementation Challenges | Radio Frequency Identification (RFID) technology has received considerable attention from practitioners, driven by mandates from major retailers and the United States Department of Defense. RFID technology promises numerous benefits in the supply chain, such as increased visibility, security, and efficiency. Despite such attention and the anticipated benefits, RFID is not well understood, and many problems exist in its adoption and implementation. The purpose of this paper is to introduce RFID technology to practitioners and academicians by systematically reviewing the relevant literature and discussing how RFID systems work, their advantages, their supply chain impacts, and the implementation challenges and corresponding strategies, in the hope of providing guidance for practitioners implementing RFID technology and offering a springboard for academicians to conduct future research in this area. |
Coarse-to-Fine Auto-Encoder Networks (CFAN) for Real-Time Face Alignment | Accurate face alignment is a vital prerequisite for most face perception tasks such as face recognition, facial expression analysis, and non-realistic face re-rendering. It can be formulated as the nonlinear inference of the facial landmarks from the detected face region. A deep network seems a good choice to model this nonlinearity, but it is nontrivial to apply one directly. In this paper, instead of a straightforward application of a deep network, we propose a Coarse-to-Fine Auto-encoder Networks (CFAN) approach, which cascades a few successive Stacked Auto-encoder Networks (SANs). Specifically, the first SAN predicts the landmarks quickly, and accurately enough as a preliminary estimate, by taking as input a low-resolution version of the detected face holistically. The following SANs then progressively refine the landmarks by taking as input the local features extracted around the current landmarks (the output of the previous SAN) at higher and higher resolutions. Extensive experiments conducted on three challenging datasets demonstrate that our CFAN outperforms the state-of-the-art methods and runs in real time (40+ fps, excluding face detection, on a desktop). |
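The cascade structure described in the abstract above can be sketched schematically: a coarse stage maps a low-resolution global view to initial landmark coordinates, and each later stage adds a correction estimated from features around the current landmarks. This sketch only shows the data flow; the stage functions are random linear stand-ins with made-up shapes (16x16 face, 5 landmarks, 3 refinement stages), not the paper's trained SANs.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_stage(face_lowres):
    # stand-in "SAN": global low-res image -> initial landmark coordinates
    W = rng.normal(0, 0.01, (face_lowres.size, 10))
    return face_lowres.ravel() @ W           # 5 landmarks as (x, y) pairs

def refine_stage(local_features):
    # stand-in "SAN": features around current landmarks -> correction
    W = rng.normal(0, 0.01, (local_features.size, 10))
    return local_features.ravel() @ W

face = rng.uniform(0, 1, (16, 16))           # fake low-resolution face image
landmarks = coarse_stage(face)
for _ in range(3):                           # successive refinement stages
    # in CFAN these features come from progressively higher-resolution patches
    local = rng.uniform(0, 1, 40)
    landmarks = landmarks + refine_stage(local)
```

The coarse-then-refine split is the design point: the first stage only needs to be roughly right from cheap global input, so each refinement stage solves a much easier local correction problem.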