title | abstract
---|---
Sparsity-based image denoising via dictionary learning and structural clustering | Where does the sparsity in image signals come from? Local and nonlocal image models have supplied complementary views toward the regularity in natural images — the former attempts to construct or learn a dictionary of basis functions that promotes the sparsity, while the latter connects the sparsity with the self-similarity of the image source by clustering. In this paper, we present a variational framework for unifying the above two views and propose a new denoising algorithm built upon clustering-based sparse representation (CSR). Inspired by the success of l1-optimization, we have formulated a double-header l1-optimization problem where the regularization involves both dictionary learning and structural clustering. A surrogate-function-based iterative shrinkage solution has been developed to solve the double-header l1-optimization problem, and a probabilistic interpretation of the CSR model is also included. Our experimental results have shown convincing improvements over the state-of-the-art denoising technique BM3D on the class of regular texture images. The PSNR performance of CSR denoising is at least comparable and often superior to other competing schemes, including BM3D, on a collection of 12 generic natural images. |
Working memory. | The term working memory refers to a brain system that provides temporary storage and manipulation of the information necessary for such complex cognitive tasks as language comprehension, learning, and reasoning. This definition has evolved from the concept of a unitary short-term memory system. Working memory has been found to require the simultaneous storage and processing of information. It can be divided into the following three subcomponents: (i) the central executive, which is assumed to be an attentional-controlling system, is important in skills such as chess playing and is particularly susceptible to the effects of Alzheimer's disease; and two slave systems, namely (ii) the visuospatial sketch pad, which manipulates visual images and (iii) the phonological loop, which stores and rehearses speech-based information and is necessary for the acquisition of both native and second-language vocabulary. |
Metaphor and Lexical Semantics | This paper shows that several sorts of expressions cannot be interpreted metaphorically, including determiners, tenses, etc. Generally, functional categories cannot be interpreted metaphorically, while lexical categories can. This reveals a semantic property of functional categories, and it shows that metaphor can be used as a probe for investigating them. It also reveals an important linguistic constraint on metaphor. The paper argues this constraint applies to the interface between the cognitive systems for language and metaphor. However, the constraint does not completely prevent structural elements of language from being available to the metaphor system. The paper shows that linguistic structure within the lexicon, specifically, aspectual structure, is available to the metaphor system. This paper takes as its starting point an observation about which sorts of expressions can receive metaphorical interpretations. Surprisingly, there are a number of expressions that cannot be interpreted metaphorically. Quantifier expressions (i.e. determiners) provide a good example. Consider a richly metaphorical sentence like: (1) Read o’er the volume of young Paris’ face, And find delight writ there with beauty’s pen; Examine every married lineament (Romeo and Juliet I.3). In appreciating Shakespeare’s lovely use of language, writ and pen are obviously understood metaphorically, and married lineament must be too. (The meanings listed in the Oxford English Dictionary for lineament include diagram, portion of a body, and portion of the face viewed with respect to its outline.) In spite of all this rich metaphor, every means simply every, in its usual literal form. Indeed, we cannot think of what a metaphorical interpretation of every would be. As we will see, this is not an isolated case: while many expressions can be interpreted metaphorically, there is a broad and important group of expressions that cannot. 
Much of this paper will be devoted to exploring the significance of this observation. It shows us something about metaphor. In particular, it shows that there is a non-trivial linguistic constraint on metaphor. This is a somewhat surprising result, as one of the leading ideas in the theory of metaphor is that metaphor comprehension is an aspect of our more general cognitive abilities, and not tied to the specific structure of language. The constraint on metaphor also shows us something about linguistic meaning. We will see that the class of expressions that fail to have metaphorical interpretations is a linguistically important one. Linguistic items are often grouped into two classes: lexical categories, including nouns, verbs, etc., and functional categories, including determiners (quantifier expressions), tenses, etc. Generally, we will see that lexical categories can have metaphorical interpretations, while functional ones cannot. This reveals something about the kinds of semantic properties these expressions can have. It also shows that we can use the availability of metaphorical interpretation as a kind of probe, to help distinguish these sorts of categories. Functional categories are often described as ‘structural elements’ of language. They are the ‘linguistic glue’ that holds sentences together, and so, their expressions are described as being semantically ‘thin’. Our metaphor probe will give some substance to this (often very rough-and-ready) idea. But it raises the question of whether all such structural elements in language—anything we can describe as ‘linguistic glue’—are invisible when it comes to metaphorical interpretation. We will see that this is not so. In particular, we will see that linguistic structure that can be found within lexical items may be available to metaphorical interpretation. This paper will show specifically that so-called aspectual structure is available to the metaphor system. |
Development of Generalized Photovoltaic Model Using MATLAB/SIMULINK | Taking sunlight irradiance and cell temperature into consideration, the output current and power characteristics of the PV model are simulated and optimized using the proposed model. This enables the dynamics of the PV power system to be easily simulated, analyzed, and optimized. |
Bang–Bang Control Class-D Amplifiers: Power-Supply Noise | In this paper, the bang-bang control class-D amplifier (bang-bang amp) is suggested as an alternative to the incumbent pulsewidth-modulation class-D amp for low-voltage, power-critical applications, including hearing aids. The effects of power-supply noise, quantified by the power-supply rejection ratio (PSRR), on two bang-bang amps, bang-bang type-I and type-II, are investigated for low-signal-bandwidth hearing aid applications and for the usual full audio bandwidth applications. By deriving analytical expressions for the PSRR of these amps, the important parameters related to PSRR are analyzed. The analyses are verified by means of HSPICE simulations and by measurements on practical circuits. The relationships derived herein provide good practical insight into the design of bang-bang amps, including how various parameters may be varied or optimized to meet a given PSRR specification. |
Network QoS Management in Cyber-Physical Systems | Technical advances in ubiquitous sensing, embedded computing, and wireless communication are leading to a new generation of engineered systems called cyber-physical systems (CPS). CPS promises to transform the way we interact with the physical world just as the Internet transformed how we interact with one another. Before this vision becomes a reality, however, a large number of challenges have to be addressed. Network quality of service (QoS) management in this new realm is among those issues that deserve extensive research efforts. It is envisioned that wireless sensor/actuator networks (WSANs) will play an essential role in CPS. This paper examines the main characteristics of WSANs and the requirements of QoS provisioning in the context of cyber-physical computing. Several research topics and challenges are identified. As a sample solution, a feedback scheduling framework is proposed to tackle some of the identified challenges. A simple example is also presented that illustrates the effectiveness of the proposed solution. |
Effects of dietary fat intervention on mental health in women. | Several studies have identified potential detrimental sequelae of cholesterol- and fat-lowering interventions in randomized trials. Little research has been published to document changes in mental health in women as a result of fat- and cholesterol-lowering interventions to prevent chronic disease. This paper examines the relationships among changes in dietary fat consumption and mental health in the Women's Health Trial, a randomized, controlled trial to determine whether lowering fat consumption to 20% of daily calories could reduce the incidence of breast cancer in women ages 45-69 years. Assessments were made at baseline and at the 12-month follow-up of several aspects of quality of life, including negative and positive affect and past, present, and future perceptions of health. Mental health variables were measured by the Mental Health Inventory, a standardized scale used in the Medical Outcomes Study. Dietary intake was assessed for all subjects with the use of semiquantitative food frequency questionnaires. The change in mental health values (follow-up minus baseline) was significantly different between intervention and control groups for three of the four psychological variables: (a) anxiety; (b) depression; and (c) vigor. In all three cases, the direction of the change for intervention women was positive. Neither randomization assignment nor percent of calories from fat at the follow-up visit was a significant predictor of mental health at the 1-year follow-up. Cholesterol changes were not related to levels of mental health variables in a sample of the women. These data indicate that lowering fat in the diets of healthy women does not produce overall lowering of any mental health variables. |
Predicting and Retrospective Analysis of Soccer Matches in a League | A common discussion subject, for the male part of the population in particular, is the prediction of next weekend’s soccer matches, especially for the local team. Knowledge of offensive and defensive skills is valuable in the decision process before making a bet at a bookmaker. In this article we take an applied statistician’s approach to the problem, suggesting a Bayesian dynamic generalised linear model to estimate the time-dependent skills of all teams in a league, and to predict next weekend’s soccer matches. The problem is more intricate than it may appear at first glance, as we need to estimate the skills of all teams simultaneously because they are dependent. It is now possible to deal with such inference problems using the iterative simulation technique known as Markov Chain Monte Carlo. We will show various applications of the proposed model based on the English Premier League and Division 1 in 1997-98: prediction with application to betting, retrospective analysis of the final ranking, detection of surprising matches, and how each team’s properties vary during the season. |
Characterisation of Mediterranean Grape Pomace Seed and Skin Extracts: Polyphenolic Content and Antioxidant Activity | Grape pomace seeds and skins from different Mediterranean varieties (Grenache [GRE], Syrah [SYR], Carignan [CAR], Mourvèdre [MOU] and Alicante [ALI]) were extracted using water and water/ethanol 70% in order to develop edible extracts (an aqueous extract [EAQ] and a 70% hydro-alcoholic extract [EA70]) for potential use in nutraceutical or cosmetic formulations. In this study, global content (total polyphenols, total anthocyanins and total tannins), flavan-3-ols and anthocyanins were assessed using HPLC-UV-Fluo-MSn. In addition, extract potential was evaluated by four different assays: Oxygen Radical Absorbance Capacity (ORAC), Ferric Reducing Antioxidant Potential assay (FRAP), Trolox equivalent antioxidant capacity (TEAC) or ABTS assay, and 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging assay. As expected, seed pomace extracts contained higher amounts of polyphenols than skin pomace extracts. Indeed, seeds from Syrah contained a particularly high amount of total polyphenols and tannins in both types of extracts (up to 215.84 ± 1.47 mg of gallic acid equivalent [GAE]/g dry weight (DW) and 455.42 ± 1.84 mg/g DW, respectively). These extracts also exhibited the highest antioxidant potential in every test. For skins, the maximum total phenolic content was found in Alicante EAQ (196.71 ± 0.37 mg GAE/g DW) and in Syrah EA70 (224.92 ± 0.18 mg GAE/g DW). Results obtained in this article constitute a useful tool for the pre-selection of grape pomace seed and skin extracts for nutraceutical purposes. |
Learning Spatial-Semantic Context with Fully Convolutional Recurrent Network for Online Handwritten Chinese Text Recognition | Online handwritten Chinese text recognition (OHCTR) is a challenging problem as it involves a large-scale character set, ambiguous segmentation, and variable-length input sequences. In this paper, we exploit the outstanding capability of path signature to translate online pen-tip trajectories into informative signature feature maps, successfully capturing the analytic and geometric properties of pen strokes with strong local invariance and robustness. A multi-spatial-context fully convolutional recurrent network (MC-FCRN) is proposed to exploit the multiple spatial contexts from the signature feature maps and generate a prediction sequence while completely avoiding the difficult segmentation problem. Furthermore, an implicit language model is developed to make predictions based on semantic context within a predicting feature sequence, providing a new perspective for incorporating lexicon constraints and prior knowledge about a certain language in the recognition procedure. Experiments on two standard benchmarks, Dataset-CASIA and Dataset-ICDAR, yielded outstanding results, with correct rates of 97.50 and 96.58 percent, respectively, which are significantly better than the best result reported thus far in the literature. |
Children's engagement with educational iPad apps: Insights from a Spanish classroom | This study investigates the effects of a story-making app called Our Story (OS) and a selection of other educational apps on the learning engagement of forty-one Spanish 4–5-year-olds. Children were observed interacting in small groups with the story-making app and this was compared to their engagement with a selection of construction and drawing apps. Children’s engagement was analysed in two ways: it was categorised using Bangert-Drowns and Pyke’s taxonomy for individual hands-on engagement with educational software, and using the concept of exploratory talk as developed by Mercer et al. to analyse peer engagement. For both approaches, quantitative and qualitative indices of children’s engagement were considered. The overall findings suggested that in terms of the Bangert-Drowns and Pyke taxonomy, the quality of children’s individual engagement was higher with the OS app in contrast to their engagement with other app software. The frequency of children’s use of exploratory talk was similar with the OS and colouring and drawing apps, and a detailed qualitative analysis of the interaction transcripts revealed several instances of the OS and drawing apps supporting joint problem-solving and collaborative engagement. We suggest that critical indices of an app’s educational value are the extent to which the app supports opportunities for open-ended content and children’s independent use of increasingly difficult features. |
CTLA-4 and PD-1 Pathways: Similarities, Differences, and Implications of Their Inhibition | The cytotoxic T-lymphocyte–associated antigen 4 (CTLA-4) and programmed death 1 (PD-1) immune checkpoints are negative regulators of T-cell immune function. Inhibition of these targets, resulting in increased activation of the immune system, has led to new immunotherapies for melanoma, non-small cell lung cancer, and other cancers. Ipilimumab, an inhibitor of CTLA-4, is approved for the treatment of advanced or unresectable melanoma. Nivolumab and pembrolizumab, both PD-1 inhibitors, are approved to treat patients with advanced or metastatic melanoma and patients with metastatic, refractory non-small cell lung cancer. In addition, the combination of ipilimumab and nivolumab has been approved in patients with BRAF WT metastatic or unresectable melanoma. The roles of CTLA-4 and PD-1 in inhibiting immune responses, including antitumor responses, are largely distinct. CTLA-4 is thought to regulate T-cell proliferation early in an immune response, primarily in lymph nodes, whereas PD-1 suppresses T cells later in an immune response, primarily in peripheral tissues. The clinical profiles of immuno-oncology agents inhibiting these 2 checkpoints may vary based on their mechanistic differences. This article provides an overview of the CTLA-4 and PD-1 pathways and implications of their inhibition in cancer therapy. |
Feasibility Investigation of Low-Cost Substrate Integrated Waveguide (SIW) Directional Couplers | In this paper, the feasibility of Substrate Integrated Waveguide (SIW) couplers, fabricated using single-layer TACONIC RF-35 dielectric substrate, is investigated. The couplers have been produced employing a standard PCB process. The choice of the TACONIC RF-35 substrate as an alternative to other conventional materials is motivated by its lower cost and high dielectric constant, allowing the reduction of the device size. The coupler requirements are a 90-degree phase shift between the output and the coupled ports and a frequency bandwidth from about 10.5 GHz to 12.5 GHz. The design and optimization of the couplers have been performed by using the software CST Microwave Studio. Eight different coupler configurations have been designed and compared. The three best couplers have been fabricated and characterized. The proposed SIW directional couplers could be integrated within more complex planar circuits or utilized as stand-alone devices, because of their compact size. They exhibit good performance and could be employed in communication applications such as broadcast signal distribution and as key elements for the construction of other microwave devices and systems. |
PrivTree: A Differentially Private Algorithm for Hierarchical Decompositions | Given a set D of tuples defined on a domain Omega, we study differentially private algorithms for constructing a histogram over Omega to approximate the tuple distribution in D. Existing solutions for the problem mostly adopt a hierarchical decomposition approach, which recursively splits Omega into sub-domains and computes a noisy tuple count for each sub-domain, until all noisy counts are below a certain threshold. This approach, however, requires that we (i) impose a limit h on the recursion depth in the splitting of Omega and (ii) set the noise in each count to be proportional to h. The choice of h is a serious dilemma: a small h makes the resulting histogram too coarse-grained, while a large h leads to excessive noise in the tuple counts used in deciding whether sub-domains should be split. Furthermore, h cannot be directly tuned based on D; otherwise, the choice of h itself reveals private information and violates differential privacy. To remedy the deficiency of existing solutions, we present PrivTree, a histogram construction algorithm that adopts hierarchical decomposition but completely eliminates the dependency on a pre-defined h. The core of PrivTree is a novel mechanism that (i) exploits a new analysis on the Laplace distribution and (ii) enables us to use only a constant amount of noise in deciding whether a sub-domain should be split, without worrying about the recursion depth of splitting. We demonstrate the application of PrivTree in modelling spatial data, and show that it can be extended to handle sequence data (where the decision in sub-domain splitting is not based on tuple counts but a more sophisticated measure). Our experiments on a variety of real datasets show that PrivTree considerably outperforms the state of the art in terms of data utility. |
A Multi-Layered Annotated Corpus of Scientific Papers | Scientific literature records the research process with a standardized structure and provides the clues to track the progress in a scientific field. Understanding its internal structure and content is of paramount importance for natural language processing (NLP) technologies. To meet this requirement, we have developed a multi-layered annotated corpus of scientific papers in the domain of Computer Graphics. Sentences are annotated with respect to their role in the argumentative structure of the discourse. The purpose of each citation is specified. Special features of the scientific discourse, such as advantages and disadvantages, are identified. In addition, a grade is allocated to each sentence according to its relevance for being included in a summary. To the best of our knowledge, this complex, multi-layered collection of annotations and metadata characterizing a set of research papers had never been grouped together before in one corpus and therefore constitutes a newer, richer resource with respect to those currently available in the field. |
Discovering actionable patterns in event data | Applications such as those for systems management and intrusion detection employ an automated real-time operation system in which sensor data are collected and processed in real time. Although such a system effectively reduces the need for operation staff, it requires constructing and maintaining correlation rules. Currently, rule construction requires experts to identify problem patterns, a process that is time-consuming and error-prone. In this paper, we propose reducing this burden by mining historical data that are readily available. Specifically, we first present efficient algorithms to mine three types of important patterns from historical event data: event bursts, periodic patterns, and mutually dependent patterns. We then discuss a framework for efficiently mining events that have multiple attributes. Last, we present Event Correlation Constructor—a tool that validates and extends correlation knowledge. |
Straddle injuries in female children and adolescents: 10-year accident and management analysis. | OBJECTIVE
To analyze unintentional straddle injuries in girls with regard to epidemiology, etiology, and injury management.
METHODS
The hospital database was retrospectively reviewed (1999-2009) for female patients managed for genital trauma. Patients were evaluated based on age, causative factors, type of injury, area of genitals affected, management and outcomes.
RESULTS
Straddle injuries were documented in 91 girls with age ranging from 1 to 15 y (mean = 6.3 y; median = 6.1 y). The causes of injuries were falls at home (n = 31) or outdoors (n = 27), and sport activities (swimming pool n = 11, skating n = 11, bicycle n = 9 and scooter n = 2). Most of the injuries were lacerations. Injuries involved the labia majora (n = 56), labia minora (n = 45) and introitus vaginae (n = 15). Twelve children received outpatient treatment. Inspection under anesthesia was performed in 79 patients, with 76 requiring sutures. While hematuria was observed in 18 patients, cystoscopy did not reveal lesions in the urethra or bladder. Associated injuries were femur fracture (n = 1), lower extremity lacerations (n = 4) and anal lesions (n = 2). Follow-up investigations were uneventful; however, one patient developed a secondary abscess and another developed secondary hyperplasia of the labia minora.
CONCLUSIONS
Falls and sports are major causes of straddle injuries with a peak at the age of six years. Lacerations are the most common injuries and often require surgical management. Urinary tract injuries and other associated injuries are relatively uncommon in girls with straddle injuries. |
Coping with compassion fatigue. | Helping others who have undergone a trauma from a natural disaster, accident, or sudden act of violence can be highly satisfying work. But helping trauma victims can take a toll on even the most seasoned mental health professional. Ongoing exposure to the suffering of those you are helping can bring on a range of signs and symptoms, including anxiety, sleeplessness, irritability, and feelings of helplessness, that can interfere, sometimes significantly, with everyday life and work. In clinicians, including therapists, counselors, and social workers, this response is often referred to as “compassion fatigue” or “secondary post-traumatic stress.” |
Checkpointing algorithms and fault prediction | This paper deals with the impact of fault prediction techniques on checkpointing strategies. We extend the classical first-order analysis of Young and Daly in the presence of a fault prediction system, characterized by its recall (the fraction of faults that are predicted) and its precision (the fraction of predicted faults that correspond to actual faults). In this framework, we provide an optimal algorithm to decide when to take predictions into account, and we derive the optimal value of the checkpointing period. These results allow us to analytically assess the key parameters that impact the performance of fault predictors at very large scale. Key-words: Fault-tolerance, checkpointing, prediction, algorithms, model, exascale |
Video analysis of injuries and incidents in Norwegian professional football. | OBJECTIVES
This study describes the characteristics of injuries and high-risk situations in the Norwegian professional football league during one competitive season using Football Incident Analysis (FIA), a video-based method.
METHODS
Videotapes and injury information were collected prospectively for 174 of 182 (96%) regular league matches during the 2000 season. Incidents where the match was interrupted due to an assumed injury were analysed using FIA to examine the characteristics of the playing situation causing the incident. Club medical staff prospectively recorded all acute injuries on a specific injury questionnaire. Each incident identified on the videotapes was cross-referenced with the injury report.
RESULTS
During the 174 matches, 425 incidents were recorded and 121 acute injuries were reported. Of these 121 injuries, 52 (43%) were identified on video, including all head injuries, 58% of knee injuries, 56% of ankle injuries, and 29% of thigh injuries. Strikers were more susceptible to injury than other players, and although most of the incidents and injuries resulted from duels, no single classic injury situation typical of football injuries or incidents could be recognised. However, in most cases the exposed player seemed to be unaware of the opponent challenging him for ball possession.
CONCLUSIONS
This study shows that, in spite of a thorough video analysis, less than half of the injuries are identified on video. It is difficult to identify typical patterns in the playing events leading to incidents and injuries, but players seemed to be unaware of the opponent challenging them for ball possession. |
EBG Antennas: Their Design and Performance Analysis for Wireless Applications | Microstrip patch antennas became very popular in mobile and radio wireless communication due to the ease of their analysis and fabrication and their attractive radiation characteristics. The use of microstrip antennas in wireless communication is found advantageous compared to other types of antennas due to their low fabrication cost, small size, support for both linear and circular polarization, and robustness when mounted on rigid surfaces. However, they have their own limitations: low efficiency, narrow bandwidth, surface wave loss, and low gain. Electromagnetic Band Gap (EBG) material, used as a superstrate, can overcome the limitations of the microstrip patch antenna. The main aim of this paper is to implement EBG antennas and compare their characteristics at the frequency 2.4 GHz using simulation. These designs are simulated using the High Frequency Structure Simulator (HFSS) tool. General Terms: Antenna Theory, Performance and Design, Analysis, Characteristics, Communication, Wireless Applications. |
Repetitive transcranial magnetic stimulation for levodopa-induced dyskinesias in Parkinson's disease. | In a placebo-controlled, single-blinded, crossover study, we assessed the effect of "real" repetitive transcranial magnetic stimulation (rTMS) versus "sham" rTMS (placebo) on peak dose dyskinesias in patients with Parkinson's disease (PD). Ten patients with PD and prominent dyskinesias had rTMS (1,800 pulses; 1 Hz rate) delivered over the motor cortex for 4 consecutive days twice, once with real stimuli and once with sham stimulation; evaluations were done at baseline and 1 day after the end of each treatment series. Direct comparison between sham and real rTMS effects showed no significant difference in clinician-assessed dyskinesia severity. However, comparison with the baseline showed a small but significant reduction in dyskinesia severity following real rTMS but not placebo. The major effect was on the dystonia subscore. Similarly, in patient diaries, although both treatments caused a reduction in subjective dyskinesia scores during the days of intervention, the effect was sustained for 3 days after the intervention for the real rTMS only. Following rTMS, no side effects and no adverse effects on motor function and PD symptoms were noted. The results suggest the existence of residual beneficial clinical aftereffects of consecutive daily applications of low-frequency rTMS on dyskinesias in PD. The effects may be further exploited for potential therapeutic uses. |
Prognostic significance of S100A4 expression in stage II and III colorectal cancer: results from a population‐based series and a randomized phase III study on adjuvant chemotherapy | Current clinical algorithms are unable to precisely predict which colorectal cancer patients would benefit from adjuvant chemotherapy, and there is a need for novel biomarkers to improve the selection of patients. The metastasis-promoting protein S100A4 predicts poor outcome in colorectal cancer, but whether it could be used to guide clinical decision making remains to be resolved. S100A4 expression was analyzed by immunohistochemistry in primary colorectal carcinomas from a consecutively collected, population-representative cohort and a randomized phase III study on adjuvant 5-fluorouracil/levamisole. Sensitivity to treatment with 5-fluorouracil in S100A4 knockdown cells was investigated using 2D and 3D cell culture assays. Strong nuclear expression of S100A4 was detected in 19% and 23% of the tumors in the two study cohorts, respectively. In both cohorts, nuclear immunoreactivity was associated with reduced relapse-free (P < 0.001 and P = 0.010) and overall survival (P = 0.046 and P = 0.006) in univariate analysis. In multivariate analysis, nuclear S100A4 was a predictor of poor relapse-free survival in the consecutive series (P = 0.002; HR 1.9), but not in the randomized study. Sensitivity to treatment with 5-fluorouracil was not affected by S100A4 expression in in vitro cell culture assays, and there was no indication from subgroup analyses in the randomized study that S100A4 expression was associated with increased benefit of adjuvant treatment with 5-fluorouracil/levamisole. The present study confirms that nuclear S100A4 expression is a negative prognostic biomarker in colorectal cancer, but the clinical utility in selection of patients for adjuvant fluoropyrimidine-based chemotherapy is limited. |
Feature Clustering for Anomaly Detection Using Improved Fuzzy Membership Function | Earlier research on anomaly detection has focused on classifiers such as kNN and SVM, applying existing distance measures to perform classification. Traditionally, intrusion detection systems (IDSs) have been developed by applying machine learning techniques with a single learning mechanism. This was later extended to IDSs that adopt multiple learning mechanisms, which achieved better detection rates than single-learning IDSs. Dimensionality is another serious concern that affects the performance of classification algorithms. Approaches such as feature selection, which select a subset of features from the feature set, have been studied and adopted. However, the feature extraction approach to dimensionality reduction has proved better than feature selection, achieving better classification and detection rates. In this research, we address feature extraction using evolutionary feature clustering by proposing a novel fuzzy membership function that addresses dimensionality reduction (DR). The idea is to transform the initial connection representation so that its equivalent representation has a reduced noise effect and achieves better classification and detection rates. Experimental results on KDD datasets with 19 and 41 attributes show that the proposed approach improves detection rates for the R2L and U2R attack classes when compared to the CANN, CLAPP, and SVM approaches. The CANN approach recorded lower detection rates for U2R and R2L attacks; this failure was addressed in our earlier studies by proposing CLAPP, which achieved comparatively better accuracy rates than CANN. The fuzzy membership function proposed in this paper recorded better classification and detection rates in the experiments conducted. |
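The abstract above does not reproduce the proposed membership function itself, so as a hedged illustration of what a fuzzy membership computation for feature clustering looks like, here is the standard fuzzy c-means membership update (not the paper's novel function): every sample receives a degree of membership in each cluster, inversely related to its distance from the cluster center.

```python
import numpy as np

def fuzzy_memberships(X, centers, m=2.0):
    """Standard fuzzy c-means membership (illustrative only; not the
    paper's proposed function). X: (n_samples, n_features) array;
    centers: (n_clusters, n_features) array; m: fuzzifier > 1.
    Returns an (n_samples, n_clusters) matrix whose rows sum to 1."""
    # Pairwise Euclidean distances, with a small epsilon to avoid
    # division by zero when a sample coincides with a center.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))  # closer centers get larger weight
    return inv / inv.sum(axis=1, keepdims=True)
```

With m=2 this reduces to inverse-squared-distance weighting; samples sitting on a center get membership near 1 in that cluster.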
P, NP and Mathematics – a computational complexity perspective | The P versus NP question distinguished itself as the central question of Theoretical Computer Science nearly four decades ago. The quest to resolve it, and more generally, to understand the power and limits of efficient computation, has led to the development of Computational Complexity Theory. While this mathematical discipline in general, and the P vs. NP problem in particular, have gained prominence within the mathematics community in the past decade, it is still largely viewed as a problem of Computer Science. In this paper I’ll try to explain why this problem, and others in computational complexity, are not only mathematical problems but also problems about mathematics, faced by the working mathematician. I will describe the underlying concepts and problems, the attempts to understand and solve them, and some of the research directions this led us to. I will explain some of the important results, as well as the major goals and conjectures which still elude us. All this will hopefully give a taste of the motivations, richness and interconnectedness of our field. I will conclude with a few non-computational problems, which capture P vs. NP and related computational complexity problems, hopefully inviting more mathematicians to attack them as well. I believe it important to give many examples, and to underline the intuition (and sometimes, philosophy) behind definitions and results. This may slow the pace of this article for some, in the hope of making it clearer to others. Mathematics Subject Classification (2000). Primary 68Q15; Secondary 68Q17. |
An alginate-antacid formulation (Gaviscon Double Action Liquid) can eliminate or displace the postprandial 'acid pocket' in symptomatic GERD patients. | BACKGROUND
Recently, an 'acid pocket' has been described in the proximal stomach, particularly evident postprandially in GERD patients, when heartburn is common. By creating a low density gel 'raft' that floats on top of gastric contents, alginate-antacid formulations may neutralise the 'acid pocket'.
AIM
To assess the ability of a commercial high-concentration alginate-antacid formulation to neutralize and/or displace the acid pocket in GERD patients.
METHODS
The 'acid pocket' was studied in ten symptomatic GERD patients. Measurements were made using concurrent stepwise pH pull-throughs, high resolution manometry and fluoroscopy in a semi-recumbent posture. Each subject was studied in three conditions: fasted, 20 min after consuming a high-fat meal and 20 min later after a 20 mL oral dose of an alginate-antacid formulation (Gaviscon Double Action Liquid, Reckitt Benckiser Healthcare, Hull, UK). The relative position of pH transition points (pH >4) to the EGJ high-pressure zone was analysed.
RESULTS
Most patients (8/10) exhibited an acidified segment extending from the proximal stomach into the EGJ when fasted that persisted postprandially. Gaviscon neutralised the acidified segment in six of the eight subjects shifting the pH transition point significantly away from the EGJ. The length and pressure of the EGJ high-pressure zone were minimally affected.
CONCLUSIONS
Gaviscon can eliminate or displace the 'acid pocket' in GERD patients. Considering that EGJ length was unchanged throughout, this effect was likely attributable to the alginate 'raft' displacing gastric contents away from the EGJ. These findings suggest the alginate-antacid formulation to be an appropriately targeted postprandial GERD therapy. |
Modeling and Simulation of Electric and Hybrid Vehicles | This paper discusses the need for modeling and simulation of electric and hybrid vehicles. Different modeling methods such as the physics-based Resistive Companion Form technique and the Bond Graph method are presented with powertrain component and system modeling examples. The modeling and simulation capabilities of existing tools such as Powertrain System Analysis Toolkit (PSAT), ADvanced VehIcle SimulatOR (ADVISOR), PSIM, and Virtual Test Bed are demonstrated through application examples. Since power electronics is indispensable in hybrid vehicles, the issue of numerical oscillations in dynamic simulations involving power electronics is briefly addressed. |
ISLES 2015 - A public evaluation benchmark for ischemic stroke lesion segmentation from multispectral MRI | Ischemic stroke is the most common cerebrovascular disease, and its diagnosis, treatment, and study relies on non-invasive imaging. Algorithms for stroke lesion segmentation from magnetic resonance imaging (MRI) volumes are intensely researched, but the reported results are largely incomparable due to different datasets and evaluation schemes. We approached this urgent problem of comparability with the Ischemic Stroke Lesion Segmentation (ISLES) challenge organized in conjunction with the MICCAI 2015 conference. In this paper we propose a common evaluation framework, describe the publicly available datasets, and present the results of the two sub-challenges: Sub-Acute Stroke Lesion Segmentation (SISS) and Stroke Perfusion Estimation (SPES). A total of 16 research groups participated with a wide range of state-of-the-art automatic segmentation algorithms. A thorough analysis of the obtained data enables a critical evaluation of the current state-of-the-art, recommendations for further developments, and the identification of remaining challenges. The segmentation of acute perfusion lesions addressed in SPES was found to be feasible. However, algorithms applied to sub-acute lesion segmentation in SISS still lack accuracy. Overall, no algorithmic characteristic of any method was found to perform superior to the others. Instead, the characteristics of stroke lesion appearances, their evolution, and the observed challenges should be studied in detail. The annotated ISLES image datasets continue to be publicly available through an online evaluation system to serve as an ongoing benchmarking resource (www.isles-challenge.org). |
Pattern Recognition Receptors and Inflammation | Infection of cells by microorganisms activates the inflammatory response. The initial sensing of infection is mediated by innate pattern recognition receptors (PRRs), which include Toll-like receptors, RIG-I-like receptors, NOD-like receptors, and C-type lectin receptors. The intracellular signaling cascades triggered by these PRRs lead to transcriptional expression of inflammatory mediators that coordinate the elimination of pathogens and infected cells. However, aberrant activation of this system leads to immunodeficiency, septic shock, or induction of autoimmunity. In this Review, we discuss the role of PRRs, their signaling pathways, and how they control inflammatory responses. |
GPS determined eastward Sundaland motion with respect to Eurasia confirmed by earthquake slip vectors at Sunda and Philippine trenches | GPS measurements acquired over Southeast Asia in 1994 and 1996 in the framework of the GEODYSSEA program revealed that a large piece of continental lithosphere comprising the Indochina Peninsula, Sunda shelf and part of Indonesia behaves as a rigid ‘Sundaland’ platelet. A direct adjustment of velocity vectors obtained in a Eurasian frame of reference shows that the Sundaland block is rotating clockwise with respect to Eurasia around a pole of rotation located south of Australia. We present here an additional check of Sundaland motion that uses earthquake slip vectors at the Sunda and Philippine trenches. Seven sites of the GEODYSSEA network are close to the trenches and not separated from them by large active faults (two at the Sumatra Trench, three at the Java Trench and two at the Philippine Trench). The difference between the vector at the station and the adjacent subducting plate vector defines the relative subduction motion and should thus be aligned with the subduction earthquake slip vectors. We first derive a frame-free solution that minimizes the upper plate (or Sundaland) motion. When corrected for Australia–Eurasia and Philippines–Eurasia NUVEL1-A motion, the misfit between GPS and slip vector azimuths is significant at 95% confidence, indicating that the upper plate does not belong to Eurasia. We then examine the range of solutions compatible with the slip vector azimuths and conclude that the minimum velocity of Sundaland is a uniform 7–10 mm/a eastward velocity. However, introducing the additional constraint of the fit of the GEODYSSEA sites with the Australian IGS reference ones, or the tie with the NTUS Singapore station, leads to a much narrower range of solutions. We conclude that Sundaland has an eastward velocity of about 10 mm/a on its southern boundary, increasing to 16–18 mm/a on its northern boundary. |
A graph based clustering technique for tweet summarization | Twitter is a very popular online social networking site, where hundreds of millions of tweets are posted every day by millions of users. Twitter is now considered as one of the fastest and most popular communication mediums, and is frequently used to keep track of recent events or news-stories. Whereas tweets related to a particular event / news-story can easily be found using keyword matching, many of the tweets are likely to contain semantically identical information. If a user wants to keep track of an event / news-story, it is difficult for him to have to read all the tweets containing identical or redundant information. Hence, it is desirable to have good techniques to summarize large number of tweets. In this work, we propose a graph-based approach for summarizing tweets, where a graph is first constructed considering the similarity among tweets, and community detection techniques are then used on the graph to cluster similar tweets. Finally, a representative tweet is chosen from each cluster to be included into the summary. The similarity among tweets is measured using various features including features based on WordNet synsets which help to capture the semantic similarity among tweets. The proposed approach achieves better performance than Sumbasic, an existing summarization technique. |
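The pipeline described above (similarity graph over tweets, clustering, one representative per cluster) can be sketched in a few lines. This is a hedged simplification: bag-of-words cosine similarity stands in for the paper's WordNet-synset features, and connected components stand in for its community detection; the threshold of 0.5 is an arbitrary illustrative choice.

```python
import numpy as np

def summarize(tweets, threshold=0.5):
    """Toy graph-based tweet summarization: connect tweets whose
    bag-of-words cosine similarity exceeds `threshold`, take connected
    components as clusters, and return the most central tweet of each."""
    vocab = sorted({w for t in tweets for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(tweets), len(vocab)))
    for r, t in enumerate(tweets):
        for w in t.lower().split():
            X[r, index[w]] += 1.0
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Xn @ Xn.T  # pairwise cosine similarity

    # Union-find over the similarity graph (connected components)
    parent = list(range(len(tweets)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(tweets)):
        for j in range(i + 1, len(tweets)):
            if sim[i, j] >= threshold:
                parent[find(i)] = find(j)

    # Representative per cluster: tweet with the highest total similarity
    # to its cluster-mates.
    clusters = {}
    for i in range(len(tweets)):
        clusters.setdefault(find(i), []).append(i)
    return [tweets[max(ms, key=lambda i: sum(sim[i, j] for j in ms))]
            for ms in clusters.values()]
```

Near-duplicate tweets collapse into one cluster, so the summary length equals the number of distinct stories found.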
A Survey on Parts of Speech Tagging for Indian Languages | Part of speech (POS) tagging is the process of automatically assigning a lexical category to each word according to its context and definition. Each word of a sentence is marked in the corpus as corresponding to a particular part of speech such as noun, verb, adjective, or adverb. POS tagging serves as a first step in natural language processing applications such as information extraction, parsing, and word sense disambiguation. This paper presents a survey of POS taggers used for Indian languages. The main problem of tagging is to find the proper way to tag each word according to its part of speech. Relatively little work has been done on POS tagging for Indian languages, mainly due to their morphological richness. In this paper, various techniques used for the development of POS taggers are discussed. |
Datalog: Bag Semantics via Set Semantics | Duplicates in data management are common and problematic. In this work, we present a translation of Datalog under bag semantics into a well-behaved extension of Datalog, the so-called warded Datalog±, under set semantics. From a theoretical point of view, this allows us to reason on bag semantics by making use of the well-established theoretical foundations of set semantics. From a practical point of view, this allows us to handle the bag semantics of Datalog by powerful, existing query engines for the required extension of Datalog. This use of Datalog± is extended to give a set semantics to duplicates in Datalog± itself. We investigate the properties of the resulting Datalog± programs, the problem of deciding multiplicities, and expressibility of some bag operations. Moreover, the proposed translation has the potential for interesting applications such as to Multiset Relational Algebra and the semantic web query language SPARQL with bag semantics. 2012 ACM Subject Classification Information systems → Query languages; Theory of computation → Logic; Theory of computation → Semantics and reasoning |
Using regional saliency for speech emotion recognition | In this paper, we show that convolutional neural networks can be directly applied to temporal low-level acoustic features to identify emotionally salient regions without the need for defining or applying utterance-level statistics. We show how a convolutional neural network can be applied to minimally hand-engineered features to obtain competitive results on the IEMOCAP and MSP-IMPROV datasets. In addition, we demonstrate that, despite their common use across most categories of acoustic features, utterance-level statistics may obfuscate emotional information. Our results suggest that convolutional neural networks with Mel Filterbanks (MFBs) can be used as a replacement for classifiers that rely on features obtained from applying utterance-level statistics. |
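The core operation the abstract relies on, applying a convolution directly to a temporal sequence of frame-level features so that high responses mark salient regions, can be illustrated with a minimal 1-D convolution. This is a toy stand-in, not the paper's network: no Mel filterbanks or learned weights are involved, and the kernel here is hand-picked.

```python
import numpy as np

def conv1d_valid(x, w):
    """Minimal 1-D 'valid'-mode convolution (cross-correlation) of a
    temporal feature sequence x with kernel w. The position of the
    maximum response marks the region the kernel responds to most."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])
```

For example, sliding a short summing kernel over a sequence that is zero except for a burst of activity yields its peak at the burst, which is the sense in which convolutional responses can localize emotionally salient regions without utterance-level statistics.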
State-Frequency Memory Recurrent Neural Networks | Modeling temporal sequences plays a fundamental role in various modern applications and has drawn more and more attention in the machine learning community. Among the efforts to improve the capability to represent temporal data, the Long Short-Term Memory (LSTM) has achieved great success in many areas. Although the LSTM can capture long-range dependency in the time domain, it does not explicitly model pattern occurrences in the frequency domain, which play an important role in tracking and predicting data points over various time cycles. We propose the State-Frequency Memory (SFM), a novel recurrent architecture that separates dynamic patterns across different frequency components and models their impacts on the temporal contexts of input sequences. By jointly decomposing memorized dynamics into state-frequency components, the SFM is able to offer a fine-grained analysis of temporal sequences by capturing the dependency of uncovered patterns in both time and frequency domains. Evaluations on several temporal modeling tasks demonstrate that the SFM can yield competitive performance, in particular as compared with state-of-the-art LSTM models. |
Secure routing for internet of things: A survey | The Internet of Things (IoT) can be described as the pervasive and global network which provides a system for the monitoring and control of the physical world through the collection, processing and analysis of data generated by IoT sensor devices. The number of connected devices is projected to grow exponentially to 50 billion by 2020. The main drivers for this growth are our everyday devices such as cars, refrigerators, fans, lights and mobile phones, as well as operational technologies, including the manufacturing infrastructures which are now becoming connected systems across the world. It is apparent that security will be a fundamental enabling factor for the successful deployment and use of most IoT applications, and in particular secure routing among IoT sensor nodes; mechanisms therefore need to be designed to provide secure routing communications for devices enabled by IoT technology. This survey analyzes existing routing protocols and mechanisms to secure routing communications in IoT, as well as the open research issues. We further analyze how existing approaches ensure secure routing in IoT, their weaknesses, threats to secure routing in IoT, and the open challenges and strategies for future research towards better secure IoT routing. |
Content or platform: Why do students complete MOOCs? | The advent of massive open online courses (MOOCs) poses new learning opportunities for learners as well as challenges for researchers and designers. MOOC students approach MOOCs in a range of fashions, based on their learning goals and preferred approaches, which creates new opportunities for learners but makes it difficult for researchers to figure out what a student’s behavior means, and makes it difficult for designers to develop MOOCs appropriate for all of their learners. Towards better understanding the learners who take MOOCs, we conduct a survey of MOOC learners’ motivations and correlate it to which students complete the course according to the pace set by the instructor/platform (which necessitates having the goal of completing the course, as well as succeeding in that goal). The results showed that course completers tend to be more interested in the course content, whereas non-completers tend to be more interested in MOOCs as a type of learning experience. Contrary to initial hypotheses, however, no substantial differences in mastery-goal orientation or general academic efficacy were observed between completers and non-completers. However, students who complete the course tend to have more self-efficacy for their ability to complete the course, from the beginning. |
An Agile Development Team's Quest for CMMI® Maturity Level 5 | Pragmatics performs agile development, has been rated at CMMI (Capability Maturity Model Integration) Maturity Level 4, and is striving to achieve CMMI Maturity Level (CML) 5. By maturing our agile disciplines, we feel we will not only improve the performance of our agile teams, which will ultimately benefit our agile development practices regardless of our appraisal rating, but will also lead to our being appraised at CML 5. This experience report describes the steps we are taking to improve our agile development disciplines, which we believe will lead to our being appraised at CML 5. |
Implications of combat casualty care for mass casualty events. | Violence from explosives and firearms results in mass casualty events in which the injured have multiple penetrating and soft tissue injuries. Events such as those in Boston, Massachusetts; Newtown, Connecticut; and Aurora, Colorado, as well as those in other locations, such as Europe and the Middle East, demonstrate that civilian trauma may at times resemble that seen in a combat setting. As the civilian sector prepares for and responds to these casualty scenarios, research and trauma practices that have emerged from the wars in Afghanistan and Iraq provide a valuable foundation for responding to civilian mass casualty events. Several lessons learned by the US military were implemented during the response to the bombings in Boston in April of this year. Military research has found that approximately 25% of persons who die as a result of explosive or gunshot wounds have potentially survivable wounds. These individuals have injuries that are not immediately or necessarily lethal and have a chance to survive if appropriate care is rendered in a timely fashion. The military has learned that implementation of evidence-based clinical practice guidelines can reduce potentially preventable death. Certain aspects of these lessons also apply to multiple casualty scenarios in civilian settings. The care of wounded military service personnel is based on an integrated trauma system and involves timely point-of-injury intervention, coordinated patient transport, whole blood or blood component-based resuscitation, and initial operations focused on control of hemorrhage and optimizing patient physiology. Referred to as damage control surgery, this approach involves abbreviated techniques instead of longer definitive operations. The principles of combat casualty care should be considered in 3 phases: point of injury, during transport to the hospital, and hospital-based treatment. The wars have highlighted the importance of a trauma system to coordinate these phases and improve survival. In implementing this strategy, the military developed the Joint Trauma System, which is designed to provide wounded troops an optimal chance for survival and recovery. |
Towards Secure and Privacy-Preserving Data Sharing in e-Health Systems via Consortium Blockchain | Electronic health record sharing can help to improve the accuracy of diagnosis, where security and privacy preservation are critical issues in the systems. In recent years, blockchain has been proposed as a promising solution to achieve personal health information (PHI) sharing with security and privacy preservation due to its advantage of immutability. This work proposes a blockchain-based secure and privacy-preserving PHI sharing (BSPP) scheme for diagnosis improvements in e-Health systems. Firstly, two kinds of blockchains, a private blockchain and a consortium blockchain, are constructed by devising their data structures and consensus mechanisms. The private blockchain is responsible for storing the PHI, while the consortium blockchain keeps records of the secure indexes of the PHI. In order to achieve data security, access control, privacy preservation and secure search, all the data, including the PHI, keywords and the patients’ identities, are public key encrypted with keyword search. Furthermore, the block generators are required to provide proof of conformance for adding new blocks to the blockchains, which guarantees system availability. Security analysis demonstrates that the proposed protocol can meet the security goals. Furthermore, we implement the proposed scheme on JUICE to evaluate the performance. |
Liver conversion of docosahexaenoic and arachidonic acids from their 18-carbon precursors in rats on a DHA-free but α-LNA-containing n-3 PUFA adequate diet. | The long-chain polyunsaturated fatty acids (PUFAs), eicosapentaenoic acid (EPA, 20:5n-3), docosahexaenoic acid (DHA, 22:6n-3), and arachidonic acid (AA, 20:4n-6), are critical for health. These PUFAs can be synthesized in liver from their plant-derived precursors, α-linolenic acid (α-LNA, 18:3n-3) and linoleic acid (LA, 18:2n-6). Vegetarians and vegans may have suboptimal long-chain n-3 PUFA status, and the extent of the conversion of α-LNA to EPA and DHA by the liver is debatable. We quantified liver conversion of DHA and other n-3 PUFAs from α-LNA in rats fed a DHA-free but α-LNA (n-3 PUFA) adequate diet, and compared results to conversion of LA to AA. [U-(13)C]LA or [U-(13)C]α-LNA was infused intravenously for 2 h at a constant rate into unanesthetized rats fed a DHA-free α-LNA adequate diet, and published equations were used to calculate kinetic parameters. The conversion coefficient k* of DHA from α-LNA was much higher than for AA from LA (97.2×10⁻³ vs. 10.6×10⁻³ min⁻¹), suggesting that liver elongation-desaturation is more selective for n-3 PUFA biosynthesis on a per molecule basis. The net daily secretion rate of DHA, 20.3 μmol/day, exceeded the reported brain DHA consumption rate by 50-fold, suggesting that the liver can maintain brain DHA metabolism with an adequate dietary supply solely of α-LNA. This infusion method could be used in vegetarians or vegans to determine minimal daily requirements of EPA and DHA in humans. |
Plant Growth Promoting Rhizobacteria: A Critical Review BS Saharan | Plant growth-promoting rhizobacteria (PGPR) are naturally occurring soil bacteria that aggressively colonize plant roots and benefit plants by providing growth promotion. Inoculation of crop plants with certain strains of PGPR at an early stage of development improves biomass production through direct effects on root and shoot growth. Inoculation of ornamentals, forest trees, vegetables, and agricultural crops with PGPR may result in multiple effects on early-season plant growth, as seen in the enhancement of seedling germination, stand health, plant vigor, plant height, shoot weight, nutrient content of shoot tissues, early bloom, chlorophyll content, and increased nodulation in legumes. PGPR are reported to influence growth, yield, and nutrient uptake by an array of mechanisms. They help in increasing nitrogen fixation in legumes, promote free-living nitrogen-fixing bacteria, increase the supply of other nutrients, such as phosphorus, sulphur, iron and copper, produce plant hormones, enhance other beneficial bacteria or fungi, control fungal and bacterial diseases, and help in controlling insect pests. There has been much research interest in PGPR, and an increasing number of PGPR are now being commercialized for various crops. Several reviews have discussed specific aspects of growth promotion by PGPR. In this review, we discuss various bacteria which act as PGPR, their mechanisms, and the desirable properties exhibited by them. |
Suppressing the influence of additive noise on the Kalman gain for low residual noise speech enhancement | In this paper, we present a detailed analysis of the Kalman filter for the application of speech enhancement and identify its shortcomings when the linear predictor model parameters are estimated from speech that has been corrupted with additive noise. We show that when only noise-corrupted speech is available, the poor performance of the Kalman filter may be attributed to the presence of large values in the Kalman gain during low speech energy regions, which cause a large degree of residual noise to be present in the output. These large Kalman gain values result from poor estimates of the LPCs due to the presence of additive noise. This paper presents the analysis and application of the Kalman gain trajectory as a useful indicator of Kalman filter performance, which can be used to motivate further methods of improvement. As an example, we analyse the previously-reported application of long and overlapped tapered windows using Kalman gain trajectories to explain the reduction and smoothing of residual noise in the enhanced output. In addition, we investigate further extensions, such as Dolph–Chebyshev windowing and iterative LPC estimation. This modified Kalman filter was found to have improved on the conventional and iterative versions of the Kalman filter in both objective and subjective testing. |
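The Kalman gain trajectory the abstract uses as a diagnostic can be illustrated with a scalar toy model, not the paper's full LPC-based filter: a first-order AR signal observed in additive noise, where the gain at each step balances the predicted error variance against the measurement noise variance. The parameter values below are arbitrary illustrations.

```python
import numpy as np

def kalman_gain_trajectory(a, q, r, n_steps=50):
    """Scalar Kalman recursion for x_k = a*x_{k-1} + w_k (process
    variance q), observed as y_k = x_k + v_k (measurement variance r).
    Returns the gain at each step; a gain near 1 means the output
    follows the noisy observation, so additive noise passes through."""
    p = 1.0  # initial error variance
    gains = []
    for _ in range(n_steps):
        p_pred = a * p * a + q        # predicted error variance
        k = p_pred / (p_pred + r)     # Kalman gain
        p = (1.0 - k) * p_pred        # updated error variance
        gains.append(k)
    return np.array(gains)
```

When the modeled signal variance is large relative to the noise variance the gain sits near 1 (little smoothing, residual noise in the output); when it is small the gain stays low and the filter suppresses the observation noise, which mirrors the abstract's link between large gains and residual noise.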
Defining the role of salt bridges in protein stability. | Although the energetic balance of forces stabilizing proteins has been established qualitatively over the last decades, quantification of the energetic contribution of particular interactions still poses serious problems. The reasons are the strong cooperativity and the interdependence of noncovalent interactions. Salt bridges are a typical example. One expects that ionizable side chains frequently form ion pairs in innumerable crystal structures. Since electrostatic attraction between opposite charges is strong per se, salt bridges can intuitively be regarded as an important factor stabilizing the native structure. Is that really so? In this chapter we critically reassess the available methods to delineate the role of electrostatic interactions and salt bridges in protein stability, and discuss the progress and the obstacles in this endeavor. The basic problem is that formation of salt bridges depends on the ionization properties of the participating groups, which are significantly influenced by the protein environment. Furthermore, salt bridges experience thermal fluctuations, continuously break and re-form, and their lifespan in solution is governed by the flexibility of the protein. Finally, electrostatic interactions are long-range and might be significant in the unfolded state, thus seriously influencing the energetic profile. Elimination of salt bridges by protonation/deprotonation at extreme pH or by mutation provides only rough energetic estimates, since there is no way to account for the nonadditive response of the protein moiety. From what we know so far, the strength of electrostatic interactions is strongly context-dependent, yet it is unlikely that salt bridges are dominant factors governing protein stability. Nevertheless, proteins from thermophiles and hyperthermophiles exhibit more, and frequently networked, salt bridges than proteins from their mesophilic counterparts. Increasing the thermal (not the thermodynamic) stability of proteins by optimization of charge-charge interactions is a good example of an evolutionary solution utilizing physical factors. |
FingerFlux: near-surface haptic feedback on tabletops | We introduce FingerFlux, an output technique to generate near-surface haptic feedback on interactive tabletops. Our system combines electromagnetic actuation with permanent magnets attached to the user's hand. FingerFlux lets users feel the interface before touching, and can create both attracting and repelling forces. This enables applications such as reducing drifting, adding physical constraints to virtual controls, and guiding the user without visual output. We show that users can feel vibration patterns up to 35 mm above our table, and that FingerFlux can significantly reduce drifting when operating on-screen buttons without looking. |
Theoretical construction of Morris-Thorne wormholes compatible with quantum field theory | This paper completes and extends some earlier studies by the author to show that Morris-Thorne wormholes are compatible with quantum field theory. The strategy is to strike a balance between reducing the size of the unavoidable exotic region and the degree of fine-tuning of the metric coefficients required to achieve this reduction, while simultaneously satisfying the constraints from quantum field theory. The fine-tuning also serves to satisfy various traversability criteria such as tidal constraints and proper distances through the wormhole. The degree of fine-tuning turns out to be a generic feature of the type of wormhole discussed. |
On gastrointestinal nematodes of Mongolian gazelle (Procapra gutturosa) | The species composition of nematodes found at autopsy of the abomasa and small intestines of 24 Mongolian gazelles in Eastern Mongolia is studied. Orloffia bisonis, Marshallagia mongolica, Nematodirus archari, N. andreevi, Trichostrongylus colubriformis, and T. probolurus were registered. N. archari and N. andreevi were detected in Mongolian gazelle for the first time. All species of gastrointestinal nematodes found in Mongolian gazelles have already been registered in domestic ruminants of Mongolia. The validity of the genus Orloffia is confirmed based on our own observations and literature data. Orloffia is a monotypic genus whose only species, O. bisonis, is represented by two morphs, “O. bisonis” being the major and “O. kasakhstanica” the minor morph. |
Severe weight loss in 3 months after allogeneic hematopoietic SCT was associated with an increased risk of subsequent non-relapse mortality | Patients after allogeneic hematopoietic SCT (HSCT) are at risk of malnutrition. To assess the impact of malnutrition after allogeneic HSCT on transplant outcomes, we conducted a retrospective study. Adult patients who received allogeneic HSCT from 2000 to 2009 for standard-risk leukemia and achieved disease-free survival up to 3 months after allogeneic HSCT were included. From participating centers, 145 patients were enrolled. Median age was 46 years (19–68). Patients were classified based on weight loss during 3 months after allogeneic HSCT as follows: normal group (weight loss <5%, n=53), mild malnutrition group (5%⩽weight loss<10%, n=47), severe malnutrition group (10% ⩽weight loss, n=45). The cumulative incidences of 2-year nonrelapse mortality (NRM) were 3.8% in the normal group, 8.5% in the mild malnutrition group and 27.3% in the severe malnutrition group. The probabilities of a 2-year OS were 73.2% in the normal group, 74.5% in the mild malnutrition group and 55.3% in the severe malnutrition group. In multivariate analysis, severe malnutrition was associated with an increased risk of NRM and a worse OS. In conclusion, weight loss ⩾10% was associated with a worse clinical outcome. Prospective studies that identify patients at risk of malnutrition and intervention by a nutritional support team are warranted. |
Effectiveness of Couple-Based HIV Counseling and Testing for Women Substance Users and Their Primary Male Partners: A Randomized Trial | A randomized trial was conducted to test the effectiveness of couple-based HIV counseling and testing (CB-HIV-CT) and women-only relationship-focused HIV counseling and testing (WRF-HIV-CT) in reducing HIV risk compared to the National Institute on Drug Abuse HIV-CT standard intervention. Substance using HIV-negative women and their primary heterosexual partner (N = 330 couples) were randomized to 1 of the 3 interventions. Follow-up assessments measuring HIV risk behaviors and other relevant variables were conducted at 3- and 9-months postintervention. Repeated measures generalized linear mixed model analysis was used to assess treatment effects. A significant reduction in HIV risk was observed over the 9-month assessment in the CB-HIV-CT group compared to that of the control group (b = -0.51, t[527] = -3.20, P = 0.002) and compared to that of the WRF-HIV-CT group (b = -0.34, t[527] = -2.07, P = 0.04), but no significant difference was observed between WRF-HIV-CT and controls (b = -0.17, t[527] = -1.09, P = 0.28). A brief couple-based HIV counseling and testing intervention designed to address both drug-related and sexual risk behaviors among substance using women and their primary male partners was shown to be more effective at reducing overall HIV risk compared to a standard HIV-CT intervention in an urban setting. |
DesignCon 2005: Decoupling Capacitance Platform for Substrates, Sockets, and Interposers | As computer system signal speeds continue to rise, the increasing burden on power delivery networks has prompted power integrity to assume a leading focus, along with signal integrity, in system design. Our solution to power integrity is a novel structural integration of decoupling capacitance between the core power nets and ground to enhance core power delivery. This decoupling capacitance replaces the numerous capacitors suboptimally placed on traditional printed circuit boards (PCBs). Lowered power supply noise and increased core power stability result, permitting greater semiconductor switching frequency while reducing overall system cost. Studying actual system applications, we compare this technology to a wide range of expensive and largely ineffective decoupling strategies that have been deployed and continue to be proposed, and demonstrate its superiority in both cost and performance. |
Setting Up Pepper For Autonomous Navigation And Personalized Interaction With Users | In this paper we present our work with the Pepper robot, a service robot from SoftBank Robotics. We had two main goals in this work: improving the autonomy of this robot by increasing its awareness of the environment, and enhancing the robot's ability to interact with its users. To achieve these goals, we used ROS, a modern open-source framework for developing robotics software, to provide Pepper with state-of-the-art localization and navigation capabilities. Furthermore, we contribute an architecture for effective human interaction based on cloud services. Our architecture improves Pepper's speech recognition capabilities by connecting it to the IBM Bluemix Speech Recognition service and enables the robot to recognize its users via an in-house face recognition web service. We show examples of our successful integration of ROS and IBM services with Pepper's own software. As a result, we were able to make Pepper move autonomously in an environment with humans and obstacles. We were also able to have Pepper execute spoken commands from known users as well as newly introduced users who were enrolled in the robot's list of trusted users via a multi-modal interface. |
Emotional intelligence as a standard intelligence. | The authors have claimed that emotional intelligence (EI) meets traditional standards for an intelligence (J. D. Mayer, D. R. Caruso, & P. Salovey, 1999). R. D. Roberts, M. Zeidner, and G. Matthews (2001) questioned whether that claim was warranted. The central issue raised by Roberts et al. concerning Mayer et al. (1999) is whether there are correct answers to questions on tests purporting to measure EI as a set of abilities. To address this issue (and others), the present authors briefly restate their view of intelligence, emotion, and EI. They then present arguments for the reasonableness of measuring EI as an ability, indicate that correct answers exist, and summarize recent data suggesting that such measures are, indeed, reliable. |
Anti-inflammatory activity of Syzygium cumini seed | Syzygium cumini (Myrtaceae) is a popular traditional medicinal plant in India. This study was intended to evaluate the anti-inflammatory activity of ethyl acetate and methanol extracts of S. cumini seed in carrageenan-induced paw oedema in Wistar rats at dose levels of 200 and 400 mg/kg administered orally. Both extracts exhibited significant anti-inflammatory activity, which supports the traditional medicinal utilization of the plant. This study established the anti-inflammatory activity of the seed of S. cumini. |
Cognitive control and lexical access in younger and older bilinguals. | Ninety-six participants, who were younger (20 years) or older (68 years) adults and either monolingual or bilingual, completed tasks assessing working memory, lexical retrieval, and executive control. Younger participants performed most of the tasks better than older participants, confirming the effect of aging on these processes. The effect of language group was different for each type of task: Monolinguals and bilinguals performed similarly on working memory tasks, monolinguals performed better on lexical retrieval tasks, and bilinguals performed better on executive control tasks, with some evidence for larger language group differences in older participants on the executive control tasks. These results replicate findings from individual studies obtained using only 1 type of task and different participants. The confirmation of this pattern in the same participants is discussed in terms of a suggested explanation of how the need to manage 2 language systems leads to these different outcomes for cognitive and linguistic functions. |
A Randomized, Double-Blind, Placebo-Controlled Study to Assess Efficacy and Safety of 0.5 mg and 1 mg Alosetron in Women With Severe Diarrhea-predominant IBS | OBJECTIVE:Alosetron is indicated for women with chronic, severe diarrhea-predominant IBS (d-IBS) who have not responded adequately to conventional therapy. Constipation is the most common adverse event with alosetron treatment. Multiple dosing regimens were assessed in a randomized, double-blind, placebo-controlled study (S3B30040) to determine efficacy and tolerability and to evaluate the constipation rate.METHODS:705 women with severe d-IBS were randomized to placebo, alosetron 0.5 mg once daily, 1 mg once daily, or 1 mg twice daily for 12 wk. The primary end point was the proportion of week 12 responders (patients with moderate or substantial improvement in IBS symptoms) on the 7-point Likert Global Improvement Scale (GIS). Secondary end points were the average rate of adequate relief of IBS pain and discomfort, and bowel symptom improvements.RESULTS:The proportion of GIS responders at week 12 (primary time point) was significantly greater in all alosetron groups compared with placebo (54/176 [30.7%], 90/177 [50.8%], 84/175 [48%], and 76/177 [42.9%] for the placebo, 0.5 mg, 1 mg once daily, and 1 mg twice daily alosetron groups, respectively; P ≤ 0.02). Results were similar for the average adequate relief rate (treatment effects ≥12%, P ≤ 0.038). Bowel symptoms were improved in all alosetron groups. Constipation was the most common adverse event (9%, 16%, and 19% of patients in the 0.5 mg, 1 mg once daily, and 1 mg twice daily groups, respectively). One event of intestinal obstruction and one of ischemic colitis occurred in the 0.5 mg group, and one event of fecal impaction occurred in the 1 mg twice-daily group.
All were self-limited and resolved without sequelae.CONCLUSION:Alosetron 0.5 mg and 1 mg once daily as well as 1 mg twice daily are effective in providing global improvement in IBS symptoms, adequate relief of IBS pain and discomfort, and improvement in bowel symptoms in women with severe d-IBS. Lower dosing regimens resulted in a decreased constipation rate. |
The Semantics of the C Programming Language | We present formal operational semantics for the C programming language. Our starting point is the ANSI standard for C as described in [KR]. Knowledge of C is not necessary (though it may be helpful) for comprehension, since we explain all relevant aspects of C as we proceed. Our operational semantics is based on evolving algebras. An exposition on evolving algebras can be found in the tutorial [Gu]. In order to make this paper self-contained, we recall the notion of a (sequential) evolving algebra in Sect. 0.1. Our primary concern here is with semantics, not syntax. Consequently, we assume that all syntactic information regarding a given program is available to us at the beginning of the computation (via static functions). We intended to cover all constructs of the C programming language, but not the C standard library functions (e.g. fprintf(), fscanf()). It is not difficult to extend our description of C to include any desired library function or functions. Evolving algebra semantic specifications may be provided on several abstraction levels for the same language. Having several such algebras is useful, for one can examine the semantics of a particular feature of a programming language at the desired level of abstraction, with unnecessary details omitted. It also makes comprehension easier. We present a series of four evolving algebras, each a refinement of the previous one. The final algebra describes the C programming language in full detail. Our four algebras focus on the following topics respectively:
Variability in Treatment of Post-coarctectomy Hypertension: A Multicenter Study | Many pharmacologic therapies are available for treatment of post-coarctectomy hypertension in pediatric patients, which may lead to variability in care. Evaluation of trends in pharmacotherapy is necessary to evaluate quality of care. The Pediatric Health Information System database was queried from 2004 to 2013 for patients >30 days of age who had an ICD-9 code for coarctation of the aorta, underwent repair of coarctation by end-to-end anastomosis, and had a RACHS-1 score of 2. Patients were excluded if they were admitted for >30 days, underwent mechanical circulatory support, or expired during the admission. Patient demographic and hospital data were collected along with antihypertensive pharmacotherapy. Trends in antihypertensive, analgesic, and sedative pharmacotherapy were evaluated, and multivariable statistical analysis was used to determine variables that significantly influenced cost. A total of 1636 patients [66.6 % male, median age 1.5 years (IQR 0.31–5.3)] met study criteria. Patients received a median of 3 (IQR 2–4) antihypertensive medications for a median of 8 days (IQR 5–11). Intravenous antihypertensive therapy was prescribed for a median of 3 days (IQR 2–5) and oral therapy for a median of 1 day (IQR 1–2). Antihypertensive therapy was continued at discharge in 79.8 % of patients. Hospital cost increased by 36 % over the study period (p < 0.01), and nicardipine, dexmedetomidine, and intravenous acetaminophen were most strongly associated with increased cost (p < 0.001). Variability in the pharmacotherapy of post-coarctectomy hypertension in pediatric patients exists, and the use of newer agents may be influencing the cost of care. |
Simultaneous Localisation and Mapping (SLAM) Part 2: State of the Art | This tutorial provides an introduction to the Simultaneous Localisation and Mapping (SLAM) method and the extensive research on SLAM that has been undertaken. Part I of this tutorial described the essential SLAM problem. Part II of this tutorial (this paper) is concerned with recent advances in computational methods and in new formulations of the SLAM problem for large-scale and complex environments. |
PUFatt: Embedded platform attestation based on novel processor-based PUFs | Software-based attestation schemes aim at proving the integrity of code and data residing on a platform to a verifying party. However, they do not bind the hardware characteristics to the attestation protocol and are vulnerable to impersonation attacks.
We present PUFatt, a new automatable method for linking software-based attestation to intrinsic device characteristics by means of a novel processor-based Physically Unclonable Function, which enables secure timed (and even remote) attestation, particularly suitable for embedded and low-cost devices. Our proof-of-concept implementation on FPGA demonstrates the effectiveness, applicability and practicability of the approach. |
IT4BI Master Thesis: Representing ETL Flows with BPMN 2.0 | Extract, Transform and Load (ETL) processes are widely used in Data Warehousing in order to extract, cleanse and load data into a centralized location for better analysis and decision-making. As users become more demanding about on-line decision making, ETL processes grow larger and more complex. Most processes are deployed at the physical level without any abstraction, thus costs of maintenance and efforts for reuse are considerable. Therefore, having logical and conceptual abstractions of ETL processes makes such tasks substantially easier. In this thesis, given a logical ETL representation, we provide an algorithm that automatically translates logical ETL flows into their BPMN representation. To achieve this goal, we create a dictionary that defines simple and composite ETL flow patterns and their corresponding BPMN elements. The pattern dictionary follows a formalized grammar and can be further extended with additional ETL flow patterns. As a result, we can produce conceptual ETL flows in BPMN 2.0 format that can be further edited by the business user. The patterns defined in the dictionary help to move away from the technical details and complexity of the ETL flows and make the output model semantics more intuitive and understandable for business users, as shown during the approach validation. |
Opening the Black Box: Strategies for Increased User Involvement in Existing Algorithm Implementations | An increasing number of interactive visualization tools stress the integration with computational software like MATLAB and R to access a variety of proven algorithms. In many cases, however, the algorithms are used as black boxes that run to completion in isolation which contradicts the needs of interactive data exploration. This paper structures, formalizes, and discusses possibilities to enable user involvement in ongoing computations. Based on a structured characterization of needs regarding intermediate feedback and control, the main contribution is a formalization and comparison of strategies for achieving user involvement for algorithms with different characteristics. In the context of integration, we describe considerations for implementing these strategies either as part of the visualization tool or as part of the algorithm, and we identify requirements and guidelines for the design of algorithmic APIs. To assess the practical applicability, we provide a survey of frequently used algorithm implementations within R regarding the fulfillment of these guidelines. While echoing previous calls for analysis modules which support data exploration more directly, we conclude that a range of pragmatic options for enabling user involvement in ongoing computations exists on both the visualization and algorithm side and should be used. |
Learning Generative End-to-end Dialog Systems with Knowledge | Dialog systems are intelligent agents that can converse with humans in natural language and assist them. Traditional dialog systems follow a modular approach and often have trouble expanding to new or more complex domains, which hinders the development of more powerful future dialog systems. This dissertation targets an ambitious goal: to create domain-agnostic learning algorithms and dialog models that can continuously learn to converse in any new domain and require only a small amount of new data for the adaptation. Achieving this goal first requires powerful statistical models that are expressive enough to model the natural language and decision-making process in dialogs of many domains; and second requires a learning framework enabling models to share knowledge from previous experience so they can learn to converse in new domains with limited data. End-to-end (E2E) generative dialog models based on encoder-decoder neural networks are strong candidates for the first requirement. The basic idea is to use an encoder network to map a dialog context into a learned distributed representation and then use a decoder network to generate the next system response. These models are not restricted to hand-crafted intermediate states and can in principle generalize to novel system responses that are not observed in training. However, it is far from trivial to build a full-fledged dialog system using encoder-decoder models. Thus, in the first stage of this thesis, we develop a set of novel neural network architectures that offer key properties required by dialog modeling. Experiments prove that the resulting system can interact with both users and symbolic knowledge bases, model complex dialog policy, and reason over long discourse history. We tackle the second requirement by proposing a novel learning with knowledge (LWK) framework that can adapt the proposed system to new domains with minimum data.
Two types of knowledge are studied: 1) domain knowledge from human experts and 2) models' knowledge from learning related domains. To incorporate this knowledge, a domain description that can compactly encode domain knowledge is first proposed. Then we develop novel domain-aware models and training algorithms to teach the system to learn from data in related domains and generalize to unseen ones. Experiments show the proposed framework is able to achieve strong performance in new domains with limited, even zero, in-domain training data. In conclusion, this dissertation shows that by combining specialized encoder-decoder models with the proposed LWK framework, E2E generative dialog models can be readily applied in complex dialog applications and can be easily expanded to new domains with extremely limited resources, which we believe is an important step towards future general-purpose conversational agents that are more natural and intelligent. |
Torque estimation technique of robotic joint with harmonic drive transmission | A joint torque estimation technique utilizing the existing structural elasticity of robotic joints with harmonic drive transmission is proposed in this paper. Joint torque sensing is one of the key techniques for achieving high-performance robot control, especially for robots working in unstructured environments. The proposed joint torque estimation technique uses link-side position measurement along with a proposed harmonic drive model to realize stiff and sensitive torque estimation. The proposed joint torque estimation method has been experimentally studied in comparison with a commercial torque sensor, and the results have attested to the effectiveness of the proposed torque estimation technique. |
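The elasticity-based idea in the abstract above can be illustrated with a simple lumped-stiffness model: torque is inferred from the elastic twist across the harmonic drive, tau = k * (theta_motor / N - theta_link). This is only an illustrative sketch; the gear ratio, stiffness value, and linear-spring assumption below are hypothetical, and the paper's actual harmonic drive model is more detailed.

```python
def estimate_joint_torque(theta_motor, theta_link, gear_ratio=100.0,
                          stiffness=3.0e4):
    """Estimate joint torque (N*m) from the elastic deflection of a
    harmonic drive, modeled here as a linear torsional spring.

    theta_motor : motor-side angle in rad (before gear reduction)
    theta_link  : link-side angle in rad (from a link-side encoder)
    gear_ratio  : reduction ratio N of the harmonic drive (assumed)
    stiffness   : torsional stiffness k in N*m/rad (assumed)
    """
    deflection = theta_motor / gear_ratio - theta_link  # twist across drive
    return stiffness * deflection

# 0.5 mrad of twist across a 30 kN*m/rad drive -> 15 N*m of joint torque
tau = estimate_joint_torque(theta_motor=10.05, theta_link=0.1)
```

A real implementation would also compensate for kinematic error and nonlinear compliance of the flexspline, which is what makes the measurement "stiff and sensitive" in the paper's terms.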
High-Frame-Rate Synthetic Aperture Ultrasound Imaging Using Mismatched Coded Excitation Waveform Engineering: A Feasibility Study | Mismatched coded excitation (CE) can be employed to increase the frame rate of synthetic aperture ultrasound imaging. The high autocorrelation and low cross correlation (CC) of the transmitted signals enable the identification and separation of signal sources at the receiver. Thus, the method provides B-mode imaging with simultaneous transmission from several elements and the capability of spatially decoding the transmitted signals, which makes the imaging process equivalent to consecutive transmissions. Each transmission generates its own image, and the combination of all the images results in an image with a high lateral resolution. In this paper, we introduce two different methods for generating multiple mismatched CEs with an identical frequency bandwidth and code length. Therefore, the proposed families of mismatched CEs are able to generate similar resolutions and signal-to-noise ratios. The application of these methods is demonstrated experimentally. Furthermore, several techniques are suggested that can be used to reduce the CC between the mismatched codes. |
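The separation principle described above rests on code pairs with a sharp autocorrelation peak and low cross correlation. The sketch below demonstrates that property with two pseudo-random binary codes; the codes and the length 127 are hypothetical stand-ins, not the waveforms engineered in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 127
# Two hypothetical binary (+1/-1) excitation codes of equal length
code_a = rng.choice([-1.0, 1.0], size=N)
code_b = rng.choice([-1.0, 1.0], size=N)

def xcorr_peak(x, y):
    """Peak magnitude of the full cross-correlation of x and y."""
    return np.abs(np.correlate(x, y, mode="full")).max()

auto_peak = xcorr_peak(code_a, code_a)   # equals N at zero lag
cross_peak = xcorr_peak(code_a, code_b)  # much smaller for a good code pair
```

The ratio of `auto_peak` to `cross_peak` is what lets the receiver attribute echoes to the correct transmit element; the paper's CC-reduction techniques aim to push `cross_peak` still lower.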
Formaldehyde exposure of medical students and instructors and clinical symptoms during gross anatomy laboratory in Thammasat University. | To study formaldehyde concentrations in the breathing zone and symptoms induced by gaseous formaldehyde exposure of medical students and instructors during the gross anatomy laboratory at the Faculty of Medicine, Thammasat University. Formaldehyde concentrations in the indoor air and breathing zone of medical students were measured during cadaver dissection. Formaldehyde concentrations in the indoor air and in the breathing zone ranged from 0.401 to 0.581 ppm (mean 0.491 +/- 0.090) and from 0.472 to 0.848 ppm (mean 0.660 +/- 0.188), respectively. The mean formaldehyde concentration in the breathing zone of medical students and instructors was significantly higher than the mean formaldehyde concentration in indoor air (p < 0.05). The most common symptoms were general fatigue (82.7-87.8%), burning eyes (66.2-85.0%) and burning nose (62.5-81.1%). There was no statistically significant difference in the burning-eye symptom between contact lens users and non-users (p > 0.05). Even though formaldehyde concentrations were relatively low, medical students, instructors and cadaver-related workers should wear personal protective devices to reduce the effects of gaseous formaldehyde exposure during the gross anatomy laboratory or when in contact with cadavers. |
Coherent Online Video Style Transfer | Training a feed-forward network for the fast neural style transfer of images has proven successful, but the naive extension of processing videos frame by frame is prone to producing flickering results. We propose the first end-to-end network for online video style transfer, which generates temporally coherent stylized video sequences in near real time. Two key ideas include an efficient network incorporating short-term coherence, and propagating short-term coherence to long-term coherence, which ensures consistency over a longer period of time. Our network can incorporate different image stylization networks and clearly outperforms the per-frame baseline both qualitatively and quantitatively. Moreover, it can achieve visually comparable coherence to optimization-based video style transfer, but is three orders of magnitude faster. |
In vivo validation of a catheter-based near-infrared spectroscopy system for detection of lipid core coronary plaques: initial results of the SPECTACL study. | OBJECTIVES
To determine whether catheter-based near-infrared spectroscopy (NIRS) signals obtained with a novel catheter-based system from the coronary arteries of patients are similar to those from autopsy specimens, and to assess the initial safety of the NIRS device.
BACKGROUND
An intravascular NIRS system for detection of lipid core-containing plaques (LCP) has been validated in human coronary autopsy specimens. The SPECTACL (SPECTroscopic Assessment of Coronary Lipid) trial was a parallel first-in-human multicenter study designed to demonstrate the applicability of the LCP detection algorithm in living patients.
METHODS
Intracoronary NIRS was performed in patients undergoing percutaneous coronary intervention. Acquired spectra were blindly compared with autopsy NIRS signals with multivariate statistics. To meet the end point of spectral similarity, at least two-thirds of the scans were required to have >80% of spectra similar to the autopsy spectra.
RESULTS
A total of 106 patients were enrolled; there were no serious adverse events attributed to NIRS. Spectroscopic data could not be obtained in 17 (16%) patients due to technical limitations, leaving 89 patients for analysis. Spectra from 30 patients were unblinded to test the calibration of the LCP detection algorithm. Of the remaining 59 blinded cases, after excluding 11 due to inadequate data, spectral similarity was demonstrated in 40 of 48 spectrally adequate scans (83% success rate, 95% confidence interval: 70% to 93%, median spectral similarity/pullback: 96%, interquartile range 10%). The LCP was detected in 58% of 60 spectrally similar scans from both cohorts.
CONCLUSIONS
This intravascular NIRS system safely obtained spectral data in patients that were similar to those from autopsy specimens. These results demonstrate the feasibility of invasive detection of coronary LCP with this novel system. (SPECTACL: SPECTroscopic Assessment of Coronary Lipid; NCT00330928). |
Learning From Japan About The Nurturance Gap In America | If we confine attention to the inner dynamics of advanced monopoly capitalism, it is hard to avoid the conclusion that the prospect … would be the spread of increasingly severe psychic disorders leading to the impairment and eventual breakdown of the system's ability to function even on its own terms — Paul Baran and Paul Sweezy. Pareto optimality is faint praise indeed — A. K. Sen |
Characterizing and modeling internet traffic dynamics of cellular devices | Understanding Internet traffic dynamics in large cellular networks is important for network design, troubleshooting, performance evaluation, and optimization. In this paper, we present the results from our study, which is based upon week-long aggregated flow-level mobile device traffic data collected from a major cellular operator's core network. In this study, we measure and characterize the spatial and temporal dynamics of mobile Internet traffic. We distinguish our study from other related work by conducting the measurement at a larger scale and exploring mobile data traffic patterns along two new dimensions -- device types and the applications that generate such traffic patterns. Based on the findings of our measurement analysis, we propose a Zipf-like model to capture the volume distribution of application traffic and a Markov model to capture the volume dynamics of aggregate Internet traffic. We further customize our models for different device types using an unsupervised clustering algorithm to improve prediction accuracy. |
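A Zipf-like volume distribution of the kind proposed above can be sketched by fitting a power law to rank-ordered per-application volumes. The volumes and the exponent 1.2 below are synthetic stand-ins, not values from the paper's measured dataset.

```python
import numpy as np

def zipf_like_fit(volumes):
    """Fit log(volume) = log(C) - s*log(rank) by least squares and
    return the Zipf exponent s for rank-ordered traffic volumes."""
    v = np.sort(np.asarray(volumes, dtype=float))[::-1]  # rank order
    ranks = np.arange(1, len(v) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(v), 1)
    return -slope  # slope is negative for a decaying rank-volume curve

# Synthetic per-application volumes following an exact Zipf law, s = 1.2
ranks = np.arange(1, 101)
volumes = 1e6 * ranks ** -1.2
s_hat = zipf_like_fit(volumes)
```

On real traffic data the fit would of course be noisy, and the exponent would vary by device type, which is what motivates the per-cluster customization the authors describe.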
Maximum likelihood thresholding based on population mixture models | Maximum likelihood thresholding methods are presented on the basis of population mixture models. It turns out that the standard thresholding proposed by Otsu, which is based on a discriminant criterion and also minimizes the mean square errors between the original image and the resultant binary image, is equivalent to the maximization of the likelihood of the conditional distribution in the population mixture model under the assumption of normal distributions with a common variance. It is also shown that Kittler and Illingworth's thresholding, which minimizes a criterion related to the average classification error rate assuming normal distributions with different variances, is equivalent to the maximization of the likelihood of the joint distribution in the population mixture model. A multi-thresholding algorithm based on dynamic programming is also presented. Keywords: thresholding, maximum likelihood, mixture models, dynamic programming |
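The Otsu criterion referred to above can be made concrete with a minimal threshold search: maximizing between-class variance, which under the equal-variance Gaussian mixture assumption is the maximum likelihood choice. This is an illustrative sketch for 8-bit images, not the paper's code.

```python
import numpy as np

def otsu_threshold(image):
    """Exhaustively search for the threshold maximizing between-class
    variance (the discriminant criterion of Otsu's method)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_sigma_b = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class-0 mean
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1   # class-1 mean
        sigma_b = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if sigma_b > best_sigma_b:
            best_t, best_sigma_b = t, sigma_b
    return best_t

# Bimodal toy "image": two Gaussian populations around levels 60 and 180
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(60, 10, 5000),
                              rng.normal(180, 10, 5000)]),
              0, 255).astype(np.uint8)
t = otsu_threshold(img)  # lands between the two modes, near 120
```

Kittler and Illingworth's criterion differs by allowing class-specific variances, which corresponds to maximizing the joint-distribution likelihood instead.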
High-Efficiency Class-E Power Amplifier for Wireless Power Transfer System | This paper presents a single-ended Class-E power amplifier for wireless power transfer systems. The power amplifier is designed with a low-cost power MOSFET and a high-Q inductor. It adopts a second-harmonic filter in the output matching network. The proposed Class-E power amplifier has a low second-harmonic level thanks to this filter. Also, we designed an input driver with a single supply voltage for driving the Class-E power amplifier. The implemented Class-E power amplifier delivers an output power of 40.8 dBm and a high efficiency of 90.3% for the 6.78 MHz input signal. Index Terms — Class-E power amplifier, high-efficiency amplifier, wireless power transfer, harmonic filter |
Task-oriented Word Embedding for Text Classification | Distributed word representation plays a pivotal role in various natural language processing tasks. In spite of its success, most existing methods only consider contextual information, which is suboptimal when used in various tasks due to a lack of task-specific features. Rational word embeddings should be able to capture both the semantic features and the task-specific features of words. In this paper, we propose a task-oriented word embedding method and apply it to the text classification task. With the function-aware component, our method regularizes the distribution of words to enable the embedding space to have a clear classification boundary. We evaluate our method using five text classification datasets. The experimental results show that our method significantly outperforms the state-of-the-art methods. |
The effects of testosterone treatment on body composition and metabolism in middle-aged obese men. | Twenty-three middle-aged abdominally obese men were treated for eight months with testosterone or with placebo. Testosterone treatment was followed by a decrease of visceral fat mass, measured by computerized tomography, without a change in body mass, subcutaneous fat mass or lean body mass. Insulin resistance, measured by the euglycemic/hyperinsulinemic glucose clamp method, improved and blood glucose, diastolic blood pressure and serum cholesterol decreased with testosterone treatment. A small increase in prostate volume was noted, but serum prostate specific antigen concentrations were unchanged and no adverse functional side-effects were found. Insulin sensitivity improved more in men with relatively low testosterone values at the outset. The mechanisms involved in these changes might act either via effects on visceral fat accumulation, followed by metabolic improvements, and/or via direct effects on muscle insulin sensitivity, as suggested by results of other recent studies. It is concluded that testosterone treatment of middle-aged abdominally obese men gives beneficial effects on well-being and the cardiovascular and diabetes risk profile, results similar to those observed after hormonal replacement therapy in postmenopausal women. |
Knowledge, Attitudes, and Willingness Toward Organ Donation Among Health Professionals in China. | BACKGROUND
The purpose of this study was to assess the knowledge, attitudes, and willingness toward organ donation among health professionals in China.
METHODS
Questionnaires were delivered to 400 health professionals from 7 hospitals in Dalian and 1 hospital in Chaozhou of China between October 2013 and January 2014.
RESULTS
In all, 400 health professionals were approached and 373 valid responses were returned. Over 90% of the participants knew about organ donation, but only 17.4% had taken part in training courses or lectures on the subject. Of the health professionals, 64.9% knew about the organ shortage, and doctors knew more than nurses and nonclinical staff (P < 0.01). Most health professionals (97.3%) knew about brain death, and 68.9% thought brain death was a reasonable criterion for determining death. Doctors showed a higher level of knowledge about brain death than nurses and nonclinical staff (P < 0.01). Altogether, 60.1% approved of deceased donation, whereas only 48.5% approved of living donation. Doctors' attitudes were more positive than those of nurses and nonclinical staff toward both deceased donation (P < 0.01) and living donation (P < 0.05). In all, 49.3% were willing to donate their own organs postmortem, and doctors had a higher willingness to donate postmortem than nurses and nonclinical staff (P < 0.01). The most commonly cited reason (49.2%) for refraining from donation was being "afraid that organs would be picked up inhumanely and the body would be disfigured".
CONCLUSIONS
Health professionals showed less favorable attitudes and lower willingness toward organ donation than the Chinese general public, and a substantial proportion of Chinese health professionals had limited knowledge about organ donation. |
THE SOCIAL PSYCHOLOGY OF FALSE CONFESSIONS: Compliance, Internalization, and Confabulation | An experiment demonstrated that false incriminating evidence can lead people to accept guilt for a crime they did not commit. Subjects in a fast- or slow-paced reaction time task were accused of damaging a computer by pressing the wrong key. All were truly innocent and initially denied the charge. A confederate then said she saw the subject hit the key or did not see the subject hit the key. Compared with subjects in the slow-pace/no-witness group, those in the fast-pace/witness group were more likely to sign a confession, internalize guilt for the event, and confabulate details in memory consistent with that belief. Both legal and conceptual implications are discussed. In criminal law, confession evidence is a potent weapon for the prosecution and a recurring source of controversy. Whether a suspect's self-incriminating statement was voluntary or coerced and whether a suspect was of sound mind are just two of the issues that trial judges and juries consider on a routine basis. To guard citizens against violations of due process and to minimize the risk that the innocent would confess to crimes they did not commit, the courts have erected guidelines for the admissibility of confession evidence. Although there is no simple litmus test, confessions are typically excluded from trial if elicited by physical violence, a threat of harm or punishment, or a promise of immunity or leniency, or without the suspect being notified of his or her Miranda rights. To understand the psychology of criminal confessions, three questions need to be addressed: First, how do police interrogators elicit self-incriminating statements (i.e., what means of social influence do they use)? Second, what effects do these methods have (i.e., do innocent suspects ever confess to crimes they did not commit)? 
Third, when a coerced confession is retracted and later presented at trial, do juries sufficiently discount the evidence in accordance with the law? General reviews of relevant case law and research are available elsewhere (Gudjonsson, 1992; Wrightsman & Kassin, 1993). The present research addresses the first two questions. Informed by developments in case law, the police use various methods of interrogation, including the presentation of false evidence (e.g., fake polygraph, fingerprints, or other forensic test results; staged eyewitness identifications), appeals to God and religion, feigned friendship, and the use of prison informants. A number of manuals are available to advise detectives on how to extract confessions from reluctant crime suspects (Aubry & Caputo, 1965; O'Hara & O'Hara, 1981). The most popular manual is Inbau, Reid, and Buckley's (1986) Criminal Interrogation and Confessions, originally published in 1962, and now in its third edition. Address correspondence to Saul Kassin, Department of Psychology, Williams College, Williamstown, MA 01267. After advising interrogators to set aside a bare, soundproof room absent of social support and distraction, Inbau et al. (1986) describe in detail a nine-step procedure consisting of various specific ploys. In general, two types of approaches can be distinguished. One is minimization, a technique in which the detective lulls the suspect into a false sense of security by providing face-saving excuses, citing mitigating circumstances, blaming the victim, and underplaying the charges. The second approach is one of maximization, in which the interrogator uses scare tactics by exaggerating or falsifying the characterization of evidence, the seriousness of the offense, and the magnitude of the charges. In a recent study (Kassin & McNall, 1991), subjects read interrogation transcripts in which these ploys were used and estimated the severity of the sentence likely to be received. 
The results indicated that minimization communicated an implicit offer of leniency, comparable to that estimated in an explicit-promise condition, whereas maximization implied a threat of harsh punishment, comparable to that found in an explicit-threat condition. Yet although American courts routinely exclude confessions elicited by explicit threats and promises, they admit those produced by contingencies that are pragmatically implied. Although police often use coercive methods of interrogation, research suggests that juries are prone to convict defendants who confess in these situations. In the case of Arizona v. Fulminante (1991), the U.S. Supreme Court ruled that under certain conditions, an improperly admitted coerced confession may be considered upon appeal to have been nonprejudicial, or "harmless error." Yet mock-jury research shows that people find it hard to believe that anyone would confess to a crime that he or she did not commit (Kassin & Wrightsman, 1980, 1981; Sukel & Kassin, 1994). Still, it happens. One cannot estimate the prevalence of the problem, which has never been systematically examined, but there are numerous documented instances on record (Bedau & Radelet, 1987; Borchard, 1932; Rattner, 1988). Indeed, one can distinguish three types of false confession (Kassin & Wrightsman, 1985): voluntary (in which a subject confesses in the absence of external pressure), coerced-compliant (in which a suspect confesses only to escape an aversive interrogation, secure a promised benefit, or avoid a threatened harm), and coerced-internalized (in which a suspect actually comes to believe that he or she is guilty of the crime). This last type of false confession seems most unlikely, but a number of recent cases have come to light in which the police had seized a suspect who was vulnerable (by virtue of his or her youth, intelligence, personality, stress, or mental state) and used false evidence to convince the beleaguered suspect that he or she was guilty. 
In one case that received a great deal of attention, for example, Paul Ingram was charged with rape and a host of Satanic cult crimes that included the slaughter of newborn babies. During 6 months of interrogation, he was hypnotized. (Psychological Science, Vol. 7, No. 3, May 1996; Copyright © 1996 American Psychological Society) |
Risk Factors for Lymphatic Metastasis of Malignant Bone and Soft-Tissue Tumors: A Retrospective Cohort Study of 242 Patients | Metastasis to the lymph nodes is relatively rare in malignant bone and soft-tissue tumors, and its risk factors remain unknown, except for tumors of the lymphogenous histotype, including rhabdomyosarcoma, epithelioid sarcoma, and clear cell sarcoma. The purpose of this study was to identify the risk factors for lymph node metastasis of malignant bone and soft-tissue tumors. We retrospectively reviewed 242 patients with malignant bone and soft-tissue tumors. The predictors of interest for the risk of lymph node metastasis included age, sex, histopathological diagnosis, location(s) of the primary tumor(s), local recurrence, residual tumor(s), and the size of the primary tumors. To identify the risk factors for lymph node metastasis, Cox regression analyses were performed. Among the 242 patients in the current study, 60, 29, and 28 were found to have lung, lymph node, and bone metastases, respectively. In the univariate analyses, the lymphogenous histotype and a primary tumor invading the subcutis were risk factors for lymph node metastasis. In the multivariate analysis, the lymphogenous histotype (P < 0.01) and a primary tumor in the subcutis (P < 0.01) remained significantly associated with a higher risk of lymph node metastasis, with hazard ratios of 5.15 and 3.48, respectively. Lymph node metastasis was detected in malignant bone and soft-tissue tumors more frequently than previously reported, and the risk factors for lymph node metastasis were the lymphogenous histotype and primary tumors invading the subcutis. |
Ambulatory photodynamic therapy using low irradiance inorganic light-emitting diodes for the treatment of non-melanoma skin cancer: an open study. | BACKGROUND/PURPOSE
Conventional photodynamic therapy (PDT) can be inconvenient and uncomfortable. We studied low irradiance PDT using an ambulatory inorganic light-emitting diode.
METHODS
Fifty-three patients with 61 lesions [superficial basal cell carcinoma (n = 30), Bowen's disease (n = 30), and actinic keratosis (AK; n = 1)] were studied. Two treatments of ambulatory PDT were undertaken 1 week apart (one treatment for AK). Clinical response was determined at 3 months, and the treatment cycle was repeated if there was residual disease. The endpoints assessed were pain during treatment (numerical rating scale (NRS); 0-10) and outcome at 1 year. Twenty-three of these patients also received conventional PDT to separate lesions.
RESULTS
The median NRS pain scores during first and second treatment were 2 (range 0-9) and 4 (0-9), respectively. Lesion clearance rate at 1 year after ambulatory PDT was 84% (21/25 lesions in 22 patients). Of the twenty-three patients treated with both ambulatory and conventional PDT, the median NRS was 1 (0-7) and 5 (1.5-9), respectively, with most patients preferring ambulatory PDT.
CONCLUSION
Ambulatory PDT is effective for superficial non-melanoma skin cancer, with 1-year clearance rates comparable to conventional PDT. Low irradiance ambulatory PDT may be less painful and more convenient than conventional PDT. |
Final results of a phase II study of nab-paclitaxel, bevacizumab, and gemcitabine as first-line therapy for patients with HER2-negative metastatic breast cancer | This study examined the efficacy and safety of nanoparticle albumin-bound paclitaxel (nab-P) in combination with bevacizumab (B) and gemcitabine (G) for the first-line treatment of patients with HER2-negative metastatic breast cancer (MBC). In this single-center, open-label phase II trial, patients with HER2-negative MBC received gemcitabine 1500 mg/m2, nab-paclitaxel 150 mg/m2, and bevacizumab 10 mg/kg (each administered intravenously) on days 1 and 15 of a 28-day cycle. The primary end point was progression-free survival (PFS); secondary end points were overall response rate (ORR), complete (CR) and partial (PR) response rates, clinical benefit (ORR + stable disease), overall survival (OS), and safety. Thirty patients were enrolled. One patient was ineligible and was not included in the analysis. Median PFS was 10.4 months (95% CI: 5.6–15.2 months). ORR was 75.9%, comprising eight (27.6%) CRs and 14 (48.3%) PRs; five patients had stable disease (SD) and two patients (6.9%) had progressive disease (PD) as their best response. The clinical benefit rate was 93.1% (27/29) in the overall group and 84.6% in the triple-negative cohort (11/13). The 18-month survival rate was 77.2% (95% CI: 51.1–90.5%). Eight (27.6%) patients experienced grade 3 or 4 toxicity: grade 4 neutropenic fever (n = 1) and grade 3 infection (n = 6), leukopenia, thrombocytopenia, peripheral neuropathy, seizure, shortness of breath, hematuria, and cardiac tamponade (one each). First-line therapy with nab-P, B, and G demonstrated a median PFS of 10.4 months and a 75.9% ORR with acceptable toxicity; this novel combination warrants investigation in a randomized study. |
Traffic Sign Classification Using Deep Inception Based Convolutional Networks | In this work, we propose a novel deep network for traffic sign classification that achieves outstanding performance on GTSRB, surpassing all previous methods. Our deep network consists of spatial transformer layers and a modified version of the inception module specifically designed to capture local and global features together. This design allows our network to precisely classify intra-class samples even under deformations. The use of spatial transformer layers makes the network more robust to deformations such as translation, rotation, and scaling of input images. Unlike existing approaches that rely on hand-crafted features, multiple deep networks with huge numbers of parameters, and data augmentation, our method addresses the concern of exploding parameters and augmentations. We have achieved state-of-the-art performance of 99.81% on the GTSRB dataset. |
QoE-Based Cross-Layer Optimization of Wireless Video with Unperceivable Temporal Video Quality Fluctuation | This paper proposes a novel approach for Quality of Experience (QoE) driven cross-layer optimization for wireless video transmission. We formulate the cross-layer optimization problem with a constraint on the temporal fluctuation of the video quality. Our objective is to minimize the temporal change of the video quality as perceivable quality fluctuations negatively affect the overall quality of experience. The proposed QoE scheme jointly optimizes the application layer and the lower layers of a wireless protocol stack. It allocates network resources and performs rate adaptation such that the fluctuations lie within the range of unperceivable changes. We determine corresponding perception thresholds via extensive subjective tests and evaluate the proposed scheme using an OPNET High Speed Downlink Packet Access (HSDPA) emulator. Our simulation results show that the proposed approach leads to a noticeable improvement of overall user satisfaction for the provided video delivery service when compared to state-of-the-art approaches. |
The effect of isometric exercise of the hand on the synovial blood flow in patients with rheumatoid arthritis measured by color Doppler ultrasound | In 90% of patients with rheumatoid arthritis (RA), the joints of the hand are affected. Studies of grip strength training have not indicated a negative effect on disease activity after training. The introduction of ultrasound Doppler (USD) to measure the increased blood flow induced by inflammation has made it possible to investigate the direct effect of training on blood supply in the synovium. In this case–control study, 24 patients with RA and USD activity in the wrist joint participated. USD activity was measured by the color fraction (CF; colored pixels/total number of pixels in the ROI). Twenty-four patients were assigned to an 8-week grip strength training program. At baseline and after 8 weeks of training, a USD examination of the wrist joint was performed. In the training group, we measured grip strength and pain in the wrist joint. Six patients withdrew from the training because of pain or a change in medication. Eighteen patients served as the control group. There was a modest, nonsignificant decrease in the CF in response to training (1.86%; P = 0.08). Grip strength increased by 8.8% after training (P = 0.055). Pain on motion decreased after training (P = 0.04). No difference in the CF was seen between the training and control groups, either at baseline or at follow-up (P = 0.82 and P = 0.48). Patients withdrawing from training had a significantly higher CF than the other patients (P < 0.001). The results of this study may indicate that flow in the synovium, as assessed by USD, is not affected by grip strength training. |
Music Generation from Statistical Models | This paper discusses the use of statistical models for the problem of musical style imitation. Statistical models are created from extant pieces in a stylistic corpus, with the objective of accurately classifying new pieces. The process of music generation is equated with the problem of sampling from a statistical model; in principle, there is then no need to make the classical distinction between analytic and synthetic models of music. This paper presents several methods for sampling from an analytic statistical model and proposes a new approach that maintains the intra-opus pattern repetition within an extant piece. A major component of creativity is the adaptation of extant art works, and this is also an efficient way to sample pieces from complex statistical models. |
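The core idea, generation as sampling from an analytic model, can be illustrated with a first-order Markov model fitted to a toy note corpus; the paper's intra-opus pattern-repetition machinery is far richer than this sketch, and the corpus below is invented:

```python
import random
from collections import defaultdict

def fit_markov(sequences):
    """Count note-to-note transitions across all pieces in the corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {state: dict(nxt) for state, nxt in counts.items()}

def sample(model, start, length, rng):
    """Generate a new piece by sampling transitions proportionally to counts."""
    out = [start]
    while len(out) < length and out[-1] in model:
        nxt = model[out[-1]]
        states, weights = zip(*nxt.items())
        out.append(rng.choices(states, weights=weights)[0])
    return out

corpus = [["C", "E", "G", "E", "C"], ["C", "G", "E", "C", "G"]]
model = fit_markov(corpus)
piece = sample(model, "C", 8, random.Random(0))
```

Every transition in the sampled piece is one observed in the corpus, which is the sense in which the generator and the style classifier share one underlying model.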
Perceived barriers to effective knowledge sharing in agile software teams | While the literature offers several frameworks that explain barriers to knowledge sharing within software development teams, little is known about differences in how team members perceive these barriers. Based on an in-depth multi-case study of four software projects, we investigate how project managers, developers, testers, and user representatives think about barriers to effective knowledge sharing in agile development. Adapting comparative causal modeling (CCM), we constructed causal maps for each of the four roles and identified overlap and divergence in map constructs and causal linkages. The results indicate that despite certain similarities, the four roles differ in how they perceive and emphasize knowledge sharing barriers. The project managers put primary emphasis on project setting barriers, while the primary concerns of developers, testers, and user representatives were project communication, project organization, and team capability barriers, respectively. Integrating the four causal maps and the agile literature, we propose a conceptual framework with seven types of knowledge sharing barriers and 37 specific barriers. We argue that to bridge communication gaps and create shared understanding in software teams, it is critical to take the revealed concerns of different roles into account. We conclude by discussing our findings in relation to knowledge sharing in agile teams and software teams more generally. |
Refactoring for generalization using type constraints | Refactoring is the process of applying behavior-preserving transformations (called "refactorings") in order to improve a program's design. Associated with a refactoring is a set of preconditions that must be satisfied to guarantee that program behavior is preserved, and a set of source code modifications. An important category of refactorings is concerned with generalization (e.g., Extract Interface for re-routing the access to a class via a newly created interface, and Pull Up Members for moving members into a superclass). For these refactorings, both the preconditions and the set of allowable source code modifications depend on interprocedural relationships between types of variables. We present an approach in which type constraints are used to verify the preconditions and to determine the allowable source code modifications for a number of generalization-related refactorings. This work is implemented in the standard distribution of Eclipse (see www.eclipse.org). |
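The flavor of a type constraint can be shown with a toy check for Extract Interface: a variable declared with class type C may be re-declared with the new interface type I only if every member accessed through that variable is declared in I. This is only a sketch of one constraint; the class, member, and variable names below are invented for illustration:

```python
def can_retype(accessed_members, interface_members):
    """Constraint check: all members used via the variable must exist in I."""
    return set(accessed_members) <= set(interface_members)

interface_list = {"add", "size"}          # members pulled into the new interface
var_uses = {"names": ["add", "size"],     # variable -> members accessed on it
            "scores": ["add", "sort"]}    # 'sort' is not in the interface

retypable = {v: can_retype(uses, interface_list) for v, uses in var_uses.items()}
print(retypable)  # 'names' may take the interface type; 'scores' may not
```

In the paper's setting such constraints are interprocedural and generated over the whole program; the same subset test then both verifies preconditions and delimits which declarations the refactoring is allowed to rewrite.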
Tunable metamaterials for optimization of wireless power transfer systems | Metamaterials (MMs) have been proposed to improve the performance of wireless power transfer (WPT) systems. The performance of identical unit cells having the same transmitter and receiver self-resonance is presented in the literature. This paper presents the optimization of tunable MM for performance improvement in WPT systems. Furthermore, a figure of merit (FOM) is proposed for the optimization of WPT systems with MMs. It is found that both transferred power and power transfer efficiency can be improved significantly by using the proposed FOM and tunable MM, particularly under misaligned conditions. |
Common genetic polymorphisms of microRNA biogenesis pathway genes and breast cancer survival | Although the role of microRNA (miRNA) biogenesis pathway genes in cancer development and progression has been well established, the association between genetic variants of these pathway genes and breast cancer survival is still unknown. We used genotype data available from a previously conducted case–control study to investigate the association between common genetic variations in miRNA biogenesis pathway genes and breast cancer survival. We investigated the possible associations between 41 germ-line single-nucleotide polymorphisms (SNPs) and both disease-free survival (DFS) and overall survival (OS) among 488 breast cancer patients. During the median follow-up of 6.24 years, 90 cases developed disease progression and 48 cases died. Seven SNPs were significantly associated with breast cancer survival. Two SNPs in AGO2 (rs11786030 and rs2292779) and DICER1 rs1057035 were associated with both DFS and OS. Two SNPs in HIWI (rs4759659 and rs11060845) and DGCR8 rs9606250 were associated with DFS, while DROSHA rs874332 and GEMIN4 rs4968104 were associated with OS only. The most significant associations were observed for the variant allele of AGO2 rs11786030, with a 2.62-fold increased risk of disease progression (95% confidence interval (CI), 1.41-4.88), and for the minor allele homozygote of AGO2 rs2292779, with a 2.94-fold increased risk of death (95% CI, 1.52-5.69). We also found cumulative effects of SNPs on DFS and OS. Compared to subjects carrying 0 to 2 high-risk genotypes, those carrying 3 or 4-6 high-risk genotypes had an increased risk of disease progression, with hazard ratios of 2.16 (95% CI, 1.18-3.93) and 4.47 (95% CI, 2.45-8.14), respectively (P for trend, 6.11E-07). Our results suggest that genetic variants in miRNA biogenesis pathway genes may be associated with breast cancer survival. 
Further studies in larger sample size and functional characterizations are warranted to validate these results. |
A machine learning approach for linux malware detection | The increasing number of malware samples is becoming a serious threat to private data as well as to expensive computer resources. Linux is a Unix-based operating system that has gained popularity in recent years. Malware attacks targeting Linux have increased recently, and existing malware detection methods are insufficient to detect malware efficiently. We introduce a novel machine learning approach for identifying malicious Executable and Linkable Format (ELF) files. The system calls are extracted dynamically using the system call tracer strace. In this approach, we identified the best feature set from benign and malware specimens to build a classification model that can separate malware from benign samples efficiently. The experimental results are promising, showing a classification accuracy of 97% in identifying malicious samples. |
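A minimal sketch of this kind of pipeline, assuming syscall sequences have already been captured with strace: each trace becomes a syscall-bigram frequency vector, and a stand-in nearest-centroid classifier takes the place of whatever learner and feature selection the study actually used. The traces below are invented toy data:

```python
from collections import Counter

def bigram_features(trace):
    """Normalized frequencies of consecutive syscall pairs in one trace."""
    total = max(len(trace) - 1, 1)
    return {bg: n / total for bg, n in Counter(zip(trace, trace[1:])).items()}

def distance(f1, f2):
    keys = set(f1) | set(f2)
    return sum((f1.get(k, 0.0) - f2.get(k, 0.0)) ** 2 for k in keys)

def nearest_centroid(train, trace):
    """train: {label: [feature dicts]}; returns the closest class label."""
    feats = bigram_features(trace)
    best = None
    for label, examples in train.items():
        keys = set().union(*examples)
        cen = {k: sum(e.get(k, 0.0) for e in examples) / len(examples) for k in keys}
        d = distance(feats, cen)
        if best is None or d < best[0]:
            best = (d, label)
    return best[1]

benign = [["open", "read", "close"], ["open", "read", "read", "close"]]
malware = [["fork", "execve", "connect"], ["fork", "connect", "execve"]]
train = {"benign": [bigram_features(t) for t in benign],
         "malware": [bigram_features(t) for t in malware]}
print(nearest_centroid(train, ["open", "read", "read", "close"]))  # a benign-like trace
```

Any off-the-shelf classifier could replace the centroid step; the substantive choices are the dynamic trace collection and the feature set.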
Towards Know-how Mapping Using Goal Modeling | In organizing the knowledge in a field of study, it is common to use classification techniques to organize concepts and approaches along dimensions of interest. In technology domains, an advance often appears in the form of a new way or method for achieving an objective. This paper proposes to use goal modeling to map the means-ends knowledge (“know-how”) in a domain. A know-how map highlights the structure of recognized problems and known solutions in the domain, thus facilitating gap identification and prompting new research and innovation. We contrast the proposed goal-oriented approach with a claim-oriented approach, using Web Page Ranking as a sample domain. |
A comparison of accuracy of fall detection algorithms (threshold-based vs. machine learning) using waist-mounted tri-axial accelerometer signals from a comprehensive set of falls and non-fall trials | Falls are the leading cause of injury-related morbidity and mortality among older adults. Over 90 % of hip and wrist fractures and 60 % of traumatic brain injuries in older adults are due to falls. Another serious consequence of falls among older adults is the ‘long lie’ experienced by individuals who are unable to get up and remain on the ground for an extended period of time after a fall. Considerable research has been conducted over the past decade on the design of wearable sensor systems that can automatically detect falls and send an alert to care providers to reduce the frequency and severity of long lies. While most systems described to date incorporate threshold-based algorithms, machine learning algorithms may offer increased accuracy in detecting falls. In the current study, we compared the accuracy of these two approaches in detecting falls by conducting a comprehensive set of falling experiments with 10 young participants. Participants wore waist-mounted tri-axial accelerometers and simulated the most common causes of falls observed in older adults, along with near-falls and activities of daily living. The overall performance of five machine learning algorithms was greater than the performance of five threshold-based algorithms described in the literature, with support vector machines providing the highest combination of sensitivity and specificity. |
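A threshold-based baseline of the kind the study compares against can be sketched as follows: flag a fall when the tri-axial acceleration magnitude shows a free-fall dip (below a lower threshold) followed shortly by an impact spike (above an upper threshold). The thresholds and window length here are illustrative, not taken from the paper:

```python
import math

def magnitude(sample):
    """Euclidean magnitude of one tri-axial accelerometer sample, in g."""
    return math.sqrt(sum(a * a for a in sample))

def detect_fall(samples, low=0.4, high=2.5, window=10):
    mags = [magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m < low:  # candidate free-fall phase
            if any(mm > high for mm in mags[i + 1:i + 1 + window]):
                return True  # impact follows the dip
    return False

standing = [(0.0, 0.0, 1.0)] * 20  # roughly 1 g throughout, no event
fall = [(0.0, 0.0, 1.0)] * 5 + [(0.0, 0.0, 0.1)] * 3 + [(2.0, 2.0, 1.0)] * 2
```

The machine learning alternatives in the study replace the hand-tuned dip-then-spike rule with a classifier trained on windows of such signals, which is where the reported sensitivity/specificity gains come from.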
Reduction of the concentration and total amount of keratan sulphate in synovial fluid from patients with osteoarthritis during treatment with piroxicam. | To study the effects of piroxicam on cartilage metabolism in vivo, a three phase (placebo/piroxicam 20 mg/day by mouth/placebo) double blind controlled trial was conducted in patients with osteoarthritis of the knee joint. Twenty one patients were recruited, 19 of whom (11 women, eight men, median age 70 years) completed the treatment schedule. The knee joint under study was aspirated to dryness at four week intervals. Treatment with piroxicam was accompanied by a decrease in the pain score, an improvement in the functional index, and an increased range of movement. Reductions in the concentration (mean (SEM) 120 (6) to 110 (8) micrograms/ml) and the total amount (1.22 (0.34) to 0.99 (0.37) mg) of keratan sulphate, but not the effusion volume (9.4 (2.5) to 8.3 (2.6) ml) were observed during treatment with piroxicam. These findings are consistent with decreased proteoglycan catabolism during treatment with piroxicam. Neither depressed synthesis nor enhanced clearance of degraded proteoglycan fragments can be excluded, however. |
The Design and Application of RFID Tag System for Logistical Unit | At present, RFID is used in practical fields such as supply chain management and stock management. However, for cost reasons, RFID tags are not suitable for individual items, especially lower-priced goods. In this paper, firstly, the features of the logistical unit in distribution are characterized, and its information is abstracted into a "tree" data structure. Secondly, the logistical unit, with tags conforming to ISO 15693, is identified using SGTIN-96 and SSCC-96. Based on this, a scheme that mixes RFID and barcodes to reduce cost is further proposed. Lastly, a tag operation module (TOM), which maintains the data saved in the tag by establishing index data in it, is introduced. According to our tests, effective operation of the RFID tag can be realized via the TOM. |
Recommendations for the Evaluation of Left Ventricular Diastolic Function by Echocardiography: An Update from the American Society of Echocardiography and the European Association of Cardiovascular Imaging. | Sherif F. Nagueh, Chair, MD, FASEa1, Otto A. Smiseth, Co-Chair, MD, PhDb2, Christopher P. Appleton, MDc1, Benjamin F. Byrd III, MD, FASEd1, Hisham Dokainish, MD, FASEe1, Thor Edvardsen, MD, PhDb2, Frank A. Flachskampf, MD, PhD, FESCf2, Thierry C. Gillebert, MD, PhD, FESCg2, Allan L. Klein, MD, FASEh1, Patrizio Lancellotti, MD, PhD, FESCi2, Paolo Marino, MD, FESCj2, Jae K. Oh, MDk1, Bogdan Alexandru Popescu, MD, PhD, FESC, FASEl2, and Alan D. Waggoner, MHS, RDCSm1, Houston, Texas; Oslo, Norway; Phoenix, Arizona; Nashville, Tennessee; Hamilton, Ontario, Canada; Uppsala, Sweden; Ghent and Liège, Belgium; Cleveland, Ohio; Novara, Italy; Rochester, Minnesota; Bucharest, Romania; and St. Louis, Missouri |
Identification of proteins associated to multi-drug resistance in LoVo human colon cancer cells. | Multi-drug resistance (MDR) limits the effectiveness of chemotherapy. P-glycoprotein encoded by the MDR1 gene, is known to be implicated in MDR phenotype, but other factors could be determinant in MDR. The aim of this study was to investigate new molecular factors potentially associated with the MDR phenotype using a proteomic approach. Two dimensional fluorescence difference gel electrophoresis (2D-DIGE) and MALDI-TOF peptide mass fingerprinting were used to determine differentially expressed proteins between LoVo human colon carcinoma cell line and one of its MDR sublines (LoVo-R1). Thirty-four differentially expressed proteins were identified. They were classified into five groups based on their biological functions: i) proteins involved in energy request pathways, ii) in detoxification pathways, iii) in cell survival activity, iv) in drug transport and v) in chaperone functions. Among these proteins, endothelin 1 and proteasome subunit beta2 regulations were validated by immunofluorescence and Western blotting, respectively, showing complete consistency with 2D-DIGE results. In conclusion, the proteomic approach indicates that multiple mechanisms are simultaneously involved in MDR. These might be useful in the search for new forms of interventional therapeutic approaches for MDR reversal. |
Power minimization in IC design: principles and applications | Low power has emerged as a principal theme in today's electronics industry. The need for low power has caused a major paradigm shift in which power dissipation is as important as performance and area. This article presents an in-depth survey of CAD methodologies and techniques for designing low power digital CMOS circuits and systems and describes the many issues facing designers at architectural, logical, and physical levels of design abstraction. It reviews some of the techniques and tools that have been proposed to overcome these difficulties and outlines the future challenges that must be met to design low power, high performance systems. |
Hop rho iso-alpha acids, berberine, vitamin D3 and vitamin K1 favorably impact biomarkers of bone turnover in postmenopausal women in a 14-week trial | Osteoporosis is a major health issue facing postmenopausal women. Increased production of pro-inflammatory cytokines resulting from declining estrogen leads to increased bone resorption. Nutrition can have a positive impact on osteoporosis prevention and amelioration. The objective of this study was to investigate the impact of targeted phytochemicals and nutrients essential for bone health on bone turnover markers in healthy postmenopausal women. In this 14-week, single-blinded, 2-arm placebo-controlled pilot study, all women were instructed to consume a modified Mediterranean-style low-glycemic-load diet and to engage in limited aerobic exercise; 17 randomized to the placebo and 16 to the treatment arm (receiving 200 mg hop rho iso-alpha acids, 100 mg berberine sulfate trihydrate, 500 IU vitamin D3 and 500 μg vitamin K1, twice daily). Thirty-two women completed the study. Baseline nutrient intake did not differ between arms. At 14 weeks, the treatment arm exhibited an estimated 31% mean reduction (P = 0.02) in serum osteocalcin (a marker of bone turnover), whereas the placebo arm exhibited a 19% increase (P = 0.03) compared to baseline. Serum 25-hydroxyvitamin D (25(OH)D) increased by 13% (P = 0.24) in the treatment arm and decreased by 25% (P < 0.01) in the placebo arm. The between-arm differences for OC and 25(OH)D were statistically significant. Serum IGF-I was increased in both arms, but the increase was more significant in the treatment arm at 14 weeks (P < 0.01). Treatment with hop rho iso-alpha acids, berberine sulfate trihydrate, vitamin D3 and vitamin K1 produced a more favorable bone biomarker profile that supports a healthy bone metabolism. |
Decision making under ambiguity but not under risk is related to problem gambling severity | The aim of the present study was to examine the relationship between problem gambling severity and decision-making situations that vary in two degrees of uncertainty (probability of outcome is known: decision-making under risk; probability of outcome is unknown: decision-making under ambiguity). For this purpose, we recruited 65 gamblers differing in problem gambling severity and 35 normal controls. Decision-making under ambiguity was assessed with the Iowa Gambling Task (IGT) and the Card Playing Task (CPT). Decision-making under risk was assessed with the Coin Flipping Task (CFT) and the Cups Task. In addition, we included an examination of two working memory components (verbal storage and dual tasking). Results show that problem gamblers performed worse than normal controls on both ambiguous and risky decision-making. Higher problem gambling severity scores were associated with poorer performance on ambiguous decision-making tasks (IGT and CPT) but not decision-making under risk. Additionally, we found that dual task performance correlated positively with decision-making under risk (CFT and Cups tasks) but not with decision-making under ambiguity (IGT and CPT). These results suggest that impairments in decision-making under uncertain conditions of problem gamblers may represent an important neurocognitive mechanism in the maintenance of their problem gambling. |
Distributed Learning in Multi-Armed Bandit With Multiple Players | We formulate and study a decentralized multi-armed bandit (MAB) problem. There are M distributed players competing for N independent arms. Each arm, when played, offers i.i.d. reward according to a distribution with an unknown parameter. At each time, each player chooses one arm to play without exchanging observations or any information with other players. Players choosing the same arm collide, and, depending on the collision model, either no one receives reward or the colliding players share the reward in an arbitrary way. We show that the minimum system regret of the decentralized MAB grows with time at the same logarithmic order as in the centralized counterpart where players act collectively as a single entity by exchanging observations and making decisions jointly. A decentralized policy is constructed to achieve this optimal order while ensuring fairness among players and without assuming any pre-agreement or information exchange among players. The proposed policy is based on a time-division fair sharing (TDFS) of the M best arms, and its order optimality is proven under a general reward model. Furthermore, the basic structure of the TDFS policy can be used with any order-optimal single-player policy to achieve order optimality in the decentralized setting. We also establish a lower bound on the system regret for a general class of decentralized policies, to which the proposed policy belongs. This problem finds potential applications in cognitive radio networks, multi-channel communication systems, multi-agent systems, web search and advertising, and social networks.
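The time-division idea in the abstract above can be illustrated with a toy simulation. The sketch below is an assumption-laden simplification, not the paper's exact TDFS policy: each player keeps its own UCB1 indices from its own observations only, and in round t plays the arm it currently ranks at position (player + t) mod M, so players time-share the estimated M best arms; colliding players receive no reward.

```python
import math
import random

def tdfs_ucb1_sim(n_arms, n_players, means, horizon, seed=0):
    """Toy decentralized-bandit simulation in the spirit of TDFS.
    means: Bernoulli reward probabilities of the arms.
    Returns the total system reward accumulated over the horizon."""
    rng = random.Random(seed)
    counts = [[0] * n_arms for _ in range(n_players)]   # per-player play counts
    sums = [[0.0] * n_arms for _ in range(n_players)]   # per-player reward sums
    system_reward = 0.0
    for t in range(1, horizon + 1):
        choices = []
        for p in range(n_players):
            idx = []
            for a in range(n_arms):
                if counts[p][a] == 0:
                    idx.append(float("inf"))  # force initial exploration
                else:
                    mean = sums[p][a] / counts[p][a]
                    idx.append(mean + math.sqrt(2 * math.log(t) / counts[p][a]))
            # rank arms by UCB1 index; take the arm at this player's time slot
            ranked = sorted(range(n_arms), key=lambda a: -idx[a])
            choices.append(ranked[(p + t) % n_players])
        for p, a in enumerate(choices):
            r = 1.0 if rng.random() < means[a] else 0.0
            counts[p][a] += 1  # each player learns from its own draw
            sums[p][a] += r
            if choices.count(a) == 1:  # collision model: no reward on collision
                system_reward += r
    return system_reward
```

With, say, 4 Bernoulli arms of means [0.9, 0.8, 0.1, 0.1] and 2 players, play concentrates on the two best arms with few collisions, which is the behavior the logarithmic-regret result describes.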
Validation of the Korean version of the EORTC QLQ-C30 | This study evaluated the psychometric properties and validity of the Korean version of the EORTC QLQ-C30 (version 3.0). One hundred and seventy patients completed three questionnaires: the EORTC QLQ-C30, the Beck Depression Inventory (BDI), and the Brief Pain Inventory (BPI). Multitrait scaling analyses demonstrated that all scales met the criteria for multidimensional conceptualization, in terms of convergent and discriminant validity. Cronbach's α coefficients for the eight multiple-item scales were greater than 0.70, with the exception of cognitive functioning. All interscale correlations were statistically significant in the expected direction (p < 0.01). Multivariate analyses showed that physical and emotional functioning were significant explanatory variables for the global quality-of-life (QOL) scale (regression coefficients: 0.36, p < 0.001, and 0.37, p < 0.001, respectively). All scales were significantly associated with the pain severity and pain interference scales of the BPI, and with the cognitive-affective and somatic scales of the BDI; the emotional-functioning scale was substantially correlated with both BDI scales. These results demonstrate that the Korean version of the EORTC QLQ-C30 is a valid instrument for evaluating Korean-speaking patients with cancer, and can be used to distinguish clearly between subgroups of patients of differing performance status.
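The internal-consistency criterion used in the validation above (Cronbach's α > 0.70) follows the standard formula α = k/(k−1) · (1 − Σ var(item)/var(total)). A minimal sketch of that computation, purely illustrative and not taken from the study:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.
    items: one list per scale item, each of equal length
    (one score per respondent)."""
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total score per respondent across the k items
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - sum(sample_var(it) for it in items) / sample_var(totals))
```

Perfectly correlated items push α toward 1, while uncorrelated items push it toward 0, which is why 0.70 is a common reliability threshold for multi-item scales.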
Concept and Attention-Based CNN for Question Retrieval in Multi-View Learning | Question retrieval, which aims to find similar versions of a given question, is playing a pivotal role in various question answering (QA) systems. This task is quite challenging, mainly in regard to five aspects: synonymy, polysemy, word order, question length, and data sparsity. In this article, we propose a unified framework to simultaneously handle these five problems. We use each word combined with its corresponding concept information to handle the synonymy and polysemy problems. Concept embedding and word embedding are learned at the same time from both the context-dependent and context-independent views. To handle the word-order problem, we propose a high-level feature-embedded convolutional semantic model that learns question embeddings from concept embeddings and word embeddings. Because some questions are long, we propose a value-based convolutional attention method to enhance the proposed model in learning the key parts of the question and the answer. The proposed high-level feature-embedded convolutional semantic model nicely represents the hierarchical structures of word information and concept information in sentences through its layer-by-layer convolution and pooling. Finally, to resolve data sparsity, we propose using multi-view learning to train the attention-based convolutional semantic model on question-answer pairs. To the best of our knowledge, we are the first to propose simultaneously handling the above five problems in question retrieval within one framework. Experiments on three real question-answering datasets show that the proposed framework significantly outperforms the state-of-the-art solutions.
The Foundations of Literacy Development in Children at Familial Risk of Dyslexia | The development of reading skills is underpinned by oral language abilities: Phonological skills appear to have a causal influence on the development of early word-level literacy skills, and reading-comprehension ability depends, in addition to word-level literacy skills, on broader (semantic and syntactic) language skills. Here, we report a longitudinal study of children at familial risk of dyslexia, children with preschool language difficulties, and typically developing control children. Preschool measures of oral language predicted phoneme awareness and grapheme-phoneme knowledge just before school entry, which in turn predicted word-level literacy skills shortly after school entry. Reading comprehension at 8½ years was predicted by word-level literacy skills at 5½ years and by language skills at 3½ years. These patterns of predictive relationships were similar in both typically developing children and those at risk of literacy difficulties. Our findings underline the importance of oral language skills for the development of both word-level literacy and reading comprehension. |