Immunosuppressive and anti-inflammatory mechanisms of triptolide, the principal active diterpenoid from the Chinese medicinal herb Tripterygium wilfordii Hook. f.
Extracts of Tripterygium wilfordii Hook. f. (leigong teng, Thundergod vine) are effective in traditional Chinese medicine for treatment of immune inflammatory diseases including rheumatoid arthritis, systemic lupus erythematosus, nephritis and asthma. Characterisation of the terpenoids present in extracts of Tripterygium identified triptolide, a diterpenoid triepoxide, as responsible for most of the immunosuppressive, anti-inflammatory and antiproliferative effects observed in vitro. Triptolide inhibits lymphocyte activation and T-cell expression of interleukin-2 at the level of transcription. In all cell types examined, triptolide inhibits nuclear factor-kappaB transcriptional activation at a unique step in the nucleus, after binding to DNA. Further characterisation of the molecular mechanisms of triptolide action will serve to elucidate pathways of immune system regulation.
Compressive Sensing Image Sensors-Hardware Implementation
The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal-oxide-semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed.
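To make the acquisition model concrete, here is a minimal numerical sketch (not taken from the paper) of the measurement and reconstruction pipeline that CS imagers implement in hardware: a random ±1 projection of a sparse scene followed by a greedy orthogonal matching pursuit recovery. The dimensions and the Bernoulli sensing matrix are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse scene: n pixels, only k of them non-zero (a stand-in for a compressible image).
n, k, m = 256, 5, 64                       # signal length, sparsity, number of CS measurements
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# CS acquisition: each measurement is a random +/-1 projection of the whole scene,
# the kind of coded aggregation a CS imager can perform in the optical or electrical domain.
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x

# Orthogonal matching pursuit: greedily pick the column most correlated with the residual.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("reconstruction error:", np.linalg.norm(x_hat - x))   # close to 0 for this toy case
```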
Impact of Individual and Environmental Socioeconomic Status on Peritoneal Dialysis Outcomes: A Retrospective Multicenter Cohort Study
OBJECTIVES We aimed to explore the impacts of individual and environmental socioeconomic status (SES) on the outcome of peritoneal dialysis (PD) in regions with significant SES disparity, through a retrospective multicenter cohort in China. METHODS Overall, 2,171 incident patients from seven PD centers were included. Individual SES was evaluated from yearly household income per person and education level. Environmental SES was represented by regional gross domestic product (GDP) per capita and medical resources. Undeveloped regions were defined as those with regional GDP lower than the median. All-cause and cardiovascular death and initial peritonitis were recorded as outcome events. RESULTS Poorer PD patients and those who lived in undeveloped areas were younger and less educated and bore a heavier burden of medical expenses. They had lower hemoglobin and serum albumin at baseline. Low income independently predicted the highest risks for all-cause or cardiovascular death and initial peritonitis compared with medium and high income. An interaction effect between individual education and regional GDP was observed. In undeveloped regions, patients with an elementary school education or lower were at significantly higher risk for all-cause death, but not cardiovascular death or initial peritonitis, compared with those who attended high school or held a higher diploma. Regional GDP was not associated with any outcome events. CONCLUSION Low personal income independently influenced all-cause and cardiovascular death and initial peritonitis in PD patients. Education level predicted all-cause death only for patients in undeveloped regions. For PD patients in these high-risk situations, integrated care before dialysis and well-constructed PD training programs might be helpful.
Constructing attack scenarios through correlation of intrusion alerts
Traditional intrusion detection systems (IDSs) focus on low-level attacks or anomalies, and raise alerts independently, though there may be logical connections between them. In situations where there are intensive intrusions, not only will actual alerts be mixed with false alerts, but the amount of alerts will also become unmanageable. As a result, it is difficult for human users or intrusion response systems to understand the alerts and take appropriate actions. This paper presents a practical technique to address this issue. The proposed approach constructs attack scenarios by correlating alerts on the basis of prerequisites and consequences of intrusions. Intuitively, the prerequisite of an intrusion is the necessary condition for the intrusion to be successful, while the consequence of an intrusion is the possible outcome of the intrusion. Based on the prerequisites and consequences of different types of attacks, the proposed approach correlates alerts by (partially) matching the consequence of some previous alerts and the prerequisite of some later ones. The contribution of this paper includes a formal framework for alert correlation, the implementation of an off-line alert correlator based on the framework, and the evaluation of our method with the 2000 DARPA intrusion detection scenario specific datasets. Our experience and experimental results have demonstrated the potential of the proposed method and its advantage over alternative methods.
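As an illustration of the prerequisite/consequence matching described above, the following sketch correlates a time-ordered alert stream into scenario edges whenever an earlier alert's consequence intersects a later alert's prerequisite. The three-entry knowledge base, alert types, and predicates are made up for illustration and are not from the paper.

```python
# Hypothetical knowledge base: for each alert type, the predicates it requires
# and the predicates it can establish if the intrusion succeeds.
KB = {
    "PortScan":       {"prereq": set(),                 "conseq": {"knows_open_ports"}},
    "SadmindExploit": {"prereq": {"knows_open_ports"},  "conseq": {"root_access"}},
    "InstallDDoS":    {"prereq": {"root_access"},       "conseq": {"ddos_agent_installed"}},
}

# Alerts ordered by time; correlate a later alert with an earlier one when the
# consequence of the earlier alert (partially) matches the later alert's prerequisite.
alerts = [("a1", "PortScan"), ("a2", "SadmindExploit"), ("a3", "InstallDDoS")]

edges = []
for i, (aid_i, type_i) in enumerate(alerts):
    for aid_j, type_j in alerts[i + 1:]:
        if KB[type_i]["conseq"] & KB[type_j]["prereq"]:
            edges.append((aid_i, aid_j))

print(edges)   # [('a1', 'a2'), ('a2', 'a3')] -- a three-step attack scenario
```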
DarT: The embryo test with the Zebrafish Danio rerio--a general model in ecotoxicology and toxicology.
The acute fish test is an animal test whose ecotoxicological relevance is worthy of discussion. The primary aim of protection in ecotoxicology is the population and not the individual. Furthermore, the concentration of pollutants in the environment is normally not in the lethal range. Therefore the acute fish test covers solely the situation after chemical spills. Nevertheless, acute fish toxicity data still belong to the base set used for the assessment of chemicals. The embryo test with the zebrafish Danio rerio (DarT) is recommended as a substitute for the acute fish test. For validation, an international laboratory comparison test was carried out. A summary of the results is presented in this paper. Based on the promising results of testing chemicals and waste water, the test design was validated by the DIN working group "7.6 Fischei-Test". A normed test guideline for testing waste water with fish is available. The test duration is short (48 h) and within the test different toxicological endpoints can be examined. Endpoints from the embryo test are suitable for QSAR studies. Besides the use in ecotoxicology, the introduction as a toxicological model was investigated. Disturbance of pigmentation and effects on the frequency of heart beat were examined. A further important application is the testing of teratogenic chemicals. Based on the results, DarT could be a screening test within preclinical studies.
Adversarial Learning of Semantic Relevance in Text to Image Synthesis
We describe a new approach that improves the training of generative adversarial nets (GANs) for synthesizing diverse images from a text input. Our approach is based on the conditional version of GANs and expands on previous work leveraging an auxiliary task in the discriminator. Our generated images are not limited to certain classes and do not suffer from mode collapse while semantically matching the text input. A key to our training methods is how to form positive and negative training examples with respect to the class label of a given image. Instead of selecting random training examples, we perform negative sampling based on the semantic distance from a positive example in the class. We evaluate our approach using the Oxford-102 flower dataset, adopting the inception score and multi-scale structural similarity index (MS-SSIM) metrics to assess discriminability and diversity of the generated images. The empirical results indicate greater diversity in the generated images, especially when we gradually select more negative training examples closer to a positive example in the semantic space.
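The negative-sampling idea can be sketched as follows: candidates are ranked by semantic distance to the positive example, and the sampling pool is gradually restricted to nearer candidates as training progresses. The embedding dimensionality and the shrinking schedule below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_negative(pos_embedding, candidate_embeddings, progress):
    """Draw a negative text embedding whose semantic distance to the positive example
    shrinks as training progresses (progress goes from 0.0 to 1.0)."""
    dists = np.linalg.norm(candidate_embeddings - pos_embedding, axis=1)
    order = np.argsort(dists)                               # nearest candidates first
    # Start by sampling among all candidates; gradually restrict to the nearest 10%.
    pool_size = max(1, int(len(order) * (1.0 - 0.9 * progress)))
    return candidate_embeddings[rng.choice(order[:pool_size])]

positive = rng.normal(size=128)                    # embedding of the matching caption
candidates = rng.normal(size=(1000, 128))          # embeddings of other captions
neg_early = sample_negative(positive, candidates, progress=0.1)   # broad, easy negatives
neg_late = sample_negative(positive, candidates, progress=0.9)    # harder negatives near the positive
```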
A stepwise approach to insulin therapy in patients with type 2 diabetes mellitus and basal insulin treatment failure.
OBJECTIVE To determine whether 1 or 2 preprandial injections before the meals of greatest glycemic impact can be as effective as 3 preprandial injections in patients with type 2 diabetes mellitus and basal insulin treatment failure. METHODS This was an open-label, parallel-group, 1:1:1 randomized study of adults with type 2 diabetes mellitus on oral antidiabetic drugs with glycated hemoglobin (A1C) levels of 8.0% or greater. After a 14-week run-in with insulin glargine, patients with an A1C level greater than 7.0% were randomly assigned to 1, 2, or 3 time(s) daily insulin glulisine for 24 weeks. Changes in A1C from randomization to study end; percentage of patients achieving an A1C level less than 7.0%; changes in A1C, fasting glucose concentrations, and weight at individual study points; and safety (adverse events and hypoglycemia) were assessed throughout the study. RESULTS Three hundred forty-three of 631 patients (54%) completing the run-in phase with insulin glargine were randomly assigned to treatment arms. During the randomization phase, A1C reductions with insulin glulisine once or twice daily were noninferior to insulin glulisine 3 times daily (confidence intervals: -0.39 to 0.36 and -0.30 to 0.43; P>.5 for both). However, more patients met the target A1C with 3 preprandial injections (46 [46%]) than with 2 injections (34 [33%]) or 1 injection (30 [30%]). Severe hypoglycemia occurred in twice as many patients receiving 3 preprandial injections (16%) compared with those receiving 2 injections (8%) and 1 injection (7%), but these differences did not reach significance. CONCLUSION This study provides evidence that initiation of prandial insulin in a simplified stepwise approach is an effective alternative to the current routine 3 preprandial injection basal-bolus approach.
Time-Aware Authorship Attribution for Short Text Streams
Identifying the authors of short texts on the Internet or social media-based communication systems is an important tool against fraud and cybercrime. Besides the challenges raised by the limited length of these short messages, the evolving language and writing styles of their authors make authorship attribution difficult. Most current short text authorship attribution approaches only address the challenge of limited text length. However, neglecting the second challenge may lead to poor authorship attribution performance for authors who change their writing styles. In this paper, we analyse the temporal changes in word usage by authors of tweets and emails and, based on this analysis, we propose an approach to estimate the dynamicity of authors' word usage. The proposed approach is inspired by time-aware language models and can be employed in any time-unaware authorship attribution method. Our experiments on tweets and the Enron email dataset show that the proposed time-aware authorship attribution approach significantly outperforms baselines that neglect the dynamicity of authors.
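A minimal way to capture this dynamicity is to weight each message's word counts by its recency when building an author profile. The sketch below uses an exponential decay with an assumed 90-day half-life; it illustrates time-aware weighting, not the paper's exact model.

```python
import math
from collections import Counter, defaultdict

def author_profile(messages, query_time, half_life_days=90.0):
    """Build a time-aware unigram profile: each message's word counts are
    down-weighted exponentially with its age relative to `query_time`."""
    profile = defaultdict(float)
    for timestamp_days, text in messages:
        age = query_time - timestamp_days
        weight = 0.5 ** (age / half_life_days)
        for word, count in Counter(text.lower().split()).items():
            profile[word] += weight * count
    total = sum(profile.values()) or 1.0
    return {word: value / total for word, value in profile.items()}

# Example: the recent message dominates the profile even though both are equally long.
msgs = [(0.0, "stocks look bearish today"), (350.0, "great gig last night honestly")]
print(author_profile(msgs, query_time=365.0))
```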
CYCLOSA: Decentralizing Private Web Search through SGX-Based Browser Extensions
By regularly querying Web search engines, users (unconsciously) disclose large amounts of their personal data as part of their search queries, some of which might reveal sensitive information (e.g. health issues, sexual, political or religious preferences). Several solutions exist to allow users to query search engines with improved privacy protection. However, these solutions suffer from a number of limitations: some are subject to user re-identification attacks, while others lack scalability or are unable to provide accurate results. This paper presents CYCLOSA, a secure, scalable and accurate private Web search solution. CYCLOSA improves security by relying on trusted execution environments (TEEs) as provided by Intel SGX. Further, CYCLOSA proposes a novel adaptive privacy protection solution that reduces the risk of user re-identification. CYCLOSA sends fake queries to the search engine and dynamically adapts their count according to the sensitivity of the user query. In addition, CYCLOSA achieves scalability as it is fully decentralized, spreading the load of distributing fake queries among other nodes. Finally, CYCLOSA preserves the accuracy of Web search results as it handles the real query and the fake queries separately, in contrast to other existing solutions that mix fake and real query results.
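The adaptive protection step can be pictured with a toy policy in which the number of fake queries grows with a sensitivity score of the real query; the base count and scaling factor below are arbitrary illustrative values, not CYCLOSA's parameters.

```python
def fake_query_count(sensitivity, base=3, max_extra=12):
    """Toy policy: the more sensitive the real query (score in [0, 1]),
    the more fake queries are generated and spread across other nodes."""
    return base + round(max_extra * sensitivity)

print(fake_query_count(0.1))   # 4 fake queries for a mundane query
print(fake_query_count(0.9))   # 14 fake queries for a sensitive one
```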
Learning to Select Pre-Trained Deep Representations with Bayesian Evidence Framework
We propose a Bayesian evidence framework to facilitate transfer learning from pre-trained deep convolutional neural networks (CNNs). Our framework is formulated on top of a least squares SVM (LS-SVM) classifier, which is simple and fast in both training and testing, and achieves competitive performance in practice. The regularization parameters of the LS-SVM are estimated automatically, without grid search or cross-validation, by maximizing the evidence, which is also a useful measure for selecting the best performing CNN out of multiple candidates for transfer learning. The evidence is optimized efficiently by employing Aitken's delta-squared process, which accelerates the convergence of the fixed point update. The proposed Bayesian evidence framework also provides a good solution for identifying the best ensemble of heterogeneous CNNs through a greedy algorithm. Our Bayesian evidence framework for transfer learning is tested on 12 visual recognition datasets and consistently achieves state-of-the-art performance in terms of prediction accuracy and modeling efficiency.
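Aitken's delta-squared process mentioned above is a standard fixed-point accelerator; the sketch below applies it to a generic scalar map (the cosine fixed point, used purely as a placeholder for the paper's evidence update).

```python
import math

def aitken_fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Accelerate the fixed-point iteration x_{n+1} = g(x_n) with Aitken's delta-squared
    extrapolation: x* ~= x_n - (x_{n+1} - x_n)**2 / (x_{n+2} - 2*x_{n+1} + x_n)."""
    x = x0
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:
            return x2
        x_next = x - (x1 - x) ** 2 / denom
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Placeholder map (NOT the paper's evidence update): x = cos(x).
print(aitken_fixed_point(math.cos, 1.0))   # ~0.739085, reached in a handful of iterations
```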
SiGe Analog AGC Circuit for an 802.11a WLAN Direct Conversion Receiver
This brief presents a baseband automatic gain control (AGC) circuit for an IEEE 802.11a wireless local area network (WLAN) direct conversion receiver. The whole receiver is to be fully integrated in a low-cost 0.25-μm, 75-GHz SiGe bipolar complementary metal-oxide-semiconductor (BiCMOS) process; thus, the AGC has been implemented in this technology by employing newly designed cells, such as a linear variable gain amplifier (VGA) and a fast-settling peak detector. Due to the stringent settling-time constraints of this system, a feedforward gain control architecture is proposed to achieve fast convergence. The proposed AGC is composed of two coarse-gain stages and a fine-gain stage, with a feedforward control loop for each stage. It converges with a gain error of below ±1 dB in less than 3.2 μs, whereas the power and area consumption are 13.75 mW and 0.225 mm², respectively.
MindfulWatch: A Smartwatch-Based System For Real-Time Respiration Monitoring During Meditation
With a wealth of scientifically proven health benefits, meditation was enjoyed by about 18 million people in the U.S. alone, as of 2012. Yet, there remains a stunning lack of convenient tools for promoting long-term and effective meditation practice. In this paper, we present MindfulWatch, a practical smartwatch-based sensing system that monitors respiration in real time during meditation -- offering essential biosignals that can potentially be used to empower various future applications such as tracking changes in breathing pattern, offering real-time guidance, and providing an accurate bio-marker for meditation research. To this end, MindfulWatch is designed to be convenient for everyday use with no training required. Operating solely on a smartwatch, MindfulWatch can immediately reach the growing population of smartwatch users, making it ideal for longitudinal data collection for meditation studies. Specifically, it utilizes motion sensors to sense the subtle “micro” wrist rotation (0.01 rad/s) induced by respiration. To accurately capture breathing, we developed a novel self-adaptive model that tracks changes in both breathing pattern and meditation posture over time. MindfulWatch was evaluated on data from 36 real-world meditation sessions (8.7 hours, 11 subjects). The results suggest that MindfulWatch offers reliable real-time respiratory timing measurement (70% of errors under 0.5 seconds).
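A simple baseline for the sensing idea is to estimate breathing rate from one axis of wrist angular velocity by locating the dominant spectral peak inside the typical respiration band. This sketch is not the paper's self-adaptive model; the sampling rate, band limits, and synthetic signal are illustrative assumptions.

```python
import numpy as np

def breathing_rate_bpm(gyro_axis, fs):
    """Estimate breaths per minute from one gyroscope axis (rad/s) sampled at fs Hz
    by locating the dominant FFT peak inside the 0.1-0.5 Hz respiration band."""
    sig = gyro_axis - np.mean(gyro_axis)
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq

# Synthetic test: 0.25 Hz breathing (15 breaths/min) riding on sensor noise.
fs = 50.0
t = np.arange(0, 120, 1 / fs)
gyro = 0.01 * np.sin(2 * np.pi * 0.25 * t) + 0.002 * np.random.default_rng(0).normal(size=t.size)
print(breathing_rate_bpm(gyro, fs))   # ~15.0
```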
Industrial-Size Scheduling with ASP+CP
Answer Set Programming (ASP) combines a powerful, theoretically principled knowledge representation formalism and powerful solvers. To improve efficiency of computation on certain classes of problems, researchers have recently developed hybrid languages and solvers, combining ASP with language constructs and solving techniques from Constraint Programming (CP). The resulting ASP+CP solvers exhibit remarkable performance on “toy” problems. To the best of our knowledge, however, no hybrid ASP+CP language and solver have been used in practical, industrial-size applications. In this paper, we report on the first such successful application, consisting of the use of the hybrid ASP+CP system ezcsp to solve sophisticated industrial-size scheduling problems.
Overview of the 2017 WHO Classification of Pituitary Tumors.
This review discusses the main changes in the upcoming fourth edition of the WHO Classification of Tumors of the Pituitary Gland, emphasizing histopathological and molecular genetics aspects of pituitary neuroendocrine tumors (i.e., pituitary adenomas) and some of the non-neuroendocrine tumors involving the pituitary gland. Instead of a formal review, we introduce the highlights of the new WHO classification by answering select questions relevant to practising pathologists. The revised classification of pituitary adenomas, in addition to hormone immunohistochemistry, recognizes the role of other immunohistochemical markers including but not limited to pituitary transcription factors. Reflecting this novel approach, the fourth edition of the WHO classification has abandoned the concept of "a hormone-producing pituitary adenoma" and adopted a pituitary adenohypophyseal cell lineage designation of the adenomas, with subsequent categorization of histological variants according to hormone content and specific histological and immunohistochemical features. This new classification does not require routine ultrastructural examination of these tumors. The new definition of the null cell adenoma requires the demonstration of immunonegativity for pituitary transcription factors and adenohypophyseal hormones. Moreover, the term atypical pituitary adenoma is no longer recommended. In addition to accurate tumor subtyping, assessment of the tumor proliferative potential by mitotic count and Ki-67 index, together with other clinical parameters such as tumor invasion, is strongly recommended in individual cases for the consideration of clinically aggressive adenomas. This classification also recognizes some subtypes of pituitary neuroendocrine tumors as "high-risk pituitary adenomas" due to their clinically aggressive behavior; these include the sparsely granulated somatotroph adenoma, the lactotroph adenoma in men, the Crooke's cell adenoma, the silent corticotroph adenoma, and the newly introduced plurihormonal Pit-1-positive adenoma (previously known as silent subtype III pituitary adenoma). A further novel aspect of the new WHO classification is the definition of the spectrum of thyroid transcription factor-1-expressing pituitary tumors of the posterior lobe as representing a morphological spectrum of a single nosological entity. These tumors include the pituicytoma, the spindle cell oncocytoma, the granular cell tumor of the neurohypophysis, and the sellar ependymoma.
Using imagination to understand the neural basis of episodic memory.
Functional MRI (fMRI) studies investigating the neural basis of episodic memory recall, and the related task of thinking about plausible personal future events, have revealed a consistent network of associated brain regions. Surprisingly little, however, is understood about the contributions individual brain areas make to the overall recollective experience. To examine this, we used a novel fMRI paradigm in which subjects had to imagine fictitious experiences. In contrast to future thinking, this results in experiences that are not explicitly temporal in nature or as reliant on self-processing. By using previously imagined fictitious experiences as a comparison for episodic memories, we identified the neural basis of a key process engaged in common, namely scene construction, involving the generation, maintenance and visualization of complex spatial contexts. This was associated with activations in a distributed network, including hippocampus, parahippocampal gyrus, and retrosplenial cortex. Importantly, we disambiguated these common effects from episodic memory-specific responses in anterior medial prefrontal cortex, posterior cingulate cortex and precuneus. These latter regions may support self-schema and familiarity processes, and contribute to the brain's ability to distinguish real from imaginary memories. We conclude that scene construction constitutes a common process underlying episodic memory and imagination of fictitious experiences, and suggest it may partially account for the similar brain networks implicated in navigation, episodic future thinking, and the default mode. We suggest that additional brain regions are co-opted into this core network in a task-specific manner to support functions such as episodic memory that may have additional requirements.
Comparison of erythropoietin resistance in hemodialysis patients using calcitriol, cinacalcet, or paricalcitol.
The erythropoiesis-stimulating agent (ESA) hyporesponsiveness index (EHRI), calculated as the weekly dose of EPO divided by weight (kg) divided by hemoglobin level (g/dL), has been considered useful for assessing ESA resistance. Recent evidence suggests that active vitamin D, cinacalcet, and paricalcitol use may be related to lower ESA resistance. We conducted this observational cross-sectional study to investigate ESA resistance, as calculated by the EHRI, among patients using calcitriol, cinacalcet, and paricalcitol. Participants underwent medical history taking, physical examination, biochemical analysis, and calculation of dialysis adequacy and EHRI. Sixty-five patients did not receive any treatment with vitamin D, paricalcitol, or cinacalcet (group 1), 41 were taking only vitamin D (group 2), 50 were taking only paricalcitol (group 3), 19 were taking only cinacalcet (group 4), and 21 were taking paricalcitol + cinacalcet (group 5). The EHRI values for groups 1, 2, 3, 4, and 5 were 11.36 ± 8.72, 11.58 ± 5.72, 8.29 ± 5.54, 9.49 ± 4.61, and 8.91 ± 4.44, respectively (P = .034). Post hoc analysis showed that the EHRI differed between group 1 and group 3 (P = .017) and between group 2 and group 3 (P = .006). In linear regression analysis, use of paricalcitol was independently associated with EHRI. In conclusion, paricalcitol use was associated with lower EHRI levels as a measure of ESA resistance.
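The EHRI definition quoted above translates directly into a small helper; the numbers in the example are illustrative only, not patient data from the study.

```python
def ehri(weekly_epo_dose_iu, weight_kg, hemoglobin_g_dl):
    """ESA hyporesponsiveness index: weekly EPO dose divided by body weight (kg)
    divided by hemoglobin level (g/dL)."""
    return weekly_epo_dose_iu / weight_kg / hemoglobin_g_dl

# Illustrative example: 6000 IU/week, 70 kg, Hb 10.5 g/dL -> EHRI ~ 8.16
print(round(ehri(6000, 70, 10.5), 2))
```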
Engineering Secure Software and Systems
Policy-based access control is a technology that achieves separation of concerns by evaluating an externalized policy at each access attempt. While this approach is well established for request-response applications, it is not supported for database queries of data-driven applications, especially for attribute-based policies. In particular, under this approach search operations in such applications scale poorly with the size of the data set, because access decisions are influenced by dynamic runtime conditions. This paper proposes a scalable application-level middleware solution that performs runtime injection of the appropriate rules into the original search query, so that the result set of the search includes only items to which the subject is entitled. Our evaluation shows that our method scales far better than the current state-of-practice approach that supports policy-based access control.
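The core idea of runtime rule injection can be sketched as rewriting the search query with policy-derived predicates before it reaches the database. The attribute-based rule and schema below are hypothetical examples, not the paper's middleware or policy language.

```python
def rewrite_query(base_sql, subject_attrs):
    """Append policy-derived predicates so the database returns only rows the
    subject is entitled to see (toy attribute-based rule, hypothetical schema)."""
    conditions, params = [], []
    # Hypothetical rule: non-managers may only see non-confidential documents
    # belonging to their own department.
    if subject_attrs.get("role") != "manager":
        conditions.append("department = ? AND confidential = 0")
        params.append(subject_attrs["department"])
    if conditions:
        base_sql += " AND " + " AND ".join(conditions)
    return base_sql, params

sql, params = rewrite_query(
    "SELECT id, title FROM documents WHERE published = 1",
    {"role": "analyst", "department": "sales"},
)
print(sql)      # ... WHERE published = 1 AND department = ? AND confidential = 0
print(params)   # ['sales']
```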
Machines that Go 'Ping': Medical Technology and Health Expenditures in OECD Countries.
Technology is believed to be a major determinant of increasing health spending. The main difficulty in quantifying its effect is finding suitable proxies for medical technological innovation. This paper's main contribution is the use of data on approved medical devices and drugs as proxies for medical technology. The effects of these variables on total real per capita health spending are estimated using a panel model for 18 Organisation for Economic Co-operation and Development (OECD) countries covering the period 1981-2012. The results confirm the substantial cost-increasing effect of medical technology, which accounts for almost 50% of the explained historical growth of spending. Despite the overall net positive effect of technology, the effect of two subgroups of approvals on expenditure is significantly negative. These subgroups can be thought of as representing 'incremental medical innovation', whereas the positive effects are related to radically innovative pharmaceutical products and devices. A separate time series model was estimated for the USA because the FDA approval data in fact only apply to the USA, while they serve as proxies for the other OECD countries. Our empirical model includes an indicator of obesity, and estimations confirm the substantial contribution of this lifestyle variable to health spending growth in the countries studied.
Modelling mobility in disaster area scenarios
This paper provides a model that realistically represents the movements in a disaster area scenario. The model is based on an analysis of tactical issues of civil protection. This analysis yields characteristics influencing network performance in public safety communication networks, such as heterogeneous area-based movement, obstacles, and joining/leaving of nodes. As these characteristics cannot be modelled with existing mobility models, we introduce a new disaster area mobility model. To examine the impact of our more realistic modelling, we compare it to existing models (modelling the same scenario) using different pure movement and link-based metrics. The new model shows specific characteristics, such as heterogeneous node density. Finally, the impact of the new model is evaluated in an exemplary simulative network performance analysis. The simulations show that the new model discloses new information and has a significant impact on performance analysis.
Software Measurement: A Necessary Scientific Basis
Software measurement, like measurement in any other discipline, must adhere to the science of measurement if it is to gain widespread acceptance and validity. The observation of some very simple, but fundamental, principles of measurement can have an extremely beneficial effect on the subject. Measurement theory is used to highlight both weaknesses and strengths of software metrics work, including work on metrics validation. We identify a problem with the well-known Weyuker properties, but also show that a criticism of these properties by Cherniavsky and Smith is invalid. We show that the search for general software complexity measures is doomed to failure. However, the theory does help us to define and validate measures of specific complexity attributes. Above all, we are able to view software measurement in a very wide perspective, rationalising and relating its many diverse activities.
Constraints on protein evolution and the age of the eubacteria/eukaryote split.
This article discusses research done by Doolittle et al. to determine the evolutionary distance between eubacteria and eukaryotes. Several equations for estimating evolutionary distances and divergence times are considered. Molecular clock calculations and their effects on these estimates are discussed. The authors conclude that their research supports the range of dates for the cenancestor of modern eubacteria, archaebacteria, and eukaryotes estimated by Doolittle et al.
Management challenges in creating value from business analytics
The popularity of big data and business analytics has increased tremendously in the last decade, and a key challenge for organizations is understanding how to leverage them to create business value. However, while the literature acknowledges the importance of these topics, little work has addressed them from the organization's point of view. This paper investigates the challenges faced by organizational managers seeking to become more data- and information-driven in order to create value. The empirical research comprised a mixed methods approach using (1) a Delphi study with practitioners through various forums and (2) interviews with business analytics managers in three case organizations. The case studies reinforced the Delphi findings and highlighted several challenge focal areas: organizations need a clear data and analytics strategy, the right people to effect a data-driven cultural change, and to consider data and information ethics when using data for competitive advantage. Further, becoming data-driven is not merely a technical issue; it demands that organizations firstly organize their business analytics departments to comprise business analysts, data scientists, and IT personnel, and secondly align that business analytics capability with their business strategy in order to tackle the analytics challenge in a systemic and joined-up way. As a result, this paper presents a business analytics ecosystem for organizations that contributes to the body of scholarly knowledge by identifying key business areas and functions to address to achieve this transformation.
Trastuzumab in combination with chemotherapy versus chemotherapy alone for treatment of HER2-positive advanced gastric or gastro-oesophageal junction cancer (ToGA): a phase 3, open-label, randomised controlled trial
BACKGROUND Trastuzumab, a monoclonal antibody against human epidermal growth factor receptor 2 (HER2; also known as ERBB2), was investigated in combination with chemotherapy for first-line treatment of HER2-positive advanced gastric or gastro-oesophageal junction cancer. METHODS ToGA (Trastuzumab for Gastric Cancer) was an open-label, international, phase 3, randomised controlled trial undertaken in 122 centres in 24 countries. Patients with gastric or gastro-oesophageal junction cancer were eligible for inclusion if their tumours showed overexpression of HER2 protein by immunohistochemistry or gene amplification by fluorescence in-situ hybridisation. Participants were randomly assigned in a 1:1 ratio to receive a chemotherapy regimen consisting of capecitabine plus cisplatin or fluorouracil plus cisplatin given every 3 weeks for six cycles or chemotherapy in combination with intravenous trastuzumab. Allocation was by block randomisation stratified by Eastern Cooperative Oncology Group performance status, chemotherapy regimen, extent of disease, primary cancer site, and measurability of disease, implemented with a central interactive voice recognition system. The primary endpoint was overall survival in all randomised patients who received study medication at least once. This trial is registered with ClinicalTrials.gov, number NCT01041404. FINDINGS 594 patients were randomly assigned to study treatment (trastuzumab plus chemotherapy, n=298; chemotherapy alone, n=296), of whom 584 were included in the primary analysis (n=294; n=290). Median follow-up was 18.6 months (IQR 11-25) in the trastuzumab plus chemotherapy group and 17.1 months (9-25) in the chemotherapy alone group. Median overall survival was 13.8 months (95% CI 12-16) in those assigned to trastuzumab plus chemotherapy compared with 11.1 months (10-13) in those assigned to chemotherapy alone (hazard ratio 0.74; 95% CI 0.60-0.91; p=0.0046). The most common adverse events in both groups were nausea (trastuzumab plus chemotherapy, 197 [67%] vs chemotherapy alone, 184 [63%]), vomiting (147 [50%] vs 134 [46%]), and neutropenia (157 [53%] vs 165 [57%]). Rates of overall grade 3 or 4 adverse events (201 [68%] vs 198 [68%]) and cardiac adverse events (17 [6%] vs 18 [6%]) did not differ between groups. INTERPRETATION Trastuzumab in combination with chemotherapy can be considered as a new standard option for patients with HER2-positive advanced gastric or gastro-oesophageal junction cancer. FUNDING F Hoffmann-La Roche.
Familles de caractères de groupes de réflexions complexes
We study certain types of blocks of Hecke algebras associated with complex reflection groups, which generalise the families of characters defined by Lusztig for Weyl groups. We determine these blocks for the spetsial reflection groups and establish a compatibility theorem between families and d-Harish-Chandra series.
Content-Based Table Retrieval for Web Queries
Understanding the connections between unstructured text and semi-structured tables is an important yet neglected problem in natural language processing. In this work, we focus on content-based table retrieval. Given a query, the task is to find the most relevant table from a collection of tables. Further progress in this area requires powerful models of semantic matching and richer training and evaluation resources. To this end, we present a ranking-based approach, and implement both carefully designed features and neural network architectures to measure the relevance between a query and the content of a table. Furthermore, we release an open-domain dataset that includes 21,113 web queries for 273,816 tables. We conduct comprehensive experiments on both real-world and synthetic datasets. The results verify the effectiveness of our approach and highlight the challenges of this task.
Changes in implicit and explicit self-esteem following cognitive and psychodynamic therapy in social anxiety disorder.
The present investigation is the first to analyse changes in implicit and explicit self-esteem following cognitive therapy (CT) and psychodynamic therapy (PDT) in social anxiety disorder (SAD). We assessed a sub-sample of patients with SAD (n=27 per treatment group, n=12 waitlist condition) in the course of a randomized controlled trial prior to and following individual treatment or wait assessment with an Implicit Association Test and the Rosenberg Self-Esteem Scale. Both CT and PDT consisted of 25 sessions. Treatments were effective in enhancing implicit and explicit self-esteem. In CT and PDT, changes in explicit self-esteem were associated with SAD symptom change. No such relationships were found in implicit self-esteem. The results seem to indicate that both CT and PDT are effective in establishing a positive implicit and explicit self-esteem in SAD. The differential relationships of changes in implicit and explicit self-esteem to treatment effects on social phobic symptoms are discussed.
Global Correlation Based Ground Plane Estimation Using V-Disparity Image
This paper presents the estimation of the position of the ground plane for the navigation of on-road or off-road vehicles, in particular for obstacle detection using stereo vision. Ground plane estimation plays an important role in stereo vision based obstacle detection tasks. The V-disparity image is widely used for ground plane estimation. However, it relies heavily on distinct road features, which may not exist. Here, we introduce a global correlation method to extract the position of the ground plane in the V-disparity image even without distinct road features.
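For readers unfamiliar with the representation, the sketch below builds a V-disparity image from a dense disparity map and fits the ground-plane line through it. The least-squares line fit is a simplification standing in for the paper's global correlation step; array shapes and validity conventions are assumptions.

```python
import numpy as np

def v_disparity(disparity_map, max_disp):
    """Histogram each image row's disparities: rows belonging to the road produce
    a strong slanted line in this (row x disparity) image."""
    h, _ = disparity_map.shape
    v_disp = np.zeros((h, max_disp), dtype=np.int32)
    for v in range(h):
        row = disparity_map[v]
        valid = row[(row >= 0) & (row < max_disp)].astype(int)
        np.add.at(v_disp[v], valid, 1)
    return v_disp

def fit_ground_line(v_disp):
    """Least-squares fit d = a*v + b through the strongest bin of each lower-image row,
    a simplification of correlating the profile against candidate ground lines."""
    h = v_disp.shape[0]
    lower_rows = np.arange(h // 2, h)           # the ground usually occupies the lower half
    disps = np.argmax(v_disp[lower_rows], axis=1)
    a, b = np.polyfit(lower_rows, disps, 1)
    return a, b                                  # ground plane: disparity = a*row + b

# Synthetic example: a flat road whose disparity grows linearly towards the image bottom.
h, w, max_d = 240, 320, 64
rows = np.arange(h).reshape(-1, 1)
disp_map = np.clip(0.2 * (rows - 60), 0, max_d - 1) * np.ones((1, w))
a, b = fit_ground_line(v_disparity(disp_map, max_d))
print(a, b)   # slope ~0.2, consistent with the synthetic road
```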
Multi‐Center, Community‐Based Cardiac Implantable Electronic Devices Registry: Population, Device Utilization, and Outcomes
BACKGROUND The purpose of this study is to describe key elements, clinical outcomes, and potential uses of the Kaiser Permanente-Cardiac Device Registry. METHODS AND RESULTS This is a cohort study of implantable cardioverter defibrillators (ICD), pacemakers (PM), and cardiac resynchronization therapy (CRT) devices implanted between January 1, 2007 and December 31, 2013 by ≈400 physicians in 6 US geographical regions. Registry data variables, including patient characteristics, comorbidities, indication for procedures, complications, and revisions, were captured using the healthcare system's electronic medical record. Outcomes were identified using electronic screening algorithms and adjudicated via chart review. There were 11,924 ICDs, 33,519 PMs, 4,472 CRTs, and 66,067 leads registered. A higher proportion of devices were implanted in males: 75.1% (ICD), 55.0% (PM), and 66.7% (CRT), with mean patient ages of 63.2 years (ICD), 75.2 (PM), and 67.2 (CRT). The 30-day postoperative incidences of tamponade, hematoma, and pneumothorax were ≤0.3% (ICD), ≤0.6% (PM), and ≤0.4% (CRT). Device failures requiring revision occurred at a rate of 2.17% for ICDs, 0.85% for PMs, and 4.93% for CRTs, per 100 patient observation years. Superficial infection rates were <0.03% for all devices; deep infection rates were 0.6% (ICD), 0.5% (PM), and 1.0% (CRT). Results were used to monitor vendor-specific variations and were systematically shared with individual regions to address potential variations in outcomes and utilization and to assist with the management of device recalls. CONCLUSIONS The Kaiser Permanente-Cardiac Device Registry is a robust tool to monitor postprocedural patient outcomes and postmarket surveillance of implants and potentially change practice patterns.
Ontology learning from text: A look back and into the future
Ontologies are often viewed as the answer to the need for interoperable semantics in modern information systems. The explosion of textual information on the Read/Write Web, coupled with the increasing demand for ontologies to power the Semantic Web, has made (semi-)automatic ontology learning from text a very promising research area. This, together with the advanced state of related areas such as natural language processing, has fueled research into ontology learning over the past decade. This survey looks at how far we have come since the turn of the millennium and discusses the remaining challenges that will define the research directions in this area in the near future.
Joint Event Trigger Identification and Event Coreference Resolution with Structured Perceptron
Events and their coreference offer useful semantic and discourse resources. We show that the semantic and discourse aspects of events interact with each other. However, traditional approaches addressed event extraction and event coreference resolution either separately or sequentially, which limits their interactions. This paper proposes a document-level structured learning model that simultaneously identifies event triggers and resolves event coreference. We demonstrate that the joint model outperforms a pipelined model by 6.9 BLANC F1 and 1.8 CoNLL F1 points in event coreference resolution using a corpus in the biology domain.
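The learning algorithm named in the title follows the generic structured perceptron template sketched below; the joint decoder and feature function for triggers plus coreference chains are left as placeholders, since their document-level details are specific to the paper.

```python
from collections import defaultdict

def structured_perceptron(train_docs, feature_fn, decode, n_epochs=5):
    """Generic structured perceptron: decode the highest-scoring joint structure
    (here, event triggers plus coreference chains) and update weights on mistakes.

    train_docs: iterable of (doc, gold_structure) pairs
    feature_fn(doc, structure) -> dict mapping feature name to value
    decode(doc, weights) -> argmax structure under the current weights (placeholder)
    """
    weights = defaultdict(float)
    for _ in range(n_epochs):
        for doc, gold in train_docs:
            predicted = decode(doc, weights)
            if predicted != gold:
                for feat, value in feature_fn(doc, gold).items():
                    weights[feat] += value          # promote gold-structure features
                for feat, value in feature_fn(doc, predicted).items():
                    weights[feat] -= value          # demote predicted-structure features
    return dict(weights)
```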
Modeling User Concerns in the App Store: A Case Study on the Rise and Fall of Yik Yak
Mobile application (app) stores have lowered the barriers to app market entry, leading to an accelerated and unprecedented pace of mobile software production. To survive in such a highly competitive and vibrant market, release engineering decisions should be driven by a systematic analysis of the complex interplay between the user, system, and market components of the mobile app ecosystem. To demonstrate the feasibility and value of such analysis, in this paper, we present a case study on the rise and fall of Yik Yak, one of the most popular social networking apps at its peak. In particular, we identify and analyze the design decisions that led to the downfall of Yik Yak and track rival apps' attempts to take advantage of this failure. We further perform a systematic in-depth analysis to identify the main user concerns in the domain of anonymous social networking apps and model their relations to the core features of the domain. Such a model can be utilized by app developers to devise sustainable release engineering strategies that can address urgent user concerns and maintain market viability.
Development of a Funicular Flexible Crawler for Colonoscopy*
Recently, the number of colorectal cancer and polyp patients has been increasing. Colonoscopies are often used for the diagnosis and treatment of rectal and colon diseases. Passing an endoscope through the large intestine is an extremely difficult operation in conventional colonoscopy. Therefore, many robots have been developed recently to facilitate this operation. Nevertheless, they cannot move independently or quickly. We propose a simple and compact crawler mechanism for colonoscopy. It is suitable for propulsion through narrow spaces because it is driven by a geared motor arranged outside of the body through a flexible shaft. Through experimentation with a large intestine phantom, we confirmed that the proposed crawler can run quickly from the anus to the cecum. As described herein, this propulsion mechanism is applicable to self-traveling colonoscopy.
Improving blood pressure control in end stage renal disease through a supportive educative nursing intervention.
Hypertension in patients on hemodialysis (HD) contributes significantly to their morbidity and mortality. This study examined whether a supportive nursing intervention incorporating monitoring, goal setting, and reinforcement can improve blood pressure (BP) control in a chronic HD population. A randomized controlled design was used and 118 participants were recruited from six HD units in the Detroit metro area. The intervention consisted of (1) BP education sessions; (2) a 12-week intervention, including monitoring, goal setting, and reinforcement; and (3) a 30-day post-intervention follow-up period. Participants in the treatment were asked to monitor their BP, sodium, and fluid intake weekly for 12 weeks in weekly logs. BP, fluid and sodium logs were reviewed weekly with the researcher to determine if goals were met or not met. Reinforcement was given for goals met and problem solving offered when goals were not met. The control group received standard care. Both systolic and diastolic BPs were significantly decreased in the treatment group.
A Brief Survey on Taxonomy Construction Techniques
A taxonomy is a hierarchical structure consisting of hyponym-hypernym (is-a) relations. Although several manually built taxonomies are available online, automatic taxonomy construction from text corpora has received considerable interest, and many related methods have been proposed in recent years. In this survey, we go through some important techniques in the taxonomy construction framework. We also introduce some new research directions on taxonomy construction that are worth exploring in the future.
Vasoconstrictor effect of the angiotensin-converting enzyme-resistant, chymase-specific substrate [Pro(11)(D)-Ala(12)] angiotensin I in human dorsal hand veins: in vivo demonstration of non-ACE production of angiotensin II in humans.
BACKGROUND [Pro(11)(D)-Ala(12)] angiotensin I is an ACE-resistant substrate specific for chymase. We used this peptide to determine whether a functionally significant non-ACE angiotensin (Ang) II-generating pathway exists in human dorsal hand veins. METHODS AND RESULTS Using a modified Aellig technique, we studied the response to Ang I and [Pro(11)(D)-Ala(12)] Ang I in dorsal hand veins in vivo in patients with coronary heart disease. We measured the venoconstrictor effect of each peptide given before and after a 6.25-mg oral dose of the ACE inhibitor captopril or matching placebo. Placebo or captopril was given in a double-blind, randomized fashion. Ang I induced a mean ± SEM venoconstrictor response of 45 ± 11%, 40 ± 10%, 55 ± 8%, and 4 ± 4% before placebo, after placebo, before captopril, and after captopril, respectively. Hence, the response to Ang I was reproducible and was reduced significantly only after treatment with captopril (P=0.002). [Pro(11)(D)-Ala(12)] Ang I induced a mean venoconstrictor response of 42 ± 9%, 49 ± 9%, 48 ± 10%, and 54 ± 11% before placebo, after placebo, before captopril, and after captopril, respectively. Hence, captopril had no significant effect on the response to [Pro(11)(D)-Ala(12)] Ang I. CONCLUSIONS We have demonstrated that [Pro(11)(D)-Ala(12)] Ang I is able to induce venoconstriction in humans in vivo. With this specific pharmacological probe, we have shown that a non-ACE pathway capable of generating Ang II exists in human veins in vivo and is potentially functionally important. This pathway is likely to involve the enzyme chymase.
Interpretation of Concerning Comfort of Office Building Design in Cold Regions
This paper interprets the comfort of office buildings in cold regions from the viewpoint of architecture, with the aim of helping architects build people-centered thinking into architectural design. The analysis starts from the relation between comfort and architectural design and, through building orientation, building envelope structure, indoor fresh air supply, and other elements, puts forward design directions for office buildings in cold regions.
Error and attack tolerance of complex networks
Communication/transportation systems are often subjected to failures and attacks. Here we represent such systems as networks and we study their ability to resist failures (attacks) simulated as the breakdown of a group of nodes of the network chosen at random (chosen according to degree or load). We consider and compare the results for two different network topologies: the Erdős–Rényi random graph and the Barabási–Albert scale-free network. We also discuss briefly a dynamical model recently proposed to take into account the dynamical redistribution of loads after the initial damage of a single node of the network.
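The qualitative experiment is easy to reproduce with networkx: compare how the giant component of an Erdős–Rényi graph and a Barabási–Albert graph shrinks under random failures versus degree-targeted attacks. Graph sizes and removal fractions below are illustrative choices, not the paper's parameters.

```python
import random
import networkx as nx

def giant_component_fraction(g):
    """Fraction of remaining nodes that belong to the largest connected component."""
    if g.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(g)) / g.number_of_nodes()

def remove_nodes(g, fraction, targeted):
    """Remove a fraction of nodes, either at random (failure) or by descending degree
    (attack), and report the surviving giant-component fraction."""
    g = g.copy()
    n_remove = int(fraction * g.number_of_nodes())
    if targeted:
        victims = [n for n, _ in sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:n_remove]]
    else:
        victims = random.sample(list(g.nodes), n_remove)
    g.remove_nodes_from(victims)
    return giant_component_fraction(g)

er = nx.erdos_renyi_graph(2000, 4 / 2000)      # random graph, mean degree ~ 4
ba = nx.barabasi_albert_graph(2000, 2)         # scale-free graph, mean degree ~ 4
for name, g in [("ER", er), ("BA", ba)]:
    print(name,
          "after random failures:", round(remove_nodes(g, 0.05, targeted=False), 2),
          "after targeted attack:", round(remove_nodes(g, 0.05, targeted=True), 2))
```

Typically the scale-free graph survives random failures better but degrades faster under the degree-targeted attack, which is the contrast these robustness studies are built around.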
A Three-Phase Current-Fed Push–Pull DC–DC Converter
In this paper, a new three-phase current-fed push-pull DC-DC converter is proposed. This converter uses a high-frequency three-phase transformer that provides galvanic isolation between the power source and the load. The three active switches are connected to the same reference, which simplifies the gate drive circuitry. Reduction of the input current ripple and the output voltage ripple is achieved by means of an inductor and a capacitor, whose volumes are smaller than in equivalent single-phase topologies. The three-phase DC-DC conversion also helps in loss distribution, allowing the use of lower cost switches. These characteristics make this converter suitable for applications where low-voltage power sources are used and the associated currents are high, such as in fuel cells, photovoltaic arrays, and batteries. The theoretical analysis, a simplified design example, and the experimental results for a 1-kW prototype will be presented for two operation regions. The prototype was designed for a switching frequency of 40 kHz, an input voltage of 120 V, and an output voltage of 400 V.
A real-time method for depth enhanced visual odometry
Visual odometry can be augmented by depth information such as that provided by RGB-D cameras, or by lidars associated with cameras. However, such depth information can be limited by the sensors, leaving large areas in the visual images where depth is unavailable. Here, we propose a method to utilize depth, even if only sparsely available, in recovery of camera motion. In addition, the method utilizes depth recovered by structure from motion using the previously estimated motion, as well as salient visual features for which depth is unavailable. Therefore, the method is able to extend RGB-D visual odometry to large scale, open environments where depth often cannot be sufficiently acquired. The core of our method is a bundle adjustment step that refines the motion estimates in parallel by processing a sequence of images in a batch optimization. We have evaluated our method in three sensor setups, one using an RGB-D camera, and two using combinations of a camera and a 3D lidar. Our method is rated #4 on the KITTI odometry benchmark irrespective of sensing modality, compared to stereo visual odometry methods which retrieve depth by triangulation. The resulting average position error is 1.14% of the distance traveled.
Tag Clouds: Data Analysis Tool or Social Signaller?
We examine the recent information visualization phenomenon known as tag clouds, which are an interesting combination of data visualization, web design element, and social marker. Using qualitative methods, we find evidence that those who use tag clouds do so primarily because they are perceived as having an inherently social or personal component, in that they suggest what a person or a group of people is doing or is interested in, and to some degree how that changes over time; they are visually dynamic and thus suggest activity; they are a compact alternative to a long list; they signal that a site has tags; and they are perceived as being fun, popular, and/or hip. The primary reasons people object to tag clouds are their visual aesthetics, their questionable usability, their popularity among certain design circles, and what is perceived as a bias towards popular ideas and the downgrading of alternative views.
Compile-time dynamic voltage scaling settings: opportunities and limits
With power-related concerns becoming dominant aspects of hardware and software design, significant research effort has been devoted towards system power minimization. Among run-time power-management techniques, dynamic voltage scaling (DVS) has emerged as an important approach, with the ability to provide significant power savings. DVS exploits the ability to control the power consumption by varying a processor's supply voltage (V) and clock frequency (f). DVS controls energy by scheduling different parts of the computation to different (V, f) pairs; the goal is to minimize energy while meeting performance needs. Although processors like the Intel XScale and Transmeta Crusoe allow software DVS control, such control has thus far largely been used at the process/task level under operating system control. This is mainly because the energy and time overhead for switching DVS modes is considered too large and difficult to manage within a single program. In this paper we explore the opportunities and limits of compile-time DVS scheduling. We derive an analytical model for the maximum energy savings that can be obtained using DVS given a few known program and processor parameters. We use this model to determine scenarios where energy consumption benefits from compile-time DVS and those where there is no benefit. The model helps us extrapolate the benefits of compile-time DVS into the future as processor parameters change. We then examine how much of these predicted benefits can actually be achieved through optimal settings of DVS modes. This is done by extending the existing mixed-integer linear program (MILP) formulation for this problem by accurately accounting for DVS energy switching overhead, by providing finer-grained control on settings, and by considering multiple data categories in the optimization. Overall, this research provides a comprehensive view of compile-time DVS management, providing both practical techniques for its immediate deployment as well as theoretical bounds for use into the future.
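The energy trade-off that compile-time DVS exploits can be illustrated with the textbook CMOS model in which dynamic energy per cycle scales with V² while the attainable clock frequency falls as the supply voltage drops. The alpha-power delay model and all voltages below are illustrative assumptions, not the analytical model derived in the paper.

```python
def energy_and_time(cycles, voltage, v_nominal=1.5, f_nominal=1.0e9, v_threshold=0.5):
    """Toy CMOS model: dynamic energy per cycle scales with V^2, and the achievable
    clock frequency scales roughly with (V - Vt)^2 / V (alpha-power law, alpha = 2)."""
    energy = cycles * (voltage ** 2)                 # arbitrary energy units
    f = f_nominal * (((voltage - v_threshold) ** 2 / voltage)
                     / ((v_nominal - v_threshold) ** 2 / v_nominal))
    return energy, cycles / f

# A slack-tolerant (e.g. memory-bound) program region: run it at 1.0 V instead of 1.5 V.
e_hi, t_hi = energy_and_time(1e8, 1.5)
e_lo, t_lo = energy_and_time(1e8, 1.0)
print(f"energy saved: {100 * (1 - e_lo / e_hi):.0f}%, slowdown: {t_lo / t_hi:.2f}x")
```

Scheduling such a region at the lower (V, f) pair trades a local slowdown for a roughly quadratic energy reduction, which is the kind of trade-off the MILP formulation weighs against the mode-switching overheads.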
An Anatomic Basis for Treatment of Retinal Artery Occlusions Caused by Hyaluronic Acid Injections: A Cadaveric Study
In ophthalmic artery occlusion caused by hyaluronic acid injection, the globe may worsen with direct intravitreal administration of hyaluronidase. Retrograde cannulation of the ophthalmic artery may have the potential to restore retinal perfusion and minimize the risk of phthisis bulbi. This study investigated the feasibility of cannulation of the ophthalmic artery for retrograde injection. In 10 right orbits of 10 cadavers, cannulation and ink injection of the supraorbital artery in the supraorbital approach were performed under surgical loupe magnification. In 10 left orbits, the medial upper lid was incised in a curved fashion to retrieve the retroseptal ophthalmic artery for cannulation by a transorbital approach. Procedural times were recorded. Diameters of related arteries were bilaterally measured for comparison. Dissections to verify dye distribution were performed. Cannulation was successfully performed in 100% and 90% of the transorbital and the supraorbital approaches, respectively. The transorbital approach was more practical to perform than the supraorbital approach, with a trend toward a shorter procedure time (18.4 ± 3.8 vs. 21.9 ± 5.0 min, p = 0.74). The postseptal ophthalmic artery exhibited a tortuous course and was easily retrieved and cannulated, with a larger diameter compared to the supraorbital artery (1.25 ± 0.23 vs. 0.84 ± 0.16 mm, p = 0.000). The transorbital approach is more practical than the supraorbital approach for retrograde cannulation of the ophthalmic artery. This study provides a reliable access route for hyaluronidase injection into the ophthalmic artery to salvage central retinal artery occlusion following hyaluronic acid injection.
A randomized controlled trial of cell salvage in routine cardiac surgery.
BACKGROUND Previous trials have indicated that cell salvage may reduce allogeneic blood transfusion during cardiac surgery, but these studies have limitations, including inconsistent use of other blood transfusion-sparing strategies. We designed a randomized controlled trial to determine whether routine cell salvage for elective uncomplicated cardiac surgery reduces blood transfusion and is cost effective in the setting of a rigorous transfusion protocol and routine administration of antifibrinolytics. METHODS Two hundred thirteen patients presenting for first-time coronary artery bypass grafting and/or cardiac valve surgery were prospectively randomized to control or cell salvage groups. The latter group had blood aspirated during surgery and mediastinal drainage from the first 6 h after surgery processed in a cell saver device and autotransfused. All patients received tranexamic acid and were subjected to an algorithm for red blood cell and hemostatic blood factor transfusion. RESULTS There was no difference between the two groups in the proportion of patients exposed to allogeneic blood (32% in both groups, relative risk 1.0, P = 0.89). At current blood product and cell saver prices, the use of cell salvage increased the cost per patient by a minimum of $103. When patients who had mediastinal re-exploration for bleeding were excluded (as planned in the protocol), significantly fewer units of allogeneic red blood cells were transfused in the cell salvage group compared with the control group (65 vs 100 U, relative risk 0.71, P = 0.04). CONCLUSION In patients undergoing routine first-time cardiac surgery in an institution with a rigorous blood conservation program, the routine use of cell salvage does not further reduce the proportion of patients exposed to allogeneic blood transfusion. However, patients who do not have excessive bleeding after surgery receive significantly fewer units of blood with cell salvage. Although the use of cell salvage may reduce the demand for blood products during cardiac surgery, this comes at an increased cost to the institution.
(DEPSCOR FY09) Obfuscation and Deobfuscation of Intent of Computer Programs
This research aimed at developing a theoretical framework to predict the next obfuscation (or deobfuscation) move of the adversary, with the intent of making cyber defense proactive. More specifically, the goal was to understand the relationship between obfuscation and deobfuscation techniques employed in malware offense and defense. The strategy was to build upon previous work of Giacobazzi and Dalla Preda on modeling obfuscation and deobfuscation as abstract interpretations. It furthers that effort by developing an analytical model of the best obfuscation with respect to a deobfuscator. In addition, this research aimed at developing cost models for obfuscation and deobfuscations. The key findings of this research include: a theoretical model of computing the best obfuscation for a deobfuscator, a method for context-sensitive analysis of obfuscated code, a method for learning obfuscation transformations used by a metamorphic engine, several insights into the use of machine learning in deobfuscation, and game-theoretic models of certain scenarios of offense-defense games in software protection.
Pregnancy success following surgical correction of imperforate hymen and complete transverse vaginal septum.
Pregnancy success was evaluated in 48 women following surgical correction of a vaginal obstruction due to imperforate hymen (N = 22) or to a complete transverse vaginal septum (N = 26). Pregnancy success was more likely to occur following surgical correction of imperforate hymen (P less than .05). Patients with a complete transverse septum in the middle or upper vagina were less likely to conceive than were patients with a septum in the lower vagina. Prompt diagnosis and surgical correction to drain accumulated blood may preserve fertility, possibly through the prevention of endometriosis.
Automatic Clutter-Canceler for Microwave Life-Detection Systems
A microprocessor-controlled automatic clutter-cancellation subsystem, consisting of a programmable microwave attenuator and a programmable microwave phase-shifter controlled by a microprocessor-based control unit, has been developed for a microwave life-detection system (L-band 2 GHz or X-band 10 GHz) which can remotely sense the breathing and heartbeat movements of living subjects. This automatic clutter-cancellation subsystem has drastically improved the very slow process of manual clutter-cancellation adjustment in our previous microwave system. This is very important for some potential applications, including the location of earthquake- or avalanche-trapped victims through rubble. A series of experiments have been conducted to demonstrate the applicability of this microwave life-detection system for rescue purposes. The automatic clutter-canceler may also have a potential application in some CW radar systems.
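A minimal numerical sketch of the cancellation principle described above, assuming a simplified baseband I/Q model in which the stationary clutter is a single complex phasor and the chest motion is a small 0.3 Hz term; all values are illustrative, not the settings of the actual hardware loop:

```python
import numpy as np

fs, dur = 100.0, 10.0                      # sample rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)

clutter = 0.9 * np.exp(1j * 1.2)           # large static reflection (phasor)
breath = 0.01 * np.exp(1j * 2 * np.pi * 0.3 * t)   # hypothetical 0.3 Hz chest-motion term
rx = clutter + breath                      # baseband I/Q seen by the receiver

# "Calibration": average the received signal to estimate the static clutter,
# then program the canceller (attenuator + phase shifter) to inject the
# equal-and-opposite signal.
clutter_est = rx.mean()
atten = np.abs(clutter_est)                # attenuator setting
phase = np.angle(clutter_est) + np.pi      # phase-shifter setting (opposite phase)
cancel = atten * np.exp(1j * phase)

residual = rx + cancel                     # what the detector sees after cancellation
print("clutter power before/after:",
      np.abs(clutter) ** 2, np.abs(residual.mean()) ** 2)
```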
Clinical outcomes in a cohort of Colombian patients with rheumatoid arthritis treated with Etanar, a new biologic type rhTNFR:Fc.
OBJECTIVES To evaluate the clinical response at 12 months in a cohort of patients with rheumatoid arthritis treated with Etanar (rhTNFR:Fc), and to register the occurrence of adverse effects. METHODS This is a multicentre observational cohort study. It included patients over 18 years of age with a diagnosis of active rheumatoid arthritis for whom the treating physician had begun a treatment scheme of 25 mg of subcutaneous etanercept (Etanar® 25 mg: biologic type rhTNFR:Fc), twice per week. Follow-up was done over 12 months, with assessments at weeks 12, 24, 36 and 48. Evaluated outcomes included tender joint count, swollen joint count, ACR20, ACR50, ACR70, HAQ and DAS28. RESULTS One hundred and five (105) subjects entered the cohort. The median tender and swollen joint counts decreased from 19 and 14, respectively, at onset to 1 at month 12. By month 12, 90.5% of the subjects reached ACR20, 86% ACR50, and 65% ACR70. The median DAS28 went from 4.7 to 2, and the median HAQ went from 1.3 to 0.2. The rate of adverse effects was 14 per 100 person-years. No serious adverse effects were reported. The most frequent were pruritus (5 cases) and rhinitis (3 cases). CONCLUSIONS After a year of follow-up of a patient cohort treated with etanercept 25 mg twice per week, significant clinical results were observed, with adequate disease control in a high percentage of patients and an adequate level of safety.
Towards a theory of user judgment of aesthetics and user interface quality
The article introduces a framework for users' design quality judgments based on Adaptive Decision Making theory. The framework describes judgment on quality attributes (usability, content/functionality, aesthetics, customisation and engagement) with dependencies on decision making arising from the user's background, task and context. The framework is tested and refined by three experimental studies. The first two assessed judgment of quality attributes of websites with similar content but radically different designs for aesthetics and engagement. Halo effects were demonstrated whereby attribution of good quality on one attribute positively influenced judgment on another, even in the face of objective evidence to the contrary (e.g., usability errors). Users' judgment was also shown to be susceptible to framing effects of the task and their background. These appear to change the importance order of the quality attributes; hence, quality assessment of a design appears to be very context dependent. The third study assessed the influence of customisation by experiments on mobile services applications, and demonstrated that evaluation of customisation depends on the users' needs and motivation. The results are discussed in the context of the literature on aesthetic judgment, user experience and trade-offs between usability and hedonic/ludic design qualities.
Visual tracking with online Multiple Instance Learning
In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called “tracking by detection” have been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrades the classifier and can cause further drift. In this paper we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance.
Adapting the Sample Size in Particle Filters Through KLD-Sampling
Over the last years, particle filters have been applied with great success to a variety of state estimation problems. In this paper we present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets during the estimation process. The key idea of the KLD-sampling method is to bound the approximation error introduced by the sample-based representation of the particle filter. The name KLD-sampling is due to the fact that we measure the approximation error by the Kullback-Leibler distance. Our adaptation approach chooses a small number of samples if the density is focused on a small part of the state space, and it chooses a large number of samples if the state uncertainty is high. Both the implementation and computation overhead of this approach are small. Extensive experiments using mobile robot localization as a test application show that our approach yields drastic improvements over particle filters with fixed sample set sizes and over a previously introduced adaptation technique.
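A minimal sketch of the sample-size bound used by KLD-sampling, where k is the number of histogram bins currently occupied by particles; the error bound and normal quantile below are illustrative choices, not values prescribed by the paper:

```python
import math

def kld_sample_size(k, epsilon=0.05, z_quantile=2.33):
    """Number of particles so that, with probability about 1-delta, the
    Kullback-Leibler distance between the sample-based estimate and the true
    posterior stays below epsilon; k is the number of occupied histogram bins
    and z_quantile is the (1-delta) quantile of the standard normal
    (2.33 corresponds to delta of roughly 0.01)."""
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * epsilon)
                         * (1.0 - a + math.sqrt(a) * z_quantile) ** 3))

# During resampling one would draw particles one at a time, insert each into a
# coarse grid over the state space, update k = number of occupied bins, and
# stop as soon as the number of drawn particles exceeds kld_sample_size(k).
for k in (2, 10, 50, 200):
    print(k, kld_sample_size(k))
```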
Midge: Generating Image Descriptions From Computer Vision Detections
This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.
Analysis of S-shape Microstrip Patch Antenna for Bluetooth application
In this paper, an S-shape microstrip patch antenna is investigated for wideband operation using the circuit theory concept based on the modal expansion cavity model. It is found that the antenna resonates at 2.62 GHz. The bandwidth of the S-shape microstrip patch antenna is 21.62% (theoretical) and 20.49% (simulated). The theoretical results are compared with IE3D simulation as well as reported experimental results, and they are in close agreement.
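For context, a sketch of the standard transmission-line-model design equations for a plain rectangular patch (not the S-shape modal-expansion analysis of the paper); the 2.62 GHz resonance is taken from the abstract, while the FR-4-like substrate values are assumptions:

```python
import math

def rectangular_patch_dimensions(f0, eps_r, h):
    """Textbook transmission-line-model equations for a rectangular microstrip
    patch: returns patch width W and length L in metres for resonance
    frequency f0 (Hz), substrate permittivity eps_r and substrate height h (m)."""
    c = 3e8
    W = c / (2 * f0) * math.sqrt(2.0 / (eps_r + 1.0))
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)
                      / ((eps_eff - 0.258) * (W / h + 0.8)))
    L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL
    return W, L

W, L = rectangular_patch_dimensions(2.62e9, eps_r=4.4, h=1.6e-3)  # assumed FR-4 substrate
print(f"W = {W*1e3:.1f} mm, L = {L*1e3:.1f} mm")
```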
Subpixel Multiplexing Method for 3D Lenticular Display
Three-dimensional display technologies based on a lenticular sheet overlaid onto a spatial light modulator screen have been studied for decades. However, the quality of these displays still suffers from an insufficient number of views and zone-jumping between views. We present a subpixel multiplexing method in this paper. We propose to split mapping and alignment into two separate tasks, processed in parallel threads. The alignment thread deals with the task of computing the geometrical relationship between the lenticular sheet and the Liquid Crystal Display (LCD) panel for multiplexing. Afterwards, we conduct the multiplexing procedure through a box-constrained integer least squares algorithm. After multiplexing, each subpixel aggregated on the lenticular sheet is a multiplexed one that mixes a number of subpixels from a local region of the LCD plane. As a result, we multiplex subpixels on the synthetic image for up to 27 views at a resolution of 1080 × 1920, and the rendering speed is 73.34 frames per second (fps).
A dynamic code coverage approach to maximize fault localization efficiency
Spectrum-based fault localization is amongst the most effective techniques for automatic fault localization. However, abstractions of program execution traces, one of the required inputs for this technique, require instrumentation of the software under test at a statement level of granularity in order to compute a list of potential faulty statements. This introduces a considerable overhead in the fault localization process, which can even become prohibitive in, e.g., resource constrained environments. To counter this problem, we propose a new approach, coined Dynamic Code Coverage (DCC), aimed at reducing this instrumentation overhead. This technique, by means of using coarser instrumentation, starts by analyzing coverage traces for large components of the system under test. It then progressively increases the instrumentation detail for faulty components, until the statement level of detail is reached. To assess the validity of our proposed approach, an empirical evaluation was performed, injecting faults in six real-world software projects. The empirical evaluation demonstrates that the dynamic code coverage approach reduces the execution overhead that exists in spectrum-based fault localization, and even presents a more concise potential fault ranking to the user. We have observed execution time reductions of 27% on average and diagnostic report size reductions of 77% on average.
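A small sketch of the underlying spectrum-based step, using the common Ochiai suspiciousness metric (the paper does not prescribe this particular formula) and hypothetical component-level coverage data; a DCC-style loop would then re-instrument only the top-ranked components at statement granularity:

```python
import math

def ochiai(executed_failed, executed_passed, total_failed):
    """Ochiai suspiciousness, a widely used spectrum-based fault-localization metric."""
    if executed_failed == 0 or total_failed == 0:
        return 0.0
    return executed_failed / math.sqrt(
        total_failed * (executed_failed + executed_passed))

# Hypothetical coverage spectra at component (coarse) granularity:
# component -> (times executed by failing tests, times executed by passing tests)
coarse = {"parser": (4, 10), "optimizer": (5, 2), "backend": (0, 9)}
total_failed = 5

ranked = sorted(coarse, key=lambda c: ochiai(*coarse[c], total_failed),
                reverse=True)
print("components to re-instrument at statement level:", ranked[:1])
# A dynamic-coverage loop would now instrument only the top-ranked component(s)
# at statement granularity, re-run the tests, and recurse until a ranked list
# of suspicious statements is produced.
```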
One pixel attack for fooling deep neural networks
Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution. It requires less adversarial information and can fool more types of networks. The results show that 68.36% of the natural images in the CIFAR-10 test dataset and 41.22% of the ImageNet (ILSVRC 2012) validation images can be perturbed to at least one target class by modifying just one pixel, with 73.22% and 5.52% confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extremely limited scenario, showing that current DNNs are also vulnerable to such low-dimension attacks.
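A toy sketch of the search strategy, assuming a `predict` callable that returns class probabilities; the candidate encoding (x, y, r, g, b) follows the paper, but the simplified differential-evolution loop (no crossover) and the stand-in classifier are illustrative only:

```python
import numpy as np

def one_pixel_attack(image, predict, target, iters=40, pop=50, seed=0):
    """Search for a single-pixel change that raises the probability of
    `target`.  `image` is an HxWx3 float array in [0, 1]; `predict(img)`
    returns class probabilities; candidates are encoded as (x, y, r, g, b)."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    lo = np.array([0, 0, 0, 0, 0], float)
    hi = np.array([h - 1, w - 1, 1, 1, 1], float)
    popu = rng.uniform(lo, hi, size=(pop, 5))

    def apply(cand):
        img = image.copy()
        img[int(round(cand[0])), int(round(cand[1]))] = cand[2:]
        return img

    def fitness(cand):
        return predict(apply(cand))[target]

    scores = np.array([fitness(c) for c in popu])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = popu[rng.choice(pop, 3, replace=False)]
            trial = np.clip(a + 0.5 * (b - c), lo, hi)   # simplified DE/rand/1 mutation
            s = fitness(trial)
            if s > scores[i]:                            # greedy selection
                popu[i], scores[i] = trial, s
    best = popu[int(np.argmax(scores))]
    return apply(best), float(scores.max())

if __name__ == "__main__":
    img = np.zeros((8, 8, 3))
    def toy_predict(im):            # stand-in "classifier": class-1 prob = mean red channel
        p1 = im[..., 0].mean()
        return np.array([1 - p1, p1])
    adv, prob = one_pixel_attack(img, toy_predict, target=1)
    print("target-class probability after attack:", prob)
```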
Modern code reviews in open-source projects: which problems do they fix?
Code review is the manual assessment of source code by humans, mainly intended to identify defects and quality problems. Modern Code Review (MCR), a lightweight variant of the code inspections investigated since the 1970s, prevails today both in industry and in open-source software (OSS) systems. The objective of this paper is to increase our understanding of the practical benefits that the MCR process produces on reviewed source code. To that end, we empirically explore the problems fixed through MCR in OSS systems. We manually classified over 1,400 changes taking place in reviewed code from two OSS projects into a validated categorization scheme. Surprisingly, results show that the types of changes due to the MCR process in OSS are strikingly similar to those in the industrial and academic systems from the literature, featuring a similar 75:25 ratio of maintainability-related to functional problems. We also reveal that 7–35% of review comments are discarded and that 10–22% of the changes are not triggered by an explicit review comment. Patterns emerged in the review data; we investigated them, revealing the technical factors that influence the number of changes due to the MCR process. We found that bug-fixing tasks lead to fewer changes and tasks with more altered files and a higher code churn have more changes. Contrary to intuition, the identity of the reviewer had no impact on the number of changes.
The Jacob Cycle in Angels in America: Re-Performing Scripture Queerly
As theater is an art form that many times juxtaposes texts with the performing body, one of its main contributions to Jewish culture could be in its “corporeal reading” of Jewish textual heritage. Tony Kushner’s celebrated play Angels in America proposes such a reading of Jewish texts—mainly the biblical Jacob cycle and the Jewish tradition regarding angels—as a queer re-performance that emphasizes bodily aspects such as desire, sexuality, and disease. Analyzing the play alongside Jewish classical texts from the midrashic and exegetic traditions, this paper suggests that the re-performance of Jewish texts as queer repertoire in Angels in America raises questions about the politics of the body in contemporary culture while at the same time opening up new horizons within textual tradition itself, exploring new meanings within these texts—as well as the efficacy of their performance.
Mechanism of meningeal invasion by Neisseria meningitidis
The blood-cerebrospinal fluid barrier physiologically protects the meningeal spaces from blood-borne bacterial pathogens, owing to the existence of specialized junctional interendothelial complexes. Few bacterial pathogens are able to reach the subarachnoidal space, and among those, Neisseria meningitidis is the one that achieves this task most consistently when present in the bloodstream. Meningeal invasion is a consequence of a tight interaction of meningococci with brain endothelial cells. This interaction, mediated by the type IV pili, is responsible for the formation of microcolonies on the apical surface of the cells. It is followed by the activation of signaling pathways in the host cells leading to the formation of endothelial docking structures resembling those elicited by the interaction of leukocytes with endothelial cells during extravasation. The consequence of these bacterium-induced signaling events is the recruitment of intercellular junction components into the docking structure and the subsequent opening of the intercellular junctions.
Massive machine-type communications in 5g: physical and MAC-layer solutions
MTC are expected to play an essential role within future 5G systems. In the FP7 project METIS, MTC has been further classified into mMTC and uMTC. While mMTC is about wireless connectivity to tens of billions of machine-type terminals, uMTC is about availability, low latency, and high reliability. The main challenge in mMTC is scalable and efficient connectivity for a massive number of devices sending very short packets, which is not handled adequately by cellular systems designed for human-type communications. Furthermore, mMTC solutions need to enable wide-area coverage and deep indoor penetration while having low cost and being energy-efficient. In this article, we introduce the PHY- and MAC-layer solutions developed within METIS to address this challenge.
Partial external biliary diversion in children with progressive familial intrahepatic cholestasis and Alagille disease.
BACKGROUND Partial external biliary diversion (PEBD) is a promising treatment for children with progressive familial intrahepatic cholestasis (PFIC) and Alagille disease. Little is known about long-term outcomes. PATIENTS AND METHODS A retrospective chart review of all patients undergoing PEBD in the University Medical Centre of Groningen (UMCG). RESULTS Between 2000 and 2005, PEBD was performed on 14 children with severe pruritus (PFIC 11, mean age 5.3 +/- 4.4 years; Alagille 3, mean age 7.4 +/- 4.2 years). Stature was <-2 standard deviation score (SDS) in 50%. Median preoperative serum bile salt concentration was 318 micromol/L (range 23-527 micromol/L). Twenty-nine percent had severe liver fibrosis and 71% had mild or moderate fibrosis. Median follow-up was 3.1 years (range 2.0-5.7 years). One patient (7%) underwent a liver transplantation at 3.2 years post-PEBD. Two years postoperatively, 50% were without pruritus and 21% had mild pruritus. In 29%, pruritus had not diminished; 3 of them had severe fibrosis preoperatively. In patients with mild or moderate fibrosis, PEBD decreased serum bile salts (105 micromol/L [range 8-269 micromol/L] 2 years postoperatively). Bile salts did not decrease in the patients with severe fibrosis. Two years after PEBD, 27% had a stature below -2 SDS. CONCLUSIONS At median follow-up of 3.1 years after PEBD, pruritus has been relieved in 75%. Bile salts level and growth are improved in most patients. Longer follow-up is needed to determine whether PEBD can postpone or avoid the demand for liver transplantation.
A simpler model of software readability
Software readability is a property that influences how easily a given piece of code can be read and understood. Since readability can affect maintainability, quality, etc., programmers are very concerned about the readability of code. If automatic readability checkers could be built, they could be integrated into development tool-chains, and thus continually inform developers about the readability level of the code. Unfortunately, readability is a subjective code property, and not amenable to direct automated measurement. In a recently published study, Buse et al. asked 100 participants to rate code snippets by readability, yielding arguably reliable mean readability scores of each snippet; they then built a fairly complex predictive model for these mean scores using a large, diverse set of directly measurable source code properties. We build on this work: we present a simple, intuitive theory of readability, based on size and code entropy, and show how this theory leads to a much sparser, yet statistically significant, model of the mean readability scores produced in Buse's studies. Our model uses well-known size metrics and Halstead metrics, which are easily extracted using a variety of tools. We argue that this approach provides a more theoretically well-founded, practically usable, approach to readability measurement.
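A small sketch of size- and entropy-style features of the kind such a sparse model builds on; the tokenization and the Halstead-style volume below are rough approximations, not Buse et al.'s feature definitions:

```python
import math
import re
from collections import Counter

def readability_features(snippet: str):
    """Token-level size and entropy features for a code snippet."""
    lines = [l for l in snippet.splitlines() if l.strip()]
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", snippet)
    counts = Counter(tokens)
    n_total, n_distinct = len(tokens), len(counts)

    # Shannon entropy of the token distribution (bits per token).
    entropy = -sum((c / n_total) * math.log2(c / n_total)
                   for c in counts.values()) if n_total else 0.0
    # Halstead-style "volume": total tokens * log2(vocabulary size).
    volume = n_total * math.log2(n_distinct) if n_distinct > 1 else 0.0
    return {"lines": len(lines), "tokens": n_total,
            "entropy": entropy, "halstead_volume": volume}

print(readability_features("for (int i = 0; i < n; i++) sum += a[i];"))
```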
A Context-Aware User-Item Representation Learning for Item Recommendation
Both reviews and user-item interactions (i.e., rating scores) have been widely adopted for user rating prediction. However, these existing techniques mainly extract the latent representations for users and items in an independent and static manner. That is, a single static feature vector is derived to encode user preference without considering the particular characteristics of each candidate item. We argue that this static encoding scheme is incapable of fully capturing users’ preferences, because users usually exhibit different preferences when interacting with different items. In this article, we propose a novel context-aware user-item representation learning model for rating prediction, named CARL. CARL derives a joint representation for a given user-item pair based on their individual latent features and latent feature interactions. Then, CARL adopts Factorization Machines to further model higher-order feature interactions on the basis of the user-item pair for rating prediction. Specifically, two separate learning components are devised in CARL to exploit review data and interaction data, respectively: review-based feature learning and interaction-based feature learning. In the review-based learning component, with convolution operations and an attention mechanism, the pair-based relevant features for the given user-item pair are extracted by jointly considering their corresponding reviews. However, these features are only review-driven and may not be comprehensive. Hence, an interaction-based learning component further extracts complementary features from interaction data alone, also on the basis of user-item pairs. The final rating score is then derived with a dynamic linear fusion mechanism. Experiments on seven real-world datasets show that CARL achieves significantly better rating prediction accuracy than existing state-of-the-art alternatives. Also, with the attention mechanism, we show that the pair-based relevant information (i.e., context-aware information) in reviews can be highlighted to interpret the rating prediction for different user-item pairs.
Rethinking Access Control and Authentication for the Home Internet of Things (IoT)
Computing is transitioning from single-user devices to the Internet of Things (IoT), in which multiple users with complex social relationships interact with a single device. Currently deployed techniques fail to provide usable access-control specification or authentication in such settings. In this paper, we begin reenvisioning access control and authentication for the home IoT. We propose that access control focus on IoT capabilities (i.e., certain actions that devices can perform), rather than on a per-device granularity. In a 425-participant online user study, we find stark differences in participants’ desired access-control policies for different capabilities within a single device, as well as based on who is trying to use that capability. From these desired policies, we identify likely candidates for default policies. We also pinpoint necessary primitives for specifying more complex, yet desired, access-control policies. These primitives range from the time of day to the current location of users. Finally, we discuss the degree to which different authentication methods potentially support desired policies.
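A minimal sketch of capability-level (rather than per-device) access control with one contextual primitive; the roles, capabilities, and default decisions below are illustrative stand-ins, not the defaults derived from the user study:

```python
# Illustrative default policy for a home IoT hub: decisions are attached to
# (role, capability) pairs rather than to whole devices.
DEFAULT_POLICY = {
    ("adult",   "unlock_front_door"): "allow",
    ("child",   "unlock_front_door"): "deny",
    ("visitor", "unlock_front_door"): "deny",
    ("adult",   "view_camera"):       "allow",
    ("child",   "view_camera"):       "ask_owner",
}

def authorize(role, capability, context=None, policy=DEFAULT_POLICY):
    """Return the decision for one (role, capability) request; `context`
    (e.g. time of day) illustrates the kind of richer policy primitive the
    study points to."""
    decision = policy.get((role, capability), "deny")
    if context and decision == "allow" and context.get("hour") is not None:
        # Example contextual primitive: night-time unlocking requires confirmation.
        if capability == "unlock_front_door" and not (6 <= context["hour"] <= 22):
            decision = "ask_owner"
    return decision

print(authorize("child", "view_camera"))
print(authorize("adult", "unlock_front_door", {"hour": 3}))
```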
Effect of annealing time, weight pressure and cobalt doping on the electrical and magnetic behavior of barium titanate
BaTi0.5Co0.5O3 (BTCO) nanoparticles were prepared by the solid-state reaction technique using different starting materials, and the microstructure was examined by XRD, FESEM, BDS and VSM. X-ray diffraction and electron diffraction patterns showed that the nanoparticles were the tetragonal BTCO phase. The BTCO nanoparticles prepared from as-prepared titanium oxide, cobalt oxide and barium carbonate starting materials have spherical grain morphology, an average size of 65 nm and a fairly narrow size distribution. The nanoscale character and the formation of the tetragonal perovskite phase, as well as the crystallinity, were confirmed using the mentioned techniques. Dielectric properties of the samples were measured at different frequencies. Broadband dielectric spectroscopy was applied to investigate the electrical properties of the disordered perovskite-like ceramics over a wide temperature range. The doped BTCO samples exhibited a low loss factor at 1 kHz and 1 MHz.
Unilateral Hydronephrosis and Renal Damage after Acute Leukemia
A 14-year-old boy presented with asymptomatic right hydronephrosis detected on routine yearly ultrasound examination. Previously, he had at least two normal renal ultrasonograms, 4 years after remission of acute myeloblastic leukemia, treated by AML-BFM-93 protocol. A function of the right kidney and no damage on the left was confirmed by a DMSA scan. Right retroperitoneoscopic nephrectomy revealed 3 renal arteries with the lower pole artery lying on the pelviureteric junction. Histologically chronic tubulointerstitial nephritis was detected. In the pathogenesis of this severe unilateral renal damage, we suspect the exacerbation of deleterious effects of cytostatic therapy on kidneys with intermittent hydronephrosis.
Radiation effect on viscous flow of a nanofluid and heat transfer over a nonlinearly stretching sheet
In this work, we study the flow and heat transfer characteristics of a viscous nanofluid over a nonlinearly stretching sheet in the presence of thermal radiation, included in the energy equation, and variable wall temperature. A similarity transformation was used to transform the governing partial differential equations to a system of nonlinear ordinary differential equations. An efficient numerical shooting technique with a fourth-order Runge-Kutta scheme was used to obtain the solution of the boundary value problem. The variations of dimensionless surface temperature, as well as flow and heat-transfer characteristics with the governing dimensionless parameters of the problem, which include the nanoparticle volume fraction ϕ, the nonlinearly stretching sheet parameter n, the thermal radiation parameter NR, and the viscous dissipation parameter Ec, were graphed and tabulated. Excellent validation of the present numerical results has been achieved with the earlier nonlinearly stretching sheet problem of Cortell for local Nusselt number without taking the effect of nanoparticles.
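A sketch of the same numerical strategy (shooting combined with fourth-order Runge-Kutta), demonstrated on the classical Crane stretching-sheet equation rather than the paper's full nanofluid system; the exact solution of that simpler problem gives f''(0) = -1, which serves as a check:

```python
import numpy as np

# Crane's stretching-sheet flow:  f''' + f f'' - f'^2 = 0,
# with f(0)=0, f'(0)=1, f'(inf)=0; exact solution f = 1 - exp(-eta).

def rhs(y):
    f, fp, fpp = y
    return np.array([fp, fpp, fp * fp - f * fpp])    # f''' = f'^2 - f f''

def shoot(s, eta_max=8.0, n=4000):
    """RK4-integrate from eta=0 with guessed f''(0)=s; return f'(eta_max)."""
    h = eta_max / n
    y = np.array([0.0, 1.0, s])
    for _ in range(n):
        k1 = rhs(y); k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2); k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        if abs(y[1]) > 1e6:       # diverging guess: stop early, the sign is enough
            break
    return y[1]

lo, hi = -2.0, -0.5               # f'(eta_max) has opposite signs at these guesses
for _ in range(60):               # bisection on the missing initial slope f''(0)
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 0.0:
        hi = mid
    else:
        lo = mid
print("f''(0) ≈", 0.5 * (lo + hi), "(exact value for Crane's flow: -1)")
```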
Color-based road detection and its evaluation on the KITTI road benchmark
Road detection is one of the key issues of scene understanding for Advanced Driving Assistance Systems (ADAS). Recent approaches have addressed this issue through the use of different kinds of sensors, features and algorithms. The KITTI-ROAD benchmark has provided an open-access dataset and a standard evaluation protocol for road-area detection. In this paper, we propose an improved road detection algorithm that provides a pixel-level confidence map. The proposed approach is inspired by our former work based on road feature extraction using the illuminant-intrinsic image and plane extraction from v-disparity map segmentation. In that former research, road-area detection results were represented by a binary map. The novelty of this improved algorithm is to introduce likelihood theory to build a confidence map of road detection. Such a strategy copes better with ambiguous environments than a simple binary map. Evaluations and comparisons of both the binary map and the confidence map have been carried out using the KITTI-ROAD benchmark.
Nephrotic Syndrome in Children: From Bench to Treatment
Idiopathic nephrotic syndrome (INS) is the most frequent form of NS in children. INS is defined by the association of the clinical features of NS with renal biopsy findings of minimal changes, focal segmental glomerulosclerosis (FSGS), or mesangial proliferation (MP) on light microscopy and effacement of foot processes on electron microscopy. The podocyte has become the leading candidate for constituting the main part of the glomerular filtration barrier. Most cases are steroid sensitive (SSINS). Fifty percent of the latter recur frequently and necessitate prevention of relapses with nonsteroid drugs. In contrast to SSINS, steroid-resistant nephrotic syndrome (SRINS) often leads to end-stage renal failure. Thirty to forty percent of the latter are associated with mutations of genes coding for podocyte proteins; the rest are due to one or several different circulating factors. New strategies are in development to antagonize the effect of the latter.
Amorphous metal-organic frameworks.
Crystalline metal-organic frameworks (MOFs) are porous frameworks comprising an infinite array of metal nodes connected by organic linkers. The number of novel MOF structures reported per year is now in excess of 6000, despite significant increases in the complexity of both component units and molecular networks. Their regularly repeating structures give rise to chemically variable porous architectures, which have been studied extensively due to their sorption and separation potential. More recently, catalytic applications have been proposed that make use of their chemical tunability, while reports of negative linear compressibility and negative thermal expansion have further expanded interest in the field. Amorphous metal-organic frameworks (aMOFs) retain the basic building blocks and connectivity of their crystalline counterparts, though they lack any long-range periodic order. Aperiodic arrangements of atoms result in their X-ray diffraction patterns being dominated by broad "humps" caused by diffuse scattering and thus they are largely indistinguishable from one another. Amorphous MOFs offer many exciting opportunities for practical application, either as novel functional materials themselves or facilitating other processes, though the domain is largely unexplored (total aMOF reported structures amounting to under 30). Specifically, the use of crystalline MOFs to detect harmful guest species before subsequent stress-induced collapse and guest immobilization is of considerable interest, while functional luminescent and optically active glass-like materials may also be prepared in this manner. The ion transporting capacity of crystalline MOFs might be improved during partial structural collapse, while there are possibilities of preparing superstrong glasses and hybrid liquids during thermal amorphization. The tuning of release times of MOF drug delivery vehicles by partial structural collapse may be possible, and aMOFs are often more mechanically robust than crystalline materials, which is of importance for industrial applications. In this Account, we describe the preparation of aMOFs by introduction of disorder into their parent crystalline frameworks through heating, pressure (both hydrostatic and nonhydrostatic), and ball-milling. The main method of characterizing these amorphous materials (analysis of the pair distribution function) is summarized, alongside complementary techniques such as Raman spectroscopy. Detailed investigations into their properties (both chemical and mechanical) are compiled and compared with those of crystalline MOFs, while the impact of the field on the processing techniques used for crystalline MOF powders is also assessed. Crucially, the benefits amorphization may bring to existing proposed MOF applications are detailed, alongside the possibilities and research directions afforded by the combination of the unique properties of the amorphous domain with the versatility of MOF chemistry.
Logo Image Based Approach for Phishing Detection
Phishing is a cyber attack that involves a fake website mimicking a real, legitimate website. The website makes users believe it is authentic, so they provide sensitive information such as passwords, PINs, Social Security Numbers, and credit card details. Because such highly sensitive information is involved, these websites are a huge threat to online users, and detecting and blocking them is crucial. In this thesis, we propose a new phishing detection method to protect internet users from such attacks. In particular, given a website, our proposed method is able to distinguish a phishing website from a legitimate one using only a screenshot of its logo image. Because the logo is extracted from a screenshot, a hidden logo cannot be used to evade the algorithm, as happens in existing methods. The first study focused on dataset gathering, after which the logo image is extracted. This logo image is uploaded to the Google image search engine using an automated script, which returns the URLs associated with that image. Since the relationship between a logo and its domain name is exclusive, it is reasonable to treat the logo image as the identity of the original URL. Hence, a phishing website will not have the same relation to the logo image and will not be returned as a URL by Google when searching for that logo image. Further, Alexa page rank is also used to strengthen the detection accuracy.
Learning Regularized LDA by Clustering
As a supervised dimensionality reduction technique, linear discriminant analysis has a serious overfitting problem when the number of training samples per class is small. The main reason is that the between- and within-class scatter matrices computed from the limited number of training samples deviate greatly from the underlying ones. To overcome the problem without increasing the number of training samples, we propose making use of the structure of the given training data to regularize the between- and within-class scatter matrices by between- and within-cluster scatter matrices, respectively, and simultaneously. The within- and between-cluster matrices are computed from unsupervised clustered data. The within-cluster scatter matrix contributes to encoding the possible variations in intraclasses and the between-cluster scatter matrix is useful for separating extra classes. The contributions are inversely proportional to the number of training samples per class. The advantages of the proposed method become more remarkable as the number of training samples per class decreases. Experimental results on the AR and Feret face databases demonstrate the effectiveness of the proposed method.
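A compact sketch of the idea, blending class scatter matrices with scatter matrices computed from an unsupervised k-means clustering of the same data; the fixed blending weight stands in for the paper's rule of strengthening the cluster term when classes have few training samples:

```python
import numpy as np
from sklearn.cluster import KMeans

def scatter_matrices(X, labels):
    """Between- and within-group scatter for a given grouping of the rows of X."""
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for g in np.unique(labels):
        Xg = X[labels == g]
        mg = Xg.mean(axis=0)
        Sb += len(Xg) * np.outer(mg - mean, mg - mean)
        Sw += (Xg - mg).T @ (Xg - mg)
    return Sb, Sw

def cluster_regularized_lda(X, y, n_clusters=10, alpha=0.5, n_components=2):
    Sb_c, Sw_c = scatter_matrices(X, y)                       # class scatter
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=0).fit_predict(X)
    Sb_k, Sw_k = scatter_matrices(X, clusters)                # cluster scatter

    Sb = (1 - alpha) * Sb_c + alpha * Sb_k
    Sw = (1 - alpha) * Sw_c + alpha * Sw_k + 1e-6 * np.eye(X.shape[1])

    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-evals.real)
    return evecs[:, order[:n_components]].real                # projection matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5)) + np.repeat(rng.normal(size=(3, 5)), 20, axis=0)
y = np.repeat(np.arange(3), 20)
print(cluster_regularized_lda(X, y).shape)   # (5, 2)
```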
Computing Semantic Relatedness Using Wikipedia-based Explicit Semantic Analysis
Computing semantic relatedness of natural language texts requires access to vast amounts of common-sense and domain-specific world knowledge. We propose Explicit Semantic Analysis (ESA), a novel method that represents the meaning of texts in a high-dimensional space of concepts derived from Wikipedia. We use machine learning techniques to explicitly represent the meaning of any text as a weighted vector of Wikipedia-based concepts. Assessing the relatedness of texts in this space amounts to comparing the corresponding vectors using conventional metrics (e.g., cosine). Compared with the previous state of the art, using ESA results in substantial improvements in correlation of computed relatedness scores with human judgments: from r = 0.56 to 0.75 for individual words and from r = 0.60 to 0.72 for texts. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.
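A toy sketch of the ESA representation, using three stand-in "concept" texts in place of Wikipedia articles and a simple TF-IDF weighting:

```python
import math
from collections import defaultdict, Counter

concepts = {
    "Bank (finance)": "bank money loan deposit interest account credit",
    "River":          "river water bank flow stream flood delta",
    "Computer":       "computer program software cpu memory code",
}

# Inverted index: word -> {concept: tf-idf weight}
doc_tokens = {c: text.split() for c, text in concepts.items()}
df = Counter(w for toks in doc_tokens.values() for w in set(toks))
index = defaultdict(dict)
for c, toks in doc_tokens.items():
    for w, f in Counter(toks).items():
        index[w][c] = f * math.log(len(concepts) / df[w])

def esa_vector(text):
    """Represent a text as a weighted vector over concepts."""
    vec = defaultdict(float)
    for w in text.lower().split():
        for c, weight in index.get(w, {}).items():
            vec[c] += weight
    return vec

def cosine(u, v):
    num = sum(u[k] * v[k] for k in u if k in v)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

print(cosine(esa_vector("money deposited in the bank"),
             esa_vector("the loan interest account")))
```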
Gender differences in personality: a meta-analysis.
Four meta-analyses were conducted to examine gender differences in personality in the literature (1958-1992) and in normative data for well-known personality inventories (1940-1992). Males were found to be more assertive and had slightly higher self-esteem than females. Females were higher than males in extraversion, anxiety, trust, and, especially, tender-mindedness (e.g., nurturance). There were no noteworthy sex differences in social anxiety, impulsiveness, activity, ideas (e.g., reflectiveness), locus of control, and orderliness. Gender differences in personality traits were generally constant across ages, years of data collection, educational levels, and nations.
Management of Service Level Agreements for Cloud Services in IoT: A Systematic Mapping Study
Cloud computing and Internet of Things (IoT) are computing technologies that provide services to consumers and businesses, allowing organizations to become more agile and flexible. Therefore, ensuring quality of service (QoS) through service-level agreements (SLAs) for such cloud-based services is crucial for both the service providers and service consumers. As SLAs are critical for cloud deployments and wider adoption of cloud services, the management of SLAs in cloud and IoT has thus become an important and essential aspect. This paper investigates the existing research on the management of SLAs in IoT applications that are based on cloud services. For this purpose, a systematic mapping study (a well-defined method) is conducted to identify the published research results that are relevant to SLAs. This paper identifies 328 primary studies and categorizes them into seven main technical classifications: SLA management, SLA definition, SLA modeling, SLA negotiation, SLA monitoring, SLA violation and trustworthiness, and SLA evolution. This paper also summarizes the research types, research contributions, and demographic information in these studies. The evaluation of the results shows that most of the approaches for managing SLAs are applied in academic or controlled experiments with limited industrial settings rather than in real industrial environments. Many studies focus on proposal models and methods to manage SLAs, and there is a lack of focus on the evolution perspective and a lack of adequate tool support to facilitate practitioners in their SLA management activities. Moreover, the scarce number of studies focusing on concrete metrics for qualitative or quantitative assessment of QoS in SLAs urges the need for in-depth research on metrics definition and measurements for SLAs.
Antimicrobial effect of cinnamon (Cinnamomum verum J. Presl) bark essential oil in cream-filled cakes and pastries
Background and objectives: Food poisoning has always been a major concern in the health system of every community, and cream-filled products are one of the most widespread causes of food poisoning in humans. In the present study, we examined the preservative effect of cinnamon oil in cream-filled cakes. Methods: The antimicrobial activity of Cinnamomum verum J. Presl (cinnamon) bark essential oil was examined against five food-borne pathogens (Staphylococcus aureus, Escherichia coli, Candida albicans, Bacillus cereus and Salmonella typhimurium) to investigate its potential for use as a natural preservative in cream-filled baked goods. Chemical constituents of the oil were determined by gas chromatography/mass spectrometry. For evaluation of the preservative sufficiency of the oil, pathogens were added to cream-filled cakes manually and 1 μL/mL of the essential oil was added to all samples except the blank. Results: Twenty-five components were identified, with cinnamaldehyde (79.73%), linalool (4.08%), para-methoxy cinnamaldehyde (2.66%), eugenol (2.37%) and trans-caryophyllene (2.05%) as the major constituents. Cinnamon essential oil showed strong antimicrobial activity against the selected pathogens in vitro, and the minimum inhibitory concentration values against all tested microorganisms were determined as 0.5 μL/disc, except for S. aureus, for which the oil was not effective at the tested concentrations. After baking, no microorganisms were observed in the counts of any of the susceptible microorganisms in samples stored for 72 h. Conclusion: It was concluded that, subject to analysis of the sensory quality of the preserved food, cinnamon oil may be considered as a natural preservative in the food industry, especially for cream-filled cakes and pastries.
Cloud-Based Commissioning of Constrained Devices using Permissioned Blockchains
In this paper we describe a privacy-preserving method for commissioning an IoT device into a cloud ecosystem. The commissioning consists of the device proving its manufacturing provenance in an anonymous fashion without reliance on a trusted third party, and for the device to be anonymously registered through the use of a blockchain system. We introduce the ChainAnchor architecture that provides device commissioning in a privacy-preserving fashion. The goal of ChainAnchor is (i) to support anonymous device commissioning, (ii) to support device-owners being remunerated for selling their device sensor-data to service providers, and (iii) to incentivize device-owners and service providers to share sensor-data in a privacy-preserving manner.
Misinformation and Its Correction: Continued Influence and Successful Debiasing.
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation. We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread. We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people's memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing. We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners-including journalists, health professionals, educators, and science communicators-design effective misinformation retractions, educational tools, and public-information campaigns.
Selections: Internet Voting with Over-the-Shoulder Coercion-Resistance
We present Selections, a new cryptographic voting protocol that is end-to-end verifiable and suitable for Internet voting. After a one-time in-person registration, voters can cast ballots in an arbitrary number of elections. We say a system provides over-the-shoulder coercion-resistance if a voter can undetectably avoid complying with an adversary that is present during the vote casting process. Our system is the first in the literature to offer this property without the voter having to anticipate coercion and precompute values. Instead, a voter can employ a panic password. We prove that Selections is coercion-resistant against a non-adaptive adversary. 1 Introductory Remarks From a security perspective, the use of electronic voting machines in elections around the world continues to be concerning. In principle, many security issues can be allayed with cryptography. While cryptographic voting has not seen wide deployment, refined systems like Prêt à Voter [11,28] and Scantegrity II [9] are representative of what is theoretically possible, and have even seen some use in governmental elections [7]. Today, a share of the skepticism over electronic elections is being apportioned to Internet voting (one noted cryptographer, Ronald Rivest, infamously opined that “best practices for Internet voting are like best practices for drunk driving” [25]). Many nation-states are considering, piloting or using Internet voting in elections. In addition to the challenges of verifiability and ballot secrecy present in any voting system, Internet voting adds two additional constraints: • Untrusted platforms: voters should be able to reliably cast secret ballots, even when their devices may leak information or do not function correctly. • Unsupervised voting: coercers or vote buyers should not be able to exert undue influence over voters despite the open environment of Internet voting. As with electronic voting, cryptography can assist in addressing these issues. The study of cryptographic Internet voting is not as mature. Most of the literature concentrates on only one of the two problems (see related work in Section 1.2). In this paper, we are concerned with the unsupervised voting problem. Informally, a system that solves it is said to be coercion-resistant. Full version available: http://eprint.iacr.org/2011/166
Multi-agent systems - an introduction to distributed artificial intelligence
This edition is a translation of the book formerly published in French in 1995 (Les systèmes multi-agents: Vers une intelligence collective, Inter Editions, Paris.) Even now, it is still the main reference for the French research community in multi-agent systems (MAS). The book is intended to be both a state of the art text and an introduction for people who are interested in capturing the main ideas of MAS. It deals mainly with the theoretical background, antecedents and applications. I will present a summary of the main ideas and the points that the author highlights as novel features arising from the multi-agent approach to computer science. When some details are very well developed in the book, I will just refer to them, since they are mainly useful for people who actually want to apply the techniques.
Design and control of hybrid power and propulsion systems for smart ships : A review of developments
The recent trend to design more efficient and versatile ships has increased the variety in hybrid propulsion and power supply architectures. In order to improve performance with these architectures, intelligent control strategies are required, while mostly conventional control strategies are applied currently. First, this paper classifies ship propulsion topologies into mechanical, electrical and hybrid propulsion, and power supply topologies into combustion, electrochemical, stored and hybrid power supply. Then, we review developments in propulsion and power supply systems and their control strategies, and subsequently discuss opportunities and challenges for these systems and the associated control. We conclude that hybrid architectures with advanced control strategies can reduce fuel consumption and emissions by up to 10–35%, while improving noise levels, maintainability, manoeuvrability and comfort. Subsequently, the paper summarises the benefits, drawbacks and application trends of propulsion and power supply technologies, and it reviews the applicability and benefits of promising advanced control strategies. Finally, the paper analyses which control strategies can improve the performance of hybrid systems for future smart and autonomous ships and concludes that a combination of torque, angle of attack, and Model Predictive Control with dynamic settings could improve the performance of future smart and more autonomous ships.
Automatic Text Summarization with a Ranking Algorithm
Abstract: In this article, we propose a new approach to automatic text summarization that uses a machine learning algorithm specific to the ranking task. The objective is to extract the sentences of a document that are most representative of its content. To do so, each sentence of a document is represented by a vector of relevance scores, where each score is a similarity score between a particular query and the sentence under consideration. The ranking algorithm then computes a linear combination of these scores, with the goal of assigning the relevant sentences of a document higher scores than the non-relevant sentences of the same document. Ranking algorithms have proven effective, in particular in the field of metasearch, and their use for summarization is motivated by an analogy that can be drawn between metasearch and automatic summarization, which in our case consists of treating the similarities of the sentences with the different queries as the outputs of different search engines. We show empirically that the ranking algorithm performs better than an approach using a classification algorithm on two distinct corpora.
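A small sketch of this ranking view: each sentence is a vector of similarity scores to several queries, and a linear weight vector is learned with pairwise perceptron-style updates so that relevant sentences outrank non-relevant ones (the data and update rule are illustrative, not the paper's learning algorithm):

```python
import numpy as np

# rows: sentences of one document, columns: similarity to query 1..3 (made-up values)
X = np.array([[0.9, 0.2, 0.4],    # relevant
              [0.1, 0.1, 0.2],    # not relevant
              [0.7, 0.6, 0.1],    # relevant
              [0.2, 0.3, 0.0]])   # not relevant
relevant = np.array([True, False, True, False])

w = np.zeros(X.shape[1])
for _ in range(50):                        # pairwise perceptron-style updates
    for i in np.where(relevant)[0]:
        for j in np.where(~relevant)[0]:
            if X[i] @ w <= X[j] @ w:       # relevant sentence not ranked above
                w += X[i] - X[j]

scores = X @ w
print("extraction order:", np.argsort(-scores))   # summary = top-ranked sentences
```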
Neural Clinical Paraphrase Generation with Attention
Paraphrase generation is important in various applications such as search, summarization, and question answering due to its ability to generate textual alternatives while keeping the overall meaning intact. Clinical paraphrase generation is especially vital in building patient-centric clinical decision support (CDS) applications where users are able to understand complex clinical jargon via easily comprehensible alternative paraphrases. This paper presents Neural Clinical Paraphrase Generation (NCPG), a novel approach that casts the task as a monolingual neural machine translation (NMT) problem. We propose an end-to-end neural network built on an attention-based bidirectional Recurrent Neural Network (RNN) architecture with an encoder-decoder framework to perform the task. Conventional bilingual NMT models mostly rely on word-level modeling and are often limited by out-of-vocabulary (OOV) issues. In contrast, we represent the source and target paraphrase pairs as character sequences to address this limitation. To the best of our knowledge, this is the first work that uses attention-based RNNs for clinical paraphrase generation and also proposes end-to-end character-level modeling for this task. Extensive experiments on a large curated clinical paraphrase corpus show that the attention-based NCPG models achieve improvements of up to 5.2 BLEU points and 0.5 METEOR points over a non-attention-based strong baseline for word-level modeling, whereas further gains of up to 6.1 BLEU points and 1.3 METEOR points are obtained by the character-level NCPG models over their word-level counterparts. Overall, our models demonstrate comparable performance relative to the state-of-the-art phrase-based non-neural models.
Forebrain PENK and PDYN gene expression levels in three inbred strains of mice and their relationship to genotype-dependent morphine reward sensitivity
Vulnerability to drug abuse disorders is determined not only by environmental but also by genetic factors. A body of evidence suggests that endogenous opioid peptide systems may influence rewarding effects of addictive substances, and thus, their individual expression levels may contribute to drug abuse liability. The aim of our study was to assess whether basal genotype-dependent brain expression of opioid propeptides genes can influence sensitivity to morphine reward. Experiments were performed on inbred mouse strains C57BL/6J, DBA/2J, and SWR/J, which differ markedly in responses to morphine administration: DBA/2J and SWR/J show low and C57BL/6J high sensitivity to opioid reward. Proenkephalin (PENK) and prodynorphin (PDYN) gene expression was measured by in situ hybridization in brain regions implicated in addiction. The influence of the κ opioid receptor antagonist nor-binaltorphimine (nor-BNI), which attenuates effects of endogenous PDYN-derived peptides, on rewarding actions of morphine was studied using the conditioned place preference (CPP) paradigm. DBA/2J and SWR/J mice showed higher levels of PDYN and lower levels of PENK messenger RNA in the nucleus accumbens than the C57BL/6J strain. Pretreatment with nor-BNI enhanced morphine-induced CPP in the opioid-insensitive DBA/2J and SWR/J strains. Our results demonstrate that inter-strain differences in PENK and PDYN genes expression in the nucleus accumbens parallel sensitivity of the selected mouse strains to rewarding effects of morphine. They suggest that high expression of PDYN may protect against drug abuse by limiting drug-produced reward, which may be due to dynorphin-mediated modulation of dopamine release in the nucleus accumbens.
A cost-minimization algorithm for fast location tracking in mobile wireless networks
Location tracking is one of the most important issues in providing real-time applications over wireless networks due to its effect on quality of service (QoS), such as end-to-end delay, bandwidth utilization, and connection dropping probability. In this paper, we study cost minimization for locating mobile users under delay constraints in mobile wireless networks. Specifically, a new location tracking algorithm is developed to determine the position of mobile terminals under delay constraints, while minimizing the average locating cost based on a unimodal property. We demonstrate that the new algorithm not only results in minimum locating cost, but also has a lower computational complexity compared to existing algorithms. Furthermore, detailed searching procedures are discussed under both deterministic and statistical delay bounds. Numerical results for a variety of location probability distributions show that our algorithm compares favorably with existing algorithms.
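A sketch of how a unimodal property can be exploited: if the expected locating cost as a function of the number of cells paged per polling round is unimodal, its minimizer can be found by ternary search instead of an exhaustive scan; the cost function below is a made-up convex stand-in, not the paper's model:

```python
def expected_cost(k, total_cells=64, signaling_cost=1.0, delay_penalty=3.0):
    """Made-up convex trade-off: paging more cells per round (larger k) costs
    signalling, paging fewer costs delay (more polling rounds)."""
    rounds = total_cells / k
    return signaling_cost * k + delay_penalty * rounds

def ternary_search_min(f, lo, hi):
    """Integer ternary search for the minimizer of a unimodal function on [lo, hi]."""
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) <= f(m2):
            hi = m2
        else:
            lo = m1
    return min(range(lo, hi + 1), key=f)

best = ternary_search_min(expected_cost, 1, 64)
print("cells per round:", best, "expected cost:", expected_cost(best))
```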
Neural Adaptation Layers for Cross-domain Named Entity Recognition
Recent research efforts have shown that neural architectures can be effective in conventional information extraction tasks such as named entity recognition, yielding state-of-the-art results on standard newswire datasets. However, despite the significant resources required for training such models, the performance of a model trained on one domain typically degrades dramatically when applied to a different domain, yet extracting entities from new emerging domains such as social media can be of significant interest. In this paper, we empirically investigate effective methods for conveniently adapting an existing, well-trained neural NER model for a new domain. Unlike existing approaches, we propose lightweight yet effective methods for performing domain adaptation for neural models. Specifically, we introduce adaptation layers on top of existing neural architectures, where no re-training using the source domain data is required. We conduct extensive empirical studies and show that our approach significantly outperforms state-of-the-art methods.
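A minimal PyTorch sketch of the adaptation-layer idea: the source-domain encoder is kept frozen and only a small adaptation layer plus a new tag classifier are trained on target-domain data (the encoder interface and layer sizes are assumptions):

```python
import torch
import torch.nn as nn

class AdaptedNER(nn.Module):
    """`pretrained_encoder` is assumed to map token-id tensors to per-token
    hidden states; it stands in for whatever architecture the source model uses."""
    def __init__(self, pretrained_encoder, hidden_dim, num_target_tags):
        super().__init__()
        self.encoder = pretrained_encoder
        for p in self.encoder.parameters():     # no re-training on source data
            p.requires_grad = False
        self.adapt = nn.Sequential(             # lightweight adaptation layer
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
        )
        self.classifier = nn.Linear(hidden_dim, num_target_tags)

    def forward(self, token_ids):
        with torch.no_grad():
            h = self.encoder(token_ids)         # (batch, seq_len, hidden_dim)
        return self.classifier(self.adapt(h))   # per-token tag logits

# Only the adaptation layer and the classifier receive gradient updates, e.g.:
# optimizer = torch.optim.Adam(
#     [p for p in model.parameters() if p.requires_grad], lr=1e-3)
```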
Entity Linking: An Issue to Extract Corresponding Entity With Knowledge Base
Entity linking is the task of extracting query mentions in documents and then linking them to their corresponding entities in a knowledge base. It can improve the performance of knowledge network construction, knowledge fusion, information retrieval, natural language processing, and knowledge base population. In this paper, we introduce the difficulties and applications of entity linking and focus on the main methods to address this issue. Finally, we list the knowledge bases, data sets, the evaluation criteria, and some challenges of entity linking.
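A minimal dictionary-based sketch of the two steps most linking systems share, candidate generation from an alias table followed by context-based ranking; the tiny knowledge base is a made-up placeholder:

```python
from collections import Counter

KB = {
    "Michael Jordan (basketball)": "american basketball player chicago bulls nba",
    "Michael I. Jordan (scientist)": "machine learning professor berkeley statistics",
}
ALIASES = {"michael jordan": list(KB)}

def link(mention, context):
    """Return the best-matching KB entity for a mention, or None (NIL)."""
    candidates = ALIASES.get(mention.lower(), [])
    if not candidates:
        return None                       # NIL: no entry in the knowledge base
    ctx = Counter(context.lower().split())
    def overlap(entity):                  # crude context/description overlap score
        return sum(ctx[w] for w in KB[entity].split())
    return max(candidates, key=overlap)

print(link("Michael Jordan",
           "he published papers on machine learning at Berkeley"))
```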
Deep Background Modeling Using Fully Convolutional Network
Background modeling plays an important role in video surveillance, object tracking, and object counting. In this paper, we propose a novel deep background modeling approach utilizing a fully convolutional network. In the network block constructing the deep background model, three atrous convolution branches with different dilation rates are used to extract spatial information from different neighborhoods of pixels, which removes the limitation of extracting spatial information from a fixed pixel neighborhood. Furthermore, we sample multiple frames from the original sequential images with increasing intervals, in order to capture more temporal information and reduce the computation. Compared with classical background modeling approaches, our approach outperforms the state-of-the-art approaches in both indoor and outdoor scenes.
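A PyTorch sketch of the described block, with three parallel atrous convolution branches of different dilation rates fused into a per-pixel foreground/background score; channel sizes, dilation rates, and the number of sampled frames are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

class AtrousBackgroundNet(nn.Module):
    def __init__(self, in_frames=5, width=16):
        super().__init__()
        in_ch = 3 * in_frames            # several temporally sampled RGB frames
        self.branches = nn.ModuleList([
            # dilation d with padding d keeps the spatial size unchanged
            nn.Conv2d(in_ch, width, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        self.fuse = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(3 * width, 1, kernel_size=1),   # per-pixel logit
        )

    def forward(self, frames):           # frames: (batch, 3*in_frames, H, W)
        feats = torch.cat([b(frames) for b in self.branches], dim=1)
        return self.fuse(feats)          # foreground/background logits

x = torch.randn(2, 15, 64, 64)
print(AtrousBackgroundNet()(x).shape)    # torch.Size([2, 1, 64, 64])
```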
On the Individuality of Fingerprints: Models and Methods
Fingerprint individuality is the study of the extent of uniqueness of fingerprints and is the central premise of expert testimony in court. A forensic expert testifies whether a pair of fingerprints is either a match or non-match by comparing salient features of the fingerprint pair. However, the experts are rarely questioned on the uncertainty associated with the match: how likely is the observed match between the fingerprint pair due to just random chance? The main concern with the admissibility of fingerprint evidence is that the matching error rates (i.e., the fundamental error rates of matching by the human expert) are unknown. The problem of unknown error rates is also prevalent in other modes of identification such as handwriting, lie detection, etc. Realizing this, the U.S. Supreme Court, in the 1993 case of Daubert vs. Merrell Dow Pharmaceuticals, ruled that forensic evidence presented in a court is subject to five principles of scientific validation, namely whether (i) the particular technique or methodology has been subject to statistical hypothesis testing, (ii) its error rates have been established, (iii) standards controlling the technique’s operation exist and have been maintained, (iv) it has been peer reviewed, and (v) it has gained general widespread acceptance. Following Daubert, forensic evidence based on fingerprints was first challenged in the 1999 case of USA vs. Byron Mitchell on the basis of the “known error rate” condition (ii) mentioned above, and subsequently in 20 other cases involving fingerprint evidence. The establishment of matching error rates is directly related to the extent of fingerprint individualization. This article gives an overview of the problem of fingerprint individuality, the challenges faced, and the models and methods that have been developed to study this problem. Related entries: fingerprint individuality, fingerprint matching (automatic), fingerprint matching (manual), forensic evidence of fingerprint, individuality. Definitional entries: 1. Genuine match: the match between two fingerprint images of the same person. 2. Impostor match: the match between a pair of fingerprints from two different persons. 3. Fingerprint individuality: the study of the extent to which different fingerprints tend to match with each other. It is the most important measure to be judged when fingerprint evidence is presented in court, as it reflects the uncertainty in the experts’ decision. 4. Variability: the differences in the observed features from one sample to another in a population. The differences can be random, that is, just by chance, or systematic due to some underlying factor that governs the variability.
The Motivation to Serve Others: Exploring Relations to Career Development
The current study explored the relation between service motivation, or the desire to serve others through one’s future career, and vocational outcomes across two studies. In the first study, using a sample of 225 undergraduate students, an instrument was developed to measure service motivation that demonstrated convergent and discriminant validity, strong internal consistency reliability, and strong test–retest reliability. In the second study, with a sample of 265 undergraduate students, service motivation was found to correlate positively with career decision self-efficacy, career adaptability, and career optimism and to correlate negatively with career indecision. Post hoc analyses found career optimism to fully mediate the relationship between service motivation and career indecision. These findings suggest that students who feel a stronger desire to use their future career to serve others will be more optimistic regarding their career future. Implications for research and practice are considered.
Self-reported nonadherence to antiretroviral therapy as a predictor of viral failure and mortality.
OBJECTIVE To determine the effect of nonadherence to antiretroviral therapy (ART) on virologic failure and mortality in treatment-naive individuals starting ART. DESIGN Prospective observational cohort study. METHODS Eligible individuals enrolled in the Swiss HIV Cohort Study, started ART between 2003 and 2012, and provided adherence data at at least one biannual clinical visit. Adherence was defined as missed doses (none, one, two, or more than two) and percentage adherence (>95%, 90-95%, and <90%) in the previous 4 weeks. Inverse probability weighting of marginal structural models was used to estimate the effect of nonadherence on viral failure (HIV-1 viral load >500 copies/ml) and mortality. RESULTS Of 3150 individuals followed for a median of 4.7 years, 480 (15.2%) experienced viral failure and 104 (3.3%) died; 1155 (36.6%) reported missing one dose, 414 (13.1%) two doses, and 333 (10.6%) more than two doses of ART. The risk of viral failure increased with each missed dose (one dose: hazard ratio [HR] 1.15, 95% confidence interval 0.79-1.67; two doses: 2.15, 1.31-3.53; more than two doses: 5.21, 2.96-9.18). The risk of death increased with more than two missed doses (HR 4.87, 2.21-10.73). Missing one to two doses of ART increased the risk of viral failure in those starting once-daily regimens (HR 1.67, 1.11-2.50) compared with those starting twice-daily regimens (HR 0.99, 0.64-1.54; interaction P = 0.09). Consistent results were found for percentage adherence. CONCLUSION Self-report of two or more missed doses of ART is associated with an increased risk of both viral failure and death. A simple adherence question helps identify patients at risk for negative clinical outcomes and offers opportunities for intervention.
Interactions between the microbiota, immune and nervous systems in health and disease
The diverse collection of microorganisms that inhabit the gastrointestinal tract, collectively called the gut microbiota, profoundly influences many aspects of host physiology, including nutrient metabolism, resistance to infection and immune system development. Studies investigating the gut–brain axis demonstrate a critical role for the gut microbiota in orchestrating brain development and behavior, and the immune system is emerging as an important regulator of these interactions. Intestinal microbes modulate the maturation and function of tissue-resident immune cells in the CNS. Microbes also influence the activation of peripheral immune cells, which regulate responses to neuroinflammation, brain injury, autoimmunity and neurogenesis. Accordingly, both the gut microbiota and immune system are implicated in the etiopathogenesis or manifestation of neurodevelopmental, psychiatric and neurodegenerative diseases, such as autism spectrum disorder, depression and Alzheimer's disease. In this review, we discuss the role of CNS-resident and peripheral immune pathways in microbiota–gut–brain communication during health and neurological disease.
An iterative maximum-likelihood polychromatic algorithm for CT
A new iterative maximum-likelihood reconstruction algorithm for X-ray computed tomography is presented. The algorithm prevents beam hardening artifacts by incorporating a polychromatic acquisition model. The continuous spectrum of the X-ray tube is modeled as a number of discrete energies. The energy dependence of the attenuation is taken into account by decomposing the linear attenuation coefficient into a photoelectric component and a Compton scatter component. The relative weight of these components is constrained based on prior material assumptions. Excellent results are obtained for simulations and for phantom measurements. Beam-hardening artifacts are effectively eliminated. The relation with existing algorithms is discussed. The results confirm that improving the acquisition model assumed by the reconstruction algorithm results in reduced artifacts. Preliminary results indicate that metal artifact reduction is a very promising application for this new algorithm.
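As a rough sketch of the kind of polychromatic acquisition model the abstract refers to (our notation, not necessarily the authors'): the expected reading of detector i sums the contributions of the discrete source energies E_k, and each voxel's linear attenuation coefficient is decomposed into photoelectric and Compton-scatter components with known energy dependences \Phi and \Theta:

\begin{align*}
  \bar{y}_i &= \sum_{k} b_{ik}\,\exp\!\Big(-\sum_{j} l_{ij}\,\mu_j(E_k)\Big),\\
  \mu_j(E_k) &= \phi_j\,\Phi(E_k) + \theta_j\,\Theta(E_k),
\end{align*}

where b_{ik} is the unattenuated count contributed by energy E_k to detector i, l_{ij} is the intersection length of ray i with voxel j, and the per-voxel coefficients \phi_j and \theta_j are estimated by maximizing the likelihood, with their relative weight constrained by the prior material assumptions mentioned above.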
On Multi-Relational Link Prediction With Bilinear Models
We study bilinear embedding models for the task of multi-relational link prediction and knowledge graph completion. Bilinear models are among the most basic models for this task; they are comparatively efficient to train and use, and they can provide good prediction performance. The main goal of this paper is to explore the expressiveness of, and the connections between, various bilinear models proposed in the literature. In particular, a substantial number of models can be represented as bilinear models with certain additional constraints enforced on the embeddings. We explore whether or not these constraints lead to universal models, which can in principle represent every set of relations, and whether or not there are subsumption relationships between the various models. We report results of an independent experimental study that evaluates recent bilinear models in a common experimental setup. Finally, we provide evidence that relation-level ensembles of multiple bilinear models can achieve state-of-the-art prediction performance.
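As a concrete illustration of the model class discussed above, the following is a small numpy sketch of a generic (RESCAL-style) bilinear scoring function; the dimensions, entity and relation counts, and names are illustrative assumptions. Constrained variants, such as a diagonal relation matrix, correspond to the special cases the paper analyzes.

import numpy as np

d = 8                                    # embedding dimension (assumed)
rng = np.random.default_rng(0)
E = rng.normal(size=(5, d))              # entity embeddings (5 entities)
R = rng.normal(size=(3, d, d))           # one d x d matrix per relation

def score(subj: int, rel: int, obj: int) -> float:
    # Bilinear form e_s^T W_r e_o; restricting W_r (e.g. to a diagonal
    # matrix) yields the constrained models referred to in the abstract.
    return float(E[subj] @ R[rel] @ E[obj])

print(score(0, 1, 2))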
Cholesterol metabolism in patients with hemodialysis in the presence or absence of coronary artery disease.
BACKGROUND Little is known about the interrelationship between the lipid profile, cholesterol metabolism, and coronary risk factors in patients with hemodialysis (HD) in the presence or absence of coronary artery disease (CAD). METHODS AND RESULTS Ninety-five patients with HD were selected (HD group). Fifty-eight age-, gender-, and body mass index (BMI)-matched patients who had at least 1 cardiovascular risk factor were selected as a non-HD group. Total cholesterol (TC), triglyceride, high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), non-HDL-C, and the ratio of LDL-C to HDL-C (L/H) in the HD group were significantly lower than those in the non-HD group. All markers of cholesterol absorption (campesterol/TC, sitosterol/TC, and cholestanol/TC) and the ratio of campesterol to lathosterol in the HD group were significantly higher. In addition, in the HD group, L/H was negatively correlated with lathosterol/TC, campesterol/TC, sitosterol/TC, and cholestanol/TC. Finally, CAD was significantly associated with lathosterol/TC (P=0.028), which was positively associated with BMI in the HD group, whereas CAD was significantly associated only with hypertension (P=0.020) in the non-HD group. CONCLUSIONS HD patients showed lower cholesterol concentrations than non-HD patients, and, as compensation, their cholesterol absorption might be accelerated. However, higher cholesterol synthesis, which was correlated with higher BMI, might be an independent predictor for the presence of CAD in HD patients.
DESIGN ANALYSIS OF HIGH GAIN WIDEBAND L-PROBE FED MICROSTRIP PATCH ANTENNA
A new high-gain, wideband, L-probe-fed, inverted EE-H shaped slotted (LEE-H) microstrip patch antenna is presented in this paper. The design combines contemporary techniques: L-probe feeding, an inverted patch structure with an air-filled dielectric, and an EE-H shaped patch. The integration of these techniques yields a new patch antenna with a low profile as well as useful operational features such as broad bandwidth and high gain. The measured results show satisfactory performance, with an achievable impedance bandwidth of 21.15% at 10 dB return loss (VSWR ≤ 2) and a maximum gain of 9.5 dBi. The antenna exhibits a stable radiation pattern over the entire operating band.
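For readers unfamiliar with the convention relating the two matching figures quoted above, a 10 dB return loss corresponds to a reflection coefficient magnitude of

|\Gamma| = 10^{-\mathrm{RL}/20} = 10^{-0.5} \approx 0.316,
\qquad
\mathrm{VSWR} = \frac{1 + |\Gamma|}{1 - |\Gamma|} \approx 1.92 \le 2,

so the 10 dB return-loss bandwidth and the VSWR ≤ 2 bandwidth describe essentially the same matching criterion.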
Theory and Experiments on Vector Quantized Autoencoders
Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning and of learning abstractions that transfer more readily to new tasks. There has been a surge of interest in discrete latent variable models; however, despite several recent improvements, training such models remains challenging and their performance has mostly failed to match that of their continuous counterparts. Recent work on vector quantized autoencoders (VQ-VAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete bottleneck with EM helps us achieve better image generation results on CIFAR-10 and, together with knowledge distillation, allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference.
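The following is a rough numpy sketch of the discrete bottleneck and the EM-style (k-means-like) codebook update alluded to above; the shapes, the single hard-assignment step, and the omission of the encoder and decoder networks are simplifications for illustration, not the paper's full training procedure.

import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))        # 16 discrete codes of dimension 4
z_e = rng.normal(size=(128, 4))            # encoder outputs for one batch

# E-step: assign each encoder output to its nearest codebook vector.
dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
assign = dists.argmin(axis=1)
z_q = codebook[assign]                     # quantized latents fed to the decoder

# M-step: move each used code to the mean of the vectors assigned to it.
for k in np.unique(assign):
    codebook[k] = z_e[assign == k].mean(axis=0)

print(z_q.shape, len(np.unique(assign)))   # (128, 4) and number of codes in use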
Large-scale analytics of dynamics of choice among discrete alternatives
This talk will discuss the theory of discrete choice with a particular focus on aspects that are of interest to practitioners of large-scale data mining and analysis. We'll look at some example types of choice problems, including geographic choice as in restaurant selection, repeated sequential choice as in music listening, and the induction of nested models of choice.
Algorithms for multi-armed bandit problems
The stochastic multi-armed bandit problem is an important model for studying the exploration-exploitation tradeoff in reinforcement learning. Although many algorithms for the problem are well understood theoretically, empirical confirmation of their effectiveness is generally scarce. This paper presents a thorough empirical study of the most popular multi-armed bandit algorithms. Three important observations can be made from our results. Firstly, simple heuristics such as ε-greedy and Boltzmann exploration outperform theoretically sound algorithms in most settings by a significant margin. Secondly, the performance of most algorithms varies dramatically with the parameters of the bandit problem. Our study identifies for each algorithm the settings where it performs well and the settings where it performs poorly. These properties are not described by current theory, even though they can be exploited in practice in the design of heuristics. Thirdly, the algorithms’ performance relative to each other is affected only by the number of bandit arms and the variance of the rewards. This finding may guide the design of subsequent empirical evaluations. In the second part of the paper, we turn our attention to an important area of application of bandit algorithms: clinical trials. Although the design of clinical trials has been one of the principal practical problems motivating research on multi-armed bandits, bandit algorithms have never been evaluated as potential treatment allocation strategies. Using data from a real study, we simulate the outcome that a 2001-2002 clinical trial would have had if bandit algorithms had been used to allocate patients to treatments. We find that an adaptive trial would have successfully treated at least 50% more patients, while significantly reducing the number of adverse effects and increasing patient retention. At the end of the trial, the best treatment could still have been identified with a high level of statistical confidence. Our findings demonstrate that bandit algorithms are attractive alternatives to current adaptive treatment allocation strategies.
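As an illustration of the two heuristics named in the abstract, the following is a minimal sketch of ε-greedy and Boltzmann (softmax) exploration on a three-armed Bernoulli bandit; the arm probabilities, ε, temperature, and horizon are arbitrary assumptions, not values from the study.

import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.2, 0.5, 0.7])         # unknown arm reward probabilities

def run(select, steps=5000):
    counts = np.zeros(3)
    values = np.zeros(3)                   # empirical mean reward per arm
    total = 0.0
    for _ in range(steps):
        a = select(values)
        r = float(rng.random() < true_p[a])
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]   # incremental mean update
        total += r
    return total / steps

def eps_greedy(q, eps=0.1):
    # Explore uniformly with probability eps, otherwise exploit the best arm.
    return int(rng.integers(3)) if rng.random() < eps else int(np.argmax(q))

def boltzmann(q, tau=0.1):
    # Sample arms with probability proportional to exp(value / temperature).
    p = np.exp(q / tau)
    return int(rng.choice(3, p=p / p.sum()))

print("epsilon-greedy average reward:", run(eps_greedy))
print("Boltzmann average reward:     ", run(boltzmann))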
Next-generation Intrusion Detection Expert System (NIDES)A Summary
What is NIDES? NIDES is an intrusion detection system that performs real-time monitoring of user activity. It carries out two types of analysis, statistical and rule-based. The statistical analysis maintains a historical profile for each user and raises an alarm when observed behavior differs from that user's established patterns of use.
Measuring KMS success: A respecification of the DeLone and McLean's model
We proposed and empirically assessed a KMS success model. The model was derived through an analysis of current knowledge management practice and a review of the IS success literature. Five variables (system quality, knowledge or information quality, perceived KMS benefits, user satisfaction, and system use) were used as dependent variables in evaluating KMS success, and their interrelationships were suggested and empirically tested. The results provide an expanded understanding of the factors that measure KMS success, and implications of this work are discussed.