Compact Root Bilinear CNNs for Content-Based Image Retrieval
Convolutional Neural Networks (CNNs) have recently demonstrated superior performance in computer vision applications, including image retrieval. This paper introduces a bilinear CNN-based model for the first time in the context of Content-Based Image Retrieval (CBIR). The proposed architecture consists of two feature extractors based on a pre-trained deep CNN model fine-tuned for the image retrieval task, yielding a Compact Root Bilinear CNN (CRB-CNN) architecture. Image features are extracted directly from the activations of convolutional layers and then pooled over image locations. Additionally, the size of the bilinear features is greatly reduced to a compact but highly discriminative image representation using kernel-based low-dimensional projection and pooling, which fundamentally improves retrieval performance in terms of search speed and memory size. End-to-end training is applied via back-propagation to learn the parameters of the final CRB-CNN. Experimental results reported on the standard Holidays image dataset show the efficiency of the architecture at extracting and learning even complex features for CBIR tasks. Specifically, using a 64-dimensional vector, it achieves 95.13% mAP and outperforms the best results of state-of-the-art approaches.
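The abstract does not specify which kernel-based projection is used; the following minimal NumPy sketch shows one standard instance of compact bilinear pooling, the Tensor Sketch projection, followed by the signed square root ("root") and L2 normalisation steps. The feature dimensions and sketch size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_sketch_params(c, d):
    # random hash h: [0, c) -> [0, d), and random signs s in {-1, +1}
    h = rng.integers(0, d, size=c)
    s = rng.choice([-1.0, 1.0], size=c)
    return h, s

def count_sketch(x, h, s, d):
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

def compact_bilinear(x1, x2, params1, params2, d):
    # Tensor Sketch: the FFT-domain product of two count sketches
    # approximates the count sketch of the outer product x1 x2^T.
    y1 = np.fft.rfft(count_sketch(x1, *params1, d))
    y2 = np.fft.rfft(count_sketch(x2, *params2, d))
    return np.fft.irfft(y1 * y2, n=d)

c, d = 512, 64            # channel dim of conv features, compact output dim
p1, p2 = count_sketch_params(c, d), count_sketch_params(c, d)

# stand-in conv activations from the two streams at 14x14 spatial locations
feats1 = rng.standard_normal((14 * 14, c))
feats2 = rng.standard_normal((14 * 14, c))

# sum-pool the compact bilinear features over image locations
z = sum(compact_bilinear(a, b, p1, p2, d) for a, b in zip(feats1, feats2))

# "root" step: signed square root, then L2 normalisation
z = np.sign(z) * np.sqrt(np.abs(z))
z = z / np.linalg.norm(z)
```

Note that d=64 matches the 64-dimensional retrieval vector the abstract reports, but the pairing is an assumption made for the example.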
A framework to improve communication during the requirements elicitation process in GSD projects
Achieving a shared understanding of requirements is difficult in any situation, and even more so in global software development projects. In such environments, people must deal not only with the lack of face-to-face communication, but also with other issues such as time differences, cultural diversity and a large amount of information originating from different sources throughout the world. Obtaining the right requirements therefore implies extra effort. To minimize such problems, we propose a framework that focuses on analyzing the factors that may be problematic in global software development and that suggests a set of strategies to improve the requirements elicitation process in such environments. In this paper, we describe the different phases of our framework and present the results of an experiment that tests part of this framework. The results indicate that applying some of the strategies proposed in the framework seems to positively affect the stakeholders’ satisfaction with regard to communication. Moreover, the quality of the written software requirements specifications also seems to be better when those strategies are used.
Charagram: Embedding Words and Sentences via Character n-grams
We present CHARAGRAM embeddings, a simple approach for learning character-based compositional models to embed textual sequences. A word or sentence is represented using a character n-gram count vector, followed by a single nonlinear transformation to yield a low-dimensional embedding. We use three tasks for evaluation: word similarity, sentence similarity, and part-of-speech tagging. We demonstrate that CHARAGRAM embeddings outperform more complex architectures based on character-level recurrent and convolutional neural networks, achieving new state-of-the-art performance on several similarity tasks.
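A minimal sketch of the mechanism as described (n-gram count vector, then one nonlinearity). Two assumptions are made for self-containedness: hashed n-gram indices stand in for the paper's fixed n-gram vocabulary, and the weights are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_IN, DIM_OUT = 2048, 300   # hashed n-gram vocabulary size, embedding size

# model parameters (learned in the paper; random stand-ins here)
W = rng.standard_normal((DIM_OUT, DIM_IN)) * 0.01
b = np.zeros(DIM_OUT)

def char_ngrams(text, n_max=4):
    s = "#" + text + "#"              # boundary markers
    for n in range(2, n_max + 1):
        for i in range(len(s) - n + 1):
            yield s[i:i + n]

def charagram_embed(text):
    # character n-gram count vector (hashed), then a single nonlinearity
    x = np.zeros(DIM_IN)
    for g in char_ngrams(text):
        x[hash(g) % DIM_IN] += 1.0    # hash() varies across runs; fine for a demo
    return np.tanh(W @ x + b)

e1, e2 = charagram_embed("embedding"), charagram_embed("embeddings")
cos = e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2))
print(f"cosine similarity: {cos:.3f}")   # high: shared character n-grams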
Ultra-Low Power Radiation Hardened by Design Memory Circuits
A 3218-bit ultra-low power radiation-hardened by design (RHBD) register file is fabricated on a 130-nm bulk CMOS technology. Register file readout circuitry allows functionality down to . Dual interlocked cell (DICE) storage provides SEU immunity above in accelerated heavy ion testing. This memory is compared to a larger one using identical ultra-low-voltage circuit design techniques, but un-hardened, i.e., with conventional latch storage and using only two-edge transistor layout and no guard rings. The un-hardened ultra-low-voltage memory exhibits 100× lower leakage post-irradiation to 500 krad(Si), when irradiated and measured at , than when irradiated and measured with . Hence, for ultra-low-power, ultra-low-voltage circuits, TID hardening techniques may be unnecessary. Read energy dissipated by the RHBD memory is 10.3 fJ per bit per operation when operated at 320 mV. The maximum operating frequency is 5 MHz at the same supply voltage.
ACE: abstracting, characterizing and exploiting peaks and valleys in datacenter power consumption
Peak power management of datacenters has tremendous cost implications. While numerous mechanisms have been proposed to cap power consumption, real datacenter power consumption data is scarce. To address this gap, we collect power demands at multiple spatial and fine-grained temporal resolutions from the load of Microsoft's geo-distributed datacenters over 6 months. We conduct aggregate analysis of this data to study its statistical properties. With workload characterization a key ingredient for systems design and evaluation, we note the importance of better abstractions for capturing power demands, in the form of peaks and valleys. We identify and characterize attributes for peaks and valleys, and important correlations across these attributes that can influence the choice and effectiveness of different power capping techniques. Given the wide scope for exploiting such characteristics in power provisioning and optimization, we illustrate their benefits with two specific case studies.
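As an illustration of the kind of peak/valley abstraction the abstract describes, here is a hedged sketch that extracts simple peak and valley attributes (height, duration) from a power trace. The trace, thresholds, and attributes are invented for the example and are not the paper's data or definitions.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)

# hypothetical power trace: one sample per minute over a day (kW)
t = np.arange(24 * 60)
power = 800 + 120 * np.sin(2 * np.pi * t / (24 * 60)) + rng.normal(0, 15, t.size)

cap = np.percentile(power, 90)        # example capping threshold

# peaks: excursions above the threshold; 'width' gives their duration
peaks, props = find_peaks(power, height=cap, width=1)
# valleys: peaks of the negated signal, below the 10th percentile
valleys, _ = find_peaks(-power, height=-np.percentile(power, 10), width=1)

print(f"{len(peaks)} peaks, {len(valleys)} valleys")
for i, p in enumerate(peaks[:5]):
    print(f"peak at t={p} min: height={props['peak_heights'][i]:.0f} kW, "
          f"duration~{props['widths'][i]:.0f} min")
```

Correlating such attributes (e.g., peak height against duration, or inter-peak gaps) is the kind of analysis the paper uses to inform power capping choices.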
Emotional persistence in online chatting communities
How do users behave in online chatrooms, where they instantaneously read and write posts? We analyzed about 2.5 million posts covering various topics in Internet relay channels, and found that user activity patterns follow known power-law and stretched exponential distributions, indicating that online chat activity is not different from other forms of communication. Analysing the emotional expressions (positive, negative, neutral) of users, we revealed a remarkable persistence both for individual users and channels. That is, despite their anonymity, users tend to follow social norms in repeated interactions in online chats, which results in a specific emotional "tone" of the channels. We provide an agent-based model of emotional interaction, which qualitatively recovers both the activity patterns in chatrooms and the emotional persistence of users and channels. While our assumptions about agents' emotional expressions are rooted in psychology, the model allows us to test different hypotheses regarding their emotional impact in online communication.
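The abstract does not give the model's equations; the toy simulation below is not the authors' model, only an illustration of how coupling an agent's internal emotional state to the recent channel "tone" can produce persistent expressed emotion. Every parameter and the update rule are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

N, STEPS = 50, 2000
decay, coupling, noise = 0.05, 0.3, 0.1
valence = rng.normal(0, 0.1, N)       # each agent's internal emotional state
posts = []

for _ in range(STEPS):
    i = rng.integers(N)                            # a random agent posts
    field = np.mean(posts[-20:]) if posts else 0.0 # recent channel "tone"
    # relaxation toward neutral + influence of the channel + noise
    valence[i] += -decay * valence[i] + coupling * field + rng.normal(0, noise)
    posts.append(np.sign(valence[i]))              # expressed emotion: +1/0/-1

posts = np.array(posts)
# persistence: probability that consecutive posts share the same sign
same = np.mean(posts[1:] == posts[:-1])
print(f"channel tone: {posts.mean():+.2f}, consecutive-sign persistence: {same:.2f}")
```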
Dynamics of cytotoxic T cell subsets during immunotherapy predicts outcome in acute myeloid leukemia
Preventing relapse after chemotherapy remains a challenge in acute myeloid leukemia (AML). Eighty-four non-transplanted AML patients in first complete remission received relapse-preventive immunotherapy with histamine dihydrochloride and low-dose interleukin-2 in an international phase IV trial (ClinicalTrials.gov; NCT01347996). Blood samples were drawn during cycles of immunotherapy and analyzed for CD8+ (cytotoxic) T cell phenotypes. During the first cycle of therapy, a redistribution of cytotoxic T cells was observed, comprising a reduction of T effector memory cells and a concomitant increase of T effector cells. The dynamics of T cell subtypes during immunotherapy prognosticated relapse and survival, in particular among older patients, and remained significantly predictive of clinical outcome after correction for potential confounders. The presence of CD8+ T cells with specificity for leukemia-associated antigens identified patients with low relapse risk. Our results point to novel aspects of T cell-mediated immunosurveillance in AML and provide conceivable biomarkers for relapse-preventive immunotherapy.
Antitumor effect of intratumoral administration of dendritic cell combination with vincristine chemotherapy in a murine fibrosarcoma model.
A new antitumor therapeutic strategy utilizing the combined effect of chemotherapy and dendritic cell (DC)-based immunotherapy was designed, and the effect on tumor growth of intratumoral injections of unpulsed, immature DCs after in vivo pretreatment with vincristine was evaluated in a murine fibrosarcoma tumor model. Vincristine exerted a much more potent apoptosis/necrosis-inducing effect on MCA-102 tumor cells than on DCs, both in vitro and in vivo. Moreover, CD11c, CD40, CD80 and CD86 molecules on DCs were not downregulated after treatment with vincristine either in vitro or in vivo. Tumor growth regressed significantly in the group that received combined vincristine chemotherapy with intratumoral administration of DCs, in contrast to the untreated group, the group treated with DCs alone, and the group treated with vincristine alone. In particular, upregulated expression of CD40, CD80 and CD86 molecules on DCs was found in the combination treatment group. Furthermore, the number of CD4+ and CD8+ T cells and the staining intensity of their CD4 and CD8 surface molecules also increased after the combination treatment. Therefore, our results indicate the feasibility of this combination of vincristine chemotherapy and DC-based immunotherapy as an efficient antitumor strategy for the treatment of fibrosarcoma.
Home blood pressure and cardiovascular outcomes in patients during antihypertensive therapy: primary results of HONEST, a large-scale prospective, real-world observational study.
This study aimed to investigate the relationship between on-treatment morning home blood pressure (HBP) and incidence of cardiovascular events using data from the Home Blood Pressure Measurement With Olmesartan Naive Patients to Establish Standard Target Blood Pressure (HONEST) study, a prospective observational study of 21 591 outpatients with essential hypertension (mean age, 64.9 years; women, 50.6%) enrolled between 2009 and 2010 at clinics and hospitals in Japan. They received olmesartan-based treatment throughout. The primary end point was major cardiovascular events. After a mean follow-up period of 2.02 years, cardiovascular events occurred in 280 patients (incidence, 6.46/1000 patient-years). The risk for the primary end point was significantly higher in patients with on-treatment morning HBP ≥145 to <155 mm Hg (hazard ratio [HR], 1.83; 95% confidence interval [CI], 1.12-2.99) and ≥155 mm Hg (HR, 5.03; 95% CI, 3.05-8.31) than <125 mm Hg and with on-treatment clinic blood pressure ≥150 to <160 mm Hg (HR, 1.69; 95% CI, 1.10-2.60) and ≥160 mm Hg (HR, 4.38; 95% CI, 2.84-6.75) than <130 mm Hg. Morning HBP associated with minimum risk was 124 mm Hg by spline regression analysis. Cardiovascular risk was increased in patients with morning HBP ≥145 mm Hg and clinic blood pressure <130 mm Hg (HR, 2.47; 95% CI, 1.20-5.08) compared with morning HBP <125 mm Hg and clinic blood pressure <130 mm Hg. In conclusion, it is essential to control morning HBP to <145 mm Hg, even in patients with controlled clinic blood pressure. CLINICAL TRIAL REGISTRATION URL http://www.umin.ac.jp/ctr/index.htm. UMIN Clinical Trials Registry, trial No. UMIN000002567.
A static analyzer for finding dynamic programming errors
Abstract interpretation [11,12] is a form of program analysis that maps programs into more abstract domains. This makes analysis more tractable and potentially useful for checking. The technique requires safety and completeness, however; analysis must be correct for all possible inputs. It also has difficulty in practice with large programs. In contrast, error checking, to be of practical value, must be able to handle large programs. Furthermore, error messages need not always be correct as long as the number and type of spurious ones are below usability thresholds. In contrast to the above techniques, the debugging tool Purify [13] and similar runtime memory debuggers detect a broad range of errors and require no extra programmer effort to use. They are, however, debuggers, operating on heavily instrumented executables (see, for example, [14]) and requiring test cases, which impose serious limitations. Thus, the goal of the research reported here was to develop a source code analyzer that could find Purify-like errors with Purify's ease of use, but without needing test cases. This goal led to a few specific requirements. • Real-world programs written in C and C++ should be checked effectively. Analysis must therefore handle such difficulties as pointers, arrays, aliasing, structs and unions, bit-field operations, global and static variables, loops, gotos, third-party libraries, recursive and mutually recursive functions, pointer arithmetic, arbitrary casting (including between pointer and integer types), and overloaded operators and templates (for C++). • Information should be derived from the program text rather than acquired through user annotations. This is possible because the semantics of a language imply certain consistency rules, and violations of these rules can be identified as defects. For example, the semantics of local variables allow for the detection of defects such as using uninitialized memory. • Analysis should be limited to achievable paths; that is, sequences of program execution which can actually occur in practice. This requires detailed tracking of actual values, not just performing data- and control-flow analysis. • The information produced from the analysis should be enough to allow a user to characterize the underlying defects easily. This is especially important, and hard to achieve, with large programs. In response to these goals, a new method of analysis was developed, based on simulating the execution of individual functions. The method can be summarized in a few key concepts. • Simulation specifically consists of sequentially tracing distinct execution paths through the function being analyzed, and simulating the action of each operator and function call on the path on an underlying virtual machine. By tracking the state of memory during path execution, and applying the consistency rules of the language to each operation, inconsistencies can be detected and reported. In addition, by examining the current state of memory whenever a conditional is encountered, the analysis can be restricted to achievable paths. Because of the detailed tracking of paths and values, precise information is available to help the user understand the situation in which the defect manifests itself. • The behavior of a function is described as a set of conditionals, consistency rules and expression evaluations. This summary of the behavior is called a model of the function. Whenever a function call is encountered during path execution, the model for that function is used to determine which operations to apply. • The information produced while simulating a function is sufficient to generate a model for that function automatically. • To apply these techniques to an entire program, or subset of a program, analysis begins with the leaf functions of the call graph and proceeds bottom-up to the root. As each function in turn is simulated, defects are identified and reported, and the model for that function is available for subsequent simulation of its callers. • This bottom-up approach uses a function's implementation to generate constraints on the callers of that function. This is particularly valuable in situations where the text of the complete program is not available, either because the program is only partially implemented, or because the code under analysis is designed as a component that may fit into many different programs. An error detection tool for C and C++, called PREfix, was built based on these techniques. It has been used on several large commercial programs. The remainder of this paper discusses in detail the operation of PREfix and presents some experience with it.
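To make the path-simulation idea concrete, here is a toy sketch (not PREfix itself) over a miniature language: it enumerates execution paths, tracks memory state, applies one consistency rule (no reads of uninitialized variables), and restricts attention to achievable paths whenever a branch condition's value is known.

```python
# Toy illustration of path simulation: ops are tuples in a tiny language.
UNINIT = object()   # sentinel for declared-but-uninitialized storage

def simulate(ops, env=None, path=()):
    env = dict(env or {})
    for k, op in enumerate(ops):
        kind = op[0]
        if kind == "decl":                 # ("decl", "x")
            env[op[1]] = UNINIT
        elif kind == "assign":             # ("assign", "x", constant)
            env[op[1]] = op[2]
        elif kind == "use":                # ("use", "x")
            if env.get(op[1]) is UNINIT:   # consistency rule violated
                print(f"defect: use of uninitialized '{op[1]}' on path {path}")
        elif kind == "branch":             # ("branch", "x", then_ops, else_ops)
            v = env.get(op[1])
            rest = ops[k + 1:]
            if v is UNINIT or v is None:   # value unknown: follow both arms
                simulate(op[2] + rest, env, path + ("T",))
                simulate(op[3] + rest, env, path + ("F",))
            else:                          # value known: only the achievable arm
                arm = op[2] if v else op[3]
                simulate(arm + rest, env, path + ("T" if v else "F",))
            return

# if (flag) x = 1;  use(x);   -- 'flag' is an unknown parameter, so both
# arms are explored; the defect is reported only on the flag-false path.
body = [("decl", "x"),
        ("branch", "flag", [("assign", "x", 1)], []),
        ("use", "x")]
simulate(body)
```

Adding ("assign", "flag", 0) before the branch makes the condition's value known, so only the false arm is simulated, which is the achievable-path restriction described above.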
Living Together: Mind and Machine Intelligence
In this paper we consider the nature of the machine intelligences we have created in the context of our human intelligence. We suggest that the fundamental difference between human and machine intelligence comes down to embodiment factors. We define embodiment factors as the ratio between an entity's ability to communicate information versus its ability to compute information. We speculate on the role of embodiment factors in driving our own intelligence and consciousness. We briefly review dual-process models of cognition and cast machine intelligence within that framework, characterising it as a dominant System Zero, which can drive behaviour by interfacing with us subconsciously. Driven by concerns about the consequences of such a system, we suggest prophylactic courses of action that could be considered. Our main conclusion is that it is not sentient intelligence we should fear but non-sentient intelligence.
Comparison of a 1450-nm diode laser and a 1320-nm Nd:YAG laser in the treatment of atrophic facial scars: a prospective clinical and histologic study.
BACKGROUND Atrophic scar revision techniques, although numerous, have been hampered by inadequate clinical responses and prolonged postoperative recovery periods. Nonablative laser treatment has been shown to effect significant dermal collagen remodeling with minimal posttreatment sequelae. Although many studies have been published regarding the effectiveness of these nonablative lasers on rhytides, there are limited data demonstrating their specific effects on atrophic scars. OBJECTIVE To evaluate and compare the efficacy and safety of long-pulsed 1320-nm Nd:YAG and 1450-nm diode lasers in the treatment of atrophic facial scarring. METHODS A series of 20 patients (skin phototypes I-V) with mild to moderate atrophic facial acne scars randomly received three successive monthly treatments with a long-pulsed 1320-nm Nd:YAG laser on one facial half and a long-pulsed 1450-nm diode laser on the contralateral facial half. Patients were evaluated using digital photography and three-dimensional in vivo microtopography measurements at each treatment visit and at 1, 3, 6, and 12 months postoperatively. Histologic evaluations of cutaneous biopsies obtained before treatment, immediately after the first treatment, and at 1, 3, 6, and 12 months after the third treatment were performed. Clinical assessment scores were determined at each treatment session and follow-up visit. Patient satisfaction surveys were obtained at the end of the study. RESULTS Mild to moderate clinical improvement was observed after the series of three treatments in the majority of patients studied. Patient satisfaction scores and in vivo microtopography measurements paralleled the photographic and histopathologic changes seen. Side effects of treatment were limited to mild transient erythema, edema, and hyperpigmentation. No scarring or adverse textural changes resulted from the use of either laser system. CONCLUSIONS Nonablative long-pulsed 1320-nm Nd:YAG and 1450-nm diode lasers each offer clinical improvement for patients with atrophic scarring without significant side effects or complications. The 1450-nm diode laser showed greater clinical scar response at the parameters studied. The use of nonablative laser systems is a good treatment alternative for patients with atrophic scarring who are unable or unwilling to endure the prolonged postoperative recovery process associated with ablative laser skin resurfacing procedures.
Design and simulation for a hydraulic actuated quadruped robot
This paper first describes the mechanical configuration of a quadruped robot. Each of the four legs consists of three rotary joints, and all joints are actuated by linear hydraulic servo cylinders. It then derives the forward and inverse kinematic equations for the four legs using D-H transformation matrices, as sketched below. Furthermore, it presents a composite foot trajectory composed of a cubic curve and a straight line, which greatly reduces the velocity and acceleration fluctuations of the torso along the forward and vertical directions. Finally, a dynamics co-simulation with MSC.ADAMS and MATLAB is presented. The co-simulation results provide important guidance for mechanism design and parameter selection for the linear hydraulic servo cylinders.
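The paper's actual D-H parameters are not given in the abstract; the sketch below shows forward kinematics for a generic 3-rotary-joint leg with assumed link lengths and a hypothetical D-H table, to illustrate the kind of derivation involved.

```python
import numpy as np

def dh(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# assumed D-H table for one 3-rotary-joint leg (link lengths in metres)
def foot_position(q1, q2, q3, l1=0.1, l2=0.3, l3=0.3):
    T = dh(q1, 0.0, l1, np.pi / 2) @ dh(q2, 0.0, l2, 0.0) @ dh(q3, 0.0, l3, 0.0)
    return T[:3, 3]          # foot position in the hip frame

print(foot_position(0.0, -np.pi / 4, np.pi / 2))
```

Inverse kinematics then solves for (q1, q2, q3) given a desired foot position along the composite trajectory.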
Application of wrapper approach and composite classifier to the stock trend prediction
Research on stock market prediction has become more popular in recent years. Numerous researchers have tried to predict immediate future stock prices or indices based on technical indices with various mathematical models and machine learning techniques such as artificial neural networks (ANN), support vector machines (SVM) and ARIMA models. Although some studies in the literature exhibit satisfactory prediction achievement when the average percentage error and root mean square error are used as performance metrics, the prediction accuracy of whether the stock market goes up or down is seldom analyzed. This paper employs a wrapper approach to select the optimal feature subset from an original feature set composed of 23 technical indices, and then uses a voting scheme that combines different classification algorithms to predict the trend in the Korea and Taiwan stock markets. Experimental results show that the wrapper approach can achieve better performance than commonly used feature filters, such as the χ²-Statistic, Information gain, ReliefF, Symmetrical uncertainty and CFS. Moreover, the proposed voting scheme outperforms single classifiers such as SVM, k-nearest neighbor, back-propagation neural network, decision tree, and logistic regression.
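A hedged scikit-learn sketch of the two ingredients named above, wrapper feature selection and a voting composite classifier, on synthetic stand-in data. The paper's exact search strategy, base-learner set (it also used a back-propagation neural network), and the 23 real technical indices are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# stand-in data: 23 "technical indices", binary up/down label
X, y = make_classification(n_samples=500, n_features=23, n_informative=8,
                           random_state=0)

# wrapper selection: candidate subsets are scored by the classifier itself
selector = SequentialFeatureSelector(SVC(), n_features_to_select=10, cv=5)
X_sel = selector.fit_transform(X, y)

# composite classifier: majority vote over heterogeneous base learners
vote = VotingClassifier([
    ("svm", SVC()),
    ("knn", KNeighborsClassifier()),
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("logit", LogisticRegression(max_iter=1000)),
], voting="hard")

print("voting accuracy:", cross_val_score(vote, X_sel, y, cv=5).mean())
```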
Diverse Concept-Level Features for Multi-Object Classification
We consider the problem of image classification with semantic features that are built from a set of base classifier outputs, each reflecting a visual concept. However, existing approaches consider visual concepts independently of each other, whereas they are often linked together. When those relations are considered, existing models rely strongly on image low-level features, yielding irrelevant relations when the low-level representation fails. In contrast, the approach we propose uses existing human knowledge, the application context itself, and the human categorization mechanism to reflect complex relations between concepts. By nesting this human knowledge and the application context in the concept detection and selection processes, our final semantic feature captures the most useful information for effective categorization. Thus, it yields a good representation even if some important concepts fail to be recognized. Experimental validation is conducted on three publicly available multi-class object classification benchmarks and leads to results that outperform comparable approaches.
Identification through Visual Clustering
Phishing is a form of online fraud, where criminals exploit the implicit trust placed in an organisation’s website branding to acquire sensitive information from the organisation’s customers. Major browser vendors race through hundreds of thousands of spam emails a day, assessing each URL to find Phishing websites to add to their blacklists. The quantity of suspect sites is so large that automated classification is essential. Creators of Phishing sites are aware of the popular text-based classifiers and implement a variety of countermeasures to disguise the site from software while remaining recognisable to humans. Consequently, researchers are exploring the use of computer vision to better recognise Phishing sites. We present the Distinctive Region Classifier, a novel approach to identifying the targets of potential Phishing attacks from screenshots. It is a multi-class classifier that combines computer vision techniques with density-based clustering to identify a page by elements that are indicative of a single class in our training set. Our system performs comparably with published approaches in this space. Additionally, we present a resilient distributed implementation of a visual classifier and blacklist. We find our cluster accelerates classification to a rate equal to commercial anti-Phishing measures.
Exploiting the use of recurrent neural networks for driver behavior profiling
Driver behavior affects traffic safety, fuel/energy consumption and gas emissions. The purpose of driver behavior profiling is to understand and positively influence driver behavior. Driver behavior profiling tasks usually involve an automated collection of driving data and the application of computer models to classify what characterizes the aggressiveness of drivers. Different sensors and classification methods have been employed for this task, although low-cost solutions, high performance and collaborative sensing remain open research questions. This paper investigates different Recurrent Neural Networks (RNNs), aiming to classify driving events using data collected by smartphone accelerometers. The results show that specific RNN configurations applied to accelerometer data achieve high accuracy, a step towards the development of safer transportation systems.
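A minimal PyTorch sketch of the kind of RNN classifier described above. The window length, sampling rate, number of event classes, and the choice of an LSTM are assumptions, since the abstract does not fix a specific configuration.

```python
import torch
import torch.nn as nn

class DrivingEventRNN(nn.Module):
    """Toy LSTM classifier over fixed-length accelerometer windows."""
    def __init__(self, n_features=3, hidden=64, n_classes=7):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, 3 axes)
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])           # classify from the last hidden state

model = DrivingEventRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# stand-in batch: 32 windows of 5 s at 25 Hz, 3-axis accelerometer
x = torch.randn(32, 125, 3)
y = torch.randint(0, 7, (32,))            # hypothetical event labels

for _ in range(5):                        # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"loss: {loss.item():.3f}")
```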
Gravitational Wave Sources May Be "Further" Than We Think
It has been argued that the energy content of time-varying spacetimes can be obtained by using the approximate Lie symmetries of the geodesic equations in that spacetime. When applied to cylindrical gravitational waves, this gives a self-damping of the waves. According to this proposal, the energy of the waves goes to zero asymptotically as the radial distance to the two-thirds power. If true, this would mean that the estimates of detector sensitivity for the various sources would have to be revised.
Adenosine versus regadenoson comparative evaluation in myocardial perfusion imaging: results of the ADVANCE phase 3 multicenter international trial.
BACKGROUND Earlier phase 1 and 2 studies have shown that regadenoson has desirable features as a stress agent for myocardial perfusion imaging. METHODS AND RESULTS This multicenter, double-blinded phase 3 trial involved 784 patients at 54 sites. Each patient underwent 2 sets of gated single photon emission computed tomography myocardial perfusion imaging studies: an initial qualifying study with adenosine and a subsequent randomized study with either regadenoson (2/3 of patients) or adenosine. Regadenoson was administered as a rapid bolus (<10 seconds) of 400 µg. The primary endpoint was to demonstrate noninferiority by showing that the difference in the strength of agreement in detecting reversible defects, based on blinded reading, between sequential adenosine-regadenoson images and adenosine-adenosine images, lay above a prespecified noninferiority margin. Other prospectively defined safety and tolerability comparisons and supporting analyses were also performed. The average agreement rate based on the median of 3 independent blinded readers was 0.63 ± 0.03 for regadenoson-adenosine and 0.64 ± 0.04 for adenosine-adenosine, a 1% absolute difference with the lower limit of the 95% confidence interval lying above the prespecified noninferiority margin. Side-by-side interpretation of regadenoson and adenosine images provided comparable results for detecting reversible defects. The peak increase in heart rate was greater with regadenoson than adenosine, but the blood pressure nadir was similar. A summed symptom score of flushing, chest pain, and dyspnea was less with regadenoson than adenosine (P = .013). CONCLUSIONS This phase 3 trial shows that regadenoson provides diagnostic information comparable to a standard adenosine infusion. There were no serious drug-related side effects, and regadenoson was better tolerated than adenosine.
Ultrasound guided posterior femoral cutaneous nerve block.
The posterior femoral cutaneous nerve (PFCN) is a branch of the sacral plexus. In suitable cases, it needs to be blocked as a complement for anesthesia or in surgeries requiring a tourniquet. With the emergence of ultrasound-guided regional anesthesia in practice and a better understanding of sonoanatomy, we consider the target-oriented block concept in anesthesia practice to include the PFCN block.
Diagnosing Error in Temporal Action Detectors
Despite the recent progress in video understanding and the continuous rate of improvement in temporal action localization throughout the years, it is still unclear how far (or close?) we are to solving the problem. To this end, we introduce a new diagnostic tool to analyze the performance of temporal action detectors in videos and compare different methods beyond a single scalar metric. We exemplify the use of our tool by analyzing the performance of the top rewarded entries in the latest ActivityNet action localization challenge. Our analysis shows that the most impactful areas to work on are: strategies to better handle temporal context around the instances, improving robustness with respect to the instance absolute and relative size, and strategies to reduce the localization errors. Moreover, our experimental analysis finds that the lack of agreement among annotators is not a major roadblock to attaining progress in the field. Our diagnostic tool is publicly available to keep fueling the minds of other researchers with additional insights about their algorithms.
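For intuition, here is a simplified sketch of diagnosing detections beyond a scalar metric using temporal IoU. The error taxonomy below is a reduced, assumed version for illustration, not the tool's full analysis.

```python
def t_iou(pred, gt):
    """Temporal IoU between [start, end] segments (seconds)."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def diagnose(pred, pred_label, gts, thr=0.5):
    """Label one detection: true positive or one of three error types."""
    best = max(gts, key=lambda g: t_iou(pred, g["segment"]))
    iou = t_iou(pred, best["segment"])
    if iou >= thr and pred_label == best["label"]:
        return "true positive"
    if 0.0 < iou < thr and pred_label == best["label"]:
        return "localization error"      # right class, imprecise boundaries
    if iou >= thr:
        return "classification error"    # good overlap, wrong class
    return "background error"            # little or no overlap with any instance

gts = [{"segment": (10.0, 20.0), "label": "high jump"}]
print(diagnose((15.0, 30.0), "high jump", gts))   # -> localization error
```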
Socioeconomic factors and breast cancer in black and white Americans
The incidence of breast cancer in the US is known to be higher among white than black women and among women of higher socioeconomic status (SES), but once a woman, either black or white, has the disease, she is more likely to have a recurrence and to die of breast cancer if she is of lower socioeconomic status. Explanations for these observed differences are varied and inconsistent, making it clear that these reported differentials are not sufficiently understood. In understanding breast cancer in a multicultural setting, delay in diagnosis, follow-up, and treatment are frequently the focus of attention. However, these factors do not sufficiently explain the observed differences between blacks and whites. A review of recent literature reveals an increasing focus on the role of SES in breast cancer etiology and progression; however, the confounding of SES with race/ethnicity (black vs. white) contributes to the insufficient understanding of the effect of these two factors. This report will focus on the interplay between race/ethnicity and SES and their relative effects upon analyses of survival from breast cancer. Findings are based on prospective clinical trial data. SES factors have been associated with most of the known or suspected risk factors for breast cancer incidence and progression. In addition to race/ethnicity, SES is also associated with diet, lifestyle factors, physical characteristics, and tumor characteristics. Without controlling for other risk factors, the ratios of risk for blacks with respect to whites for disease-free survival and overall survival were 1.30 (95% CI: 1.04–1.61) and 1.42 (95% CI: 1.15–1.76), respectively. However, after controlling for patient risk factors, such as the number of positive lymph nodes, tumor diameter, estrogen receptor status and socioeconomic factors, these differences decrease and are not statistically significant. Socioeconomic status is associated both with race/ethnicity and estrogen receptor status. A loglinear analysis demonstrates that the apparent association of race/ethnicity with estrogen receptor status is mediated by socioeconomic status. An implication of this finding is that environmental and lifestyle components rather than genetic factors associated with race may explain the observed differentials between black and white breast cancer patients. Knowledge of environmental factors associated with SES has the potential for providing important clues about the prevention and control of breast cancer.
The NK₁ receptor antagonist aprepitant does not alter the pharmacokinetics of high-dose melphalan chemotherapy in patients with multiple myeloma.
AIMS The objective of this investigation was to assess the effect of aprepitant on the pharmacokinetics of high-dose melphalan used as conditioning therapy before blood stem cell transplantation in multiple myeloma. METHODS Aprepitant (125 mg) or placebo was administered 1 h before melphalan therapy (1 h infusion of 100 mg m⁻²). Eleven plasma samples were obtained over 8 h and melphalan was quantified using an LC/MS/MS method. Standard pharmacokinetic parameters were calculated and nonparametric testing was applied to assess the differences between aprepitant and placebo treatment. RESULTS Twenty patients received placebo and 10 patients aprepitant treatment. There were no differences observed for Cmax at the end of melphalan infusion (placebo 3431 ± 608 ng ml⁻¹ vs. aprepitant 3269 ± 660 ng ml⁻¹). In addition, AUC and terminal elimination half-life were not changed by aprepitant. Total clearance of melphalan was 304 ± 58 ml min⁻¹ m⁻² (placebo), which was not influenced by aprepitant (288 ± 78 ml min⁻¹ m⁻²). CONCLUSIONS The administration of the NK₁ receptor antagonist aprepitant 1 h before a high-dose chemotherapy does not influence the exposure and the elimination of melphalan. Therefore, oral administration of 125 mg aprepitant 1 h before melphalan infusion does not alter the disposition of intravenously administered melphalan.
Cytogenetic studies of Hynobiidae (Urodela). XIV. Analysis of the chromosome of a Chinese salamander, Batrachuperus pinchonii (David)
The chromosome number of a Chinese salamander, Batrachuperus pinchonii, was re-examined. Adults and embryonic specimens had a diploid number of 66, with 33 bivalents during meiosis, in contrast to previously reported results. Furthermore, when C-banding analysis was performed on embryos, chromosomes with banding patterns homoeologous to those of Salamandrella keyserlingii and Hynobius species were found. It appears, therefore, that Batrachuperus, Salamandrella and Hynobius might be derived from a common ancestral species in eastern Asia.
Exploring Implicit Hierarchical Structures for Recommender Systems
Items in real-world recommender systems exhibit certain hierarchical structures. Similarly, user preferences also present hierarchical structures. Recent studies show that incorporating the explicit hierarchical structures of items or user preferences can improve the performance of recommender systems. However, explicit hierarchical structures are usually unavailable, especially those of user preferences. Thus, there is a gap between the importance of hierarchical structures and their availability. In this paper, we investigate the problem of exploring implicit hierarchical structures for recommender systems when they are not explicitly available. We propose a novel recommendation framework, HSR, to bridge the gap, which enables us to capture the implicit hierarchical structures of users and items simultaneously. Experimental results on two real-world datasets demonstrate the effectiveness of the proposed framework.
Prospective evaluation of quality of life effects in patients undergoing palliative radiotherapy for brain metastases
Recently published results of quality of life (QoL) studies indicated different outcomes of palliative radiotherapy for brain metastases. This prospective multi-center QoL study of patients with brain metastases was designed to investigate which QoL domains improve or worsen after palliative radiotherapy and which might provide prognostic information. From 01/2007 to 01/2009, n=151 patients with previously untreated brain metastases were recruited at 14 centers in Germany and Austria. Most patients (82%) received whole-brain radiotherapy. QoL was measured with the EORTC-QLQ-C15-PAL and brain module BN20 before the start of radiotherapy and after 3 months. At 3 months, 88/142 (62%) survived. Nine patients could not be followed up. 62 patients (70.5% of 3-month survivors) completed the second set of questionnaires. Three months after the start of radiotherapy, QoL deteriorated significantly in the areas of global QoL, physical function, fatigue, nausea, pain, appetite loss, hair loss, drowsiness, motor dysfunction, communication deficit and weakness of legs. Although the use of corticosteroids at 3 months could be reduced compared with pre-treatment (63% vs. 37%), the score for headaches remained stable. Initial QoL at the start of treatment was better in those alive than in those deceased at 3 months, significantly so for physical function, motor dysfunction and the symptom scales fatigue, pain, appetite loss and weakness of legs. In a multivariate model, lower Karnofsky performance score, higher age and higher pain ratings before radiotherapy were prognostic of 3-month survival. Moderate deterioration in several QoL domains was predominantly observed three months after the start of palliative radiotherapy for brain metastases. Future studies will need to address the individual subjective benefit or burden of such treatment. Baseline QoL scores before palliative radiotherapy for brain metastases may contain prognostic information.
A benchmark for comparison of dental radiography analysis algorithms
Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usage. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray images and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/).
Seismic-stratigraphic framework of the forearc basin off central Sumatra, Sunda Arc
New multichannel seismic reflection data provide information on the stratigraphic framework and geologic history of the forearc basin west of central Sumatra. We recognize six seismic-stratigraphic sequences that reflect the Cenozoic history and development of the outer continental shelf and forearc basin southeast of Nias Island. These sequences indicate several episodes of uplift of the subduction complex and filling of the forearc basin. Early in the development of this margin, Paleogene slope deposits prograded onto the adjacent basin floor. Onlapping this assemblage are two units interpreted as younger Paleogene(?) trough deposits. Uplift associated with rejuvenation of subduction in the late Oligocene led to erosion of the Sumatra shelf and formation of a regional unconformity. The early Miocene was a period of significant progradation. A Miocene limestone unit partly downlaps and partly onlaps the older Paleogene deposits. It is characterized by shallow shelf and oblique progradational facies passing into basin floor facies. A buried reef zone occurs near the shelf edge. The cutting of an erosional unconformity on the shelf and slope in late Miocene/early Pliocene time culminated this episode of deposition. In the late Pliocene, a large flexure developed at the western boundary of the basin, displacing the outer-arc ridge upward relative to the basin. Over 1 km of Pliocene to Recent sediment was deposited as a wedge in the deep western portion of the basin landward of the outer-arc ridge. These deposits are characterized by flat-lying, high-amplitude, continuous reflections that overstep the late Miocene unconformity. Up to 800 m of shallow-water limestone have been deposited on the shelf since mid-Pliocene time.
FASHION, FICTION, FUNCTION: MEDIATING WEARABLE DESIGN THROUGH FASHION FILM
Fashion technology is an expanding field, yet the question of how technology can be considered fashionable remains unexplored. According to fashion theories, mediation plays a fundamental role in transforming clothing items into fashionable garments. In this study, we explored how fashion films, as one of the most important fashion media in the industry today, could make wearable design concepts fashionable by merging aesthetics, experience and fiction. By synthesizing research in fashion studies and Human-Computer Interaction (HCI), we sketch out a framework for producing fashion film for wearables. We then describe our own process of making a fashion film for a fictional concept, and reflect on our process of using the framework. The contributions of the study include: 1) proposing fashion films as a means of mediating wearable design concepts; 2) advocating a balance between fashion, fiction and function in fashion technology mediation; and 3) foregrounding the discussion of design mediation as part of the design process broadly.
Improbotics: Exploring the Imitation Game Using Machine Intelligence in Improvised Theatre
Theatrical improvisation (impro or improv) is a demanding form of live, collaborative performance. Improv is a humorous and playful art form built on an open-ended narrative structure which simultaneously celebrates effort and failure. It is thus an ideal test bed for the development and deployment of interactive artificial intelligence (AI)-based conversational agents, or artificial improvisors. This case study introduces an improv show experiment featuring human actors and artificial improvisors. We have previously developed a deep-learning-based artificial improvisor, trained on movie subtitles, that can generate plausible, context-based lines of dialogue suitable for theatre (Mathewson and Mirowski 2017b). In this work, we have employed it to control what a subset of human actors say during an improv performance. We also give human-generated lines to a different subset of performers. All lines are provided to actors through headphones, and all performers wear headphones. This paper describes a Turing test, or imitation game, taking place in a theatre, with both the audience members and the performers left to guess who is a human and who is a machine. In order to test scientific hypotheses about the perception of humans versus machines, we collect anonymous feedback from volunteer performers and audience members. Our results suggest that rehearsal increases proficiency and the ability to control events in the performance. That said, consistency with real-world experience is limited by the interface and the mechanisms used to perform the show. We also show that the human-generated lines are shorter and more positive, and contain fewer difficult words but more grammar and spelling mistakes, than the lines generated by the artificial improvisor.
Top 10 algorithms in data mining
This paper presents the top 10 data mining algorithms identified by the IEEE International Conference on Data Mining (ICDM) in December 2006: C4.5, k-Means, SVM, Apriori, EM, PageRank, AdaBoost, kNN, Naive Bayes, and CART. These top 10 algorithms are among the most influential data mining algorithms in the research community. With each algorithm, we provide a description of the algorithm, discuss the impact of the algorithm, and review current and further research on the algorithm. These 10 algorithms cover classification, clustering, statistical learning, association analysis, and link mining, which are all among the most important topics in data mining research and development.
First-line FOLFIRI and bevacizumab in patients with advanced colorectal cancer prospectively stratified according to serum LDH: final results of the GISCAD (Italian Group for the Study of Digestive Tract Cancers) CENTRAL (ColorEctalavastiNTRiAlLdh) trial
Background: Previous findings suggested that bevacizumab might be able to improve response rate (RR) in colorectal cancer patients with high lactic dehydrogenase (LDH) basal levels. Methods: We conducted a phase II trial to prospectively ascertain whether bevacizumab in combination with FOLFIRI could have an improved clinical activity in patients with high LDH serum levels. The primary end point of the study was RR; secondary end points were median overall survival and median progression-free survival (mPFS). Results: A total of 81 patients were enrolled. No difference in terms of ORR (39% vs 31% for low vs high LDH level stratum, P=0.78) and mPFS (14.16 vs 10.29 months, HR: 1.07, 95% CI: 0.51–2.24, P=0.83) between the strata was observed, whereas overall survival (OS) was significantly longer for patients with low LDH (24.85 vs 15.14 months, HR: 4.08, 95% CI: 1.14–14.61, P=0.0004). In an exploratory analysis that was not pre-planned, using different cut-off ranges for LDH, we observed RR up to 70%, with no improvement in progression-free survival or OS. Conclusions: The CENTRAL trial failed to demonstrate that high LDH levels were related to a significantly improved RR in patients receiving first-line FOLFIRI and bevacizumab. The LDH serum levels should then no further be investigated as a predictive factor in this setting.
Strategi Komunikasi Membangun Kemandirian Pangan (Communication Strategy for Building Food Independence)
The role of agricultural development communication is important in building food self-sufficiency and diversification as the main basis of food independence and food security. Food independence will be achieved if its development comes from people's own initiative, as an awareness of the need to build a modern farm industry with the support of an effective and efficient communication strategy. Adoption of technological innovations by means of communication will boost productivity and product quality, decrease production losses, and increase the added value of production through farmer empowerment and participation approaches, and will strengthen farmers' institutions and competitiveness. To empower farmers, developing single-commodity agribusiness cooperation, such as for rice or maize, will facilitate the transfer of information on technology and farm management from a variety of sources to the farmers. Government policy to develop centers of agricultural information at production centers, as agribusiness development areas, is required to build food independence and food diversification based on local production with the support of an effective communication system. A communication information system based on cooperation and social capital, with stakeholder partnership approaches (government, businesses, universities, research and development institutions, social institutions, etc.), will accelerate the achievement of food independence in suburban areas.
Extension of Bernstein polynomials to infinite dimensional case
The purpose of this paper is to study some new concrete approximation processes for continuous vector-valued mappings defined on the infinite-dimensional cube or on a subset of a real Hilbert space. In both cases these operators are modelled on classical Bernstein polynomials and represent a possible extension to an infinite-dimensional setting. The same idea is generalized to obtain, from a given approximation process for functions defined on a real interval, a new approximation process for vector-valued mappings defined on subsets of a real Hilbert space.
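For reference, the classical operator that these infinite-dimensional constructions are modelled on is the Bernstein polynomial of a continuous function on [0,1]; the formula below is standard background, not a statement of the paper's new operators:

```latex
B_n(f)(x) \;=\; \sum_{k=0}^{n} f\!\left(\tfrac{k}{n}\right) \binom{n}{k}\, x^{k} (1-x)^{n-k},
\qquad x \in [0,1],
```

with B_n(f) converging to f uniformly on [0,1] for every continuous f. The paper's operators replace [0,1] by the infinite-dimensional cube or by subsets of a real Hilbert space.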
Cheilitis Glandularis of Both Lips: Successful Treatment with a Combination of an Intralesional Steroid Injection and Tacrolimus Ointment
Cheilitis glandularis (CG) is an inflammatory condition of unknown cause that predominantly affects the minor salivary glands of the lips. Although a diagnosis of CG is not difficult, its treatment is a challenge. This article highlights the clinical presentation of the disease together with a case of successful management of this disease using a combination of a steroid injection followed by a topical immunosuppressor.
Active specific immunotherapy for stage II and stage III human colon cancer: a randomised trial
BACKGROUND Colon cancer is curable by surgery, but cure rate depends on the extent of disease. We investigated whether adjuvant active specific immunotherapy (ASI) with an autologous tumour cell-BCG vaccine with surgical resection was more beneficial than resection alone in stage II and III colon cancer. METHODS In a prospective randomised trial, 254 patients with colon cancer were randomly assigned postoperative ASI or no adjuvant treatment. ASI was three weekly vaccinations starting 4 weeks after surgery, with a booster vaccination at 6 months with 10⁷ irradiated autologous tumour cells. The first vaccinations contained 10⁷ BCG organisms. We followed up patients for time to recurrence, and recurrence-free and overall survival. Analysis was by intention to treat. FINDINGS The 5.3 year median follow-up (range 8 months to 8 years 11 months) showed 44% (95% CI 7-66) risk reduction for recurrence in the recurrence-free period in all patients receiving ASI (p=0.023). Overall, there were 40 recurrences in the control group and 25 in the ASI group. Analysis by stage showed no significant benefit of ASI in stage III disease. The major impact of ASI was seen in patients with stage II disease, with a significantly longer recurrence-free period (p=0.011) and 61% (18-81) risk reduction for recurrences. Recurrence-free survival was significantly longer with ASI (42% risk reduction for recurrence or death [0-68], p=0.032) and there was a trend towards improved overall survival. INTERPRETATION ASI gave significant clinical benefit in surgically resected patients with stage II colon cancer. ASI has minimal adverse reactions and should be considered in the management of stage II colon cancer.
Word Sense Disambiguation for Free-text Indexing Using a Massive Semantic Network
Semantics-free, word-based information retrieval is thwarted by two complementary problems. First, search for relevant documents returns irrelevant items when all meanings of a search term are used, rather than just the meaning intended. This causes low precision. Second, relevant items are missed when they are indexed not under the actual search terms, but rather under related terms. This causes low recall. With semantics-free approaches there is generally no way to improve both precision and recall at the same time. Word sense disambiguation during document indexing should improve precision. We have investigated using the massive WordNet semantic network for disambiguation during indexing. With the unconstrained text of the SMART retrieval environment, we have had to derive our own content description from the input text, given only part-of-speech tagging of the input. We employ the notion of semantic distance between network nodes. Input text terms with multiple senses are disambiguated by finding the combination of senses from a set of contiguous terms which minimizes total pairwise distance between senses. Results so far have been encouraging. Improvement in disambiguation compared with chance is clear.
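A small sketch of the combination-of-senses idea using NLTK's WordNet interface. The distance function and brute-force search window are illustrative assumptions: the paper predates NLTK and uses its own semantic-distance measure over the network.

```python
# requires: pip install nltk; then nltk.download('wordnet')
from itertools import product
from nltk.corpus import wordnet as wn

def pairwise_distance(s1, s2):
    # path_similarity is in (0, 1]; turn it into a distance
    sim = s1.path_similarity(s2)
    return 1.0 / sim if sim else 10.0    # large distance when unrelated

def disambiguate(words, pos=wn.NOUN, max_senses=4):
    """Pick the sense combination minimizing total pairwise distance."""
    candidates = [wn.synsets(w, pos=pos)[:max_senses] for w in words]
    best, best_cost = None, float("inf")
    for combo in product(*candidates):   # brute force: fine for short windows
        cost = sum(pairwise_distance(a, b)
                   for i, a in enumerate(combo) for b in combo[i + 1:])
        if cost < best_cost:
            best, best_cost = combo, cost
    return best

for syn in disambiguate(["bank", "loan", "interest"]):
    print(syn.name(), "-", syn.definition())
```

The mutual context pulls "bank" toward its financial-institution sense, which is the intended effect of minimizing total pairwise distance.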
Tuberculosis and Indoor Biomass and Kerosene Use in Nepal: A Case–Control Study
BACKGROUND In Nepal, tuberculosis (TB) is a major problem. Worldwide, six previous epidemiologic studies have investigated whether indoor cooking with biomass fuel such as wood or agricultural wastes is associated with TB with inconsistent results. OBJECTIVES Using detailed information on potential confounders, we investigated the associations between TB and the use of biomass and kerosene fuels. METHODS A hospital-based case-control study was conducted in Pokhara, Nepal. Cases (n = 125) were women, 20-65 years old, with a confirmed diagnosis of TB. Age-matched controls (n = 250) were female patients without TB. Detailed exposure histories were collected with a standardized questionnaire. RESULTS Compared with using a clean-burning fuel stove (liquefied petroleum gas, biogas), the adjusted odds ratio (OR) for using a biomass-fuel stove was 1.21 [95% confidence interval (CI), 0.48-3.05], whereas use of a kerosene-fuel stove had an OR of 3.36 (95% CI, 1.01-11.22). The OR for use of biomass fuel for heating was 3.45 (95% CI, 1.44-8.27) and for use of kerosene lamps for lighting was 9.43 (95% CI, 1.45-61.32). CONCLUSIONS This study provides evidence that the use of indoor biomass fuel, particularly as a source of heating, is associated with TB in women. It also provides the first evidence that using kerosene stoves and wick lamps is associated with TB. These associations require confirmation in other studies. If using kerosene lamps is a risk factor for TB, it would provide strong justification for promoting clean lighting sources, such as solar lamps.
Multiagent Systems Engineering
This paper describes the Multiagent Systems Engineering (MaSE) methodology. MaSE is a general-purpose methodology for developing heterogeneous multiagent systems. MaSE uses a number of graphically based models to describe system goals, behaviors, agent types, and agent communication interfaces. MaSE also provides a way to specify an architecture-independent, detailed definition of the internal agent design. An example of applying the MaSE methodology is also presented.
Automatic generation of presentation slides for academic papers
Generally to get the slides from begin, it takes plenty of your time. These slides contains information concerning base paper objective from abstract details of system i.e. introduction and to boot approach utilized, writing audit from connected work section, alpha results and conclusions from base paper. The generated slides will be used as rough plan for any preparation. This helps presenters in making ready their formal slides in quicker manner. Some rough structure for slide shows from papers capable to save lots of the author abundant time once organizing shows. during this paper we tend to investigate totally different perspective for educational papers slide generation. to put in writing the slides from scratch takes plenty of your time of presenter. they typically contain many sections like abstract, introduction, connected work, projected methodology, experiments and conclusions. to keep up individuation in making ready slides this concept is crucial and distinctive. every section from the tutorial paper is known and is aligned to 1 or a lot of slides. each bullet purpose are mapped with the slide heading purpose. Out of the many sentences below that within that heading sentences importance is calculated thus on keep those because it is within the slides. Keywords— Abstracting methods, Integer Linear Programming, Support Vector Regression model, text mining. I.INTRODUCTION Presentation slides are the one in every of necessary approach of learning. during this approach of learning get the response from the listeners they'll be students, staff or client etc. Presenters produce their go along victimization the code tools like Microsoft Powerpurpose, Open workplace etc.All these code presenters ought to sort the content into the slide then it will be long task. during this projected methodology automatic slides are generated consistent with the sections within the paper i.e, titles in tutorial paper and corresponding relevant sentences from identical paper. It helps users in obtaining a rough structure of the tutorial paper. It continually have the same structure. they typically contain many sections like abstract, introduction, connected work, projected methodology, experiments and conclusions. though presentation slides will be written in numerous ways in which by totally different presenters, a presenter, particularly for a beginner, continually aligns slides consecutive with the paper sections once making ready the slides. every section is aligned to 1 or a lot of slides and one slide sometimes contains a title and a number of other sentences. These sentences is also enclosed in some bullet points. Our methodology tries to get draft slides of the standard sort mentioned higher than and helps individuals to arrange their final slides. Automatic slides generation for educational. papers could be a terribly difficult task. Current strategies typically extract objects like sentences from the paper to construct the slides. In distinction to the short outline extracted by a summarization system, the slides are needed to be rather more structured and far longer. Slides will be divided into an ordered sequence of components. Each part addresses a particular topic and these topics also are relevant to every different. typically speaking, automatic slide generation is far harder than summarization. Slides sometimes not solely have text components however additionally graph components like figures and tables. however our work focuses on the text components solely. 
In this study, we propose a novel system called PPSGen to generate well-structured presentation slides for academic papers. In our system, the importance of each sentence in a paper is learned using the Support Vector Regression (SVR) model, and the presentation slides for the paper are then generated using an integer linear programming (ILP) model to select and align key phrases and sentences. The rest of this paper is organized as follows. Related work is introduced in Section 2. We describe our method in detail in Section 3. We present the experimental results in Section 4 and conclude our work in Section 5. II. RELATED WORK Automatic slide generation for academic papers remains under-investigated today; few studies have directly addressed the problem.
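As a rough illustration of the two-stage pipeline described above (this is a minimal sketch, not the paper's implementation): an SVR model scores sentence importance, then an ILP selects a subset under a slide-length budget. The feature set, the word budget, and the helper names are all illustrative assumptions; key-phrase extraction and section alignment are omitted.

```python
import pulp
from sklearn.svm import SVR

def featurize(sentence, position, n_sentences):
    # Toy surface features: length, relative position, digit count.
    return [len(sentence.split()), position / n_sentences,
            sum(ch.isdigit() for ch in sentence)]

def train_importance_model(sentences, gold_scores):
    # gold_scores: assumed training signal (e.g., overlap with real slides).
    X = [featurize(s, i, len(sentences)) for i, s in enumerate(sentences)]
    return SVR(kernel="rbf").fit(X, gold_scores)

def select_sentences(sentences, scores, word_budget=120):
    # Maximize total importance subject to a word budget (0/1 ILP).
    prob = pulp.LpProblem("slide_content", pulp.LpMaximize)
    x = [pulp.LpVariable(f"s{i}", cat="Binary") for i in range(len(sentences))]
    prob += pulp.lpSum(scores[i] * x[i] for i in range(len(sentences)))
    prob += pulp.lpSum(len(s.split()) * x[i]
                       for i, s in enumerate(sentences)) <= word_budget
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [s for i, s in enumerate(sentences) if x[i].value() == 1]
```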
Cross-lingual Transfer for Unsupervised Dependency Parsing Without Parallel Data
Cross-lingual transfer has been shown to produce good results for dependency parsing of resource-poor languages. Although this avoids the need for a target language treebank, most approaches have still used large parallel corpora. However, parallel data is scarce for low-resource languages, and we report a new method that does not need parallel data. Our method learns syntactic word embeddings that generalise over the syntactic contexts of a bilingual vocabulary, and incorporates these into a neural network parser. We show empirical improvements over a baseline delexicalised parser on both the CoNLL and Universal Dependency Treebank datasets. We analyse the importance of the source languages, and show that combining multiple source languages leads to a substantial improvement.
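A minimal sketch of what "syntactic" word embeddings could look like in practice, under my own simplifying assumptions (not the paper's exact recipe): contexts are the POS tags of neighbouring words rather than the words themselves, so vectors generalise across languages sharing a tag inventory. The tagged corpus and the bilingual lexicon used to tie vocabularies together are assumed inputs.

```python
from gensim.models import Word2Vec

def syntactic_sentences(tagged_corpus):
    # tagged_corpus: list of [(word, pos), ...] sentences.
    # Interleave each word with its neighbours' POS tags so that the
    # word is predicted from syntactic context.
    for sent in tagged_corpus:
        tokens = []
        for i, (word, _) in enumerate(sent):
            left = sent[i - 1][1] if i > 0 else "<S>"
            right = sent[i + 1][1] if i + 1 < len(sent) else "</S>"
            tokens += [f"CTX_{left}", word, f"CTX_{right}"]
        yield tokens

corpus = [[("the", "DET"), ("dog", "NOUN"), ("barks", "VERB")]]
model = Word2Vec(list(syntactic_sentences(corpus)),
                 vector_size=50, window=1, min_count=1, epochs=50)
vec = model.wv["dog"]  # embedding shaped by syntactic context
```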
Control of soft pneumatic finger-like actuators for affective motion generation
This paper investigates the design and implementation of a finger-like robotic structure capable of reproducing human hand gestural movements performed by a multi-fingered, hand-like structure. In this work, we present a pneumatic circuit and a closed-loop controller for a finger-like soft pneumatic actuator. Experimental results demonstrate the performance of the pneumatic and control systems of the soft pneumatic actuator, and its ability to track human movement trajectories with affective content.
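To make the closed-loop idea concrete, here is a minimal sketch of pressure tracking for one soft actuator chamber: a PI controller drives a valve command so measured pressure follows a reference trajectory with affective content (here a slow "breathing" wave). The plant interface (`read_pressure`, `set_valve`), gains, and rates are hypothetical, not the paper's hardware or controller.

```python
import math

def pi_pressure_controller(read_pressure, set_valve, reference,
                           kp=0.8, ki=2.0, dt=0.01, steps=1000):
    # read_pressure() -> kPa; set_valve(duty) -> actuates the valve.
    integral = 0.0
    for k in range(steps):
        t = k * dt
        error = reference(t) - read_pressure()
        integral += error * dt
        set_valve(kp * error + ki * integral)  # PI valve command

# Example affective reference: gentle 0.2 Hz oscillation around 30 kPa.
breathing = lambda t: 30.0 + 5.0 * math.sin(2 * math.pi * 0.2 * t)
```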
Architecture and implementation for 3 D Neural Network based image compression
Image compression is one of the key image processing techniques in signal processing and communication systems. Compression of images reduces storage space and transmission bandwidth, and hence also cost. Advances in VLSI technology are rapidly changing the technological needs of the common man, and image compression is one of the major technological domains directly related to mankind. Neural networks can be used for image compression: neural network architectures have proven to be reliable, robust, and programmable, and offer better performance than classical techniques. In this work, the main focus is on the development of new architectures for the hardware implementation of 3-D neural-network-based image compression, optimizing area, power, and speed for ASIC implementation, and on a comparison with FPGA.
Relational-Realizational Parsing
State-of-the-art statistical parsing models applied to free word-order languages tend to underperform compared to, e.g., parsing English. Constituency-based models often fail to capture generalizations that cannot be stated in structural terms, and dependency-based models employ a ‘single-head’ assumption that often breaks in the face of multiple exponence. In this paper we suggest that the position of a constituent is a form manifestation of its grammatical function, one among various possible means of realization. We develop the Relational-Realizational approach to parsing in which we untangle the projection of grammatical functions and their means of realization to allow for phrase-structure variability and morphological-syntactic interaction. We empirically demonstrate the application of our approach to parsing Modern Hebrew, obtaining 7% error reduction from previously reported results.
Predicting the functional states of human iPSC-derived neurons with single-cell RNA-seq and electrophysiology
Human neural progenitors derived from pluripotent stem cells develop into electrophysiologically active neurons at heterogeneous rates, which can confound disease-relevant discoveries in neurology and psychiatry. By combining patch clamping, morphological and transcriptome analysis on single human neurons in vitro, we defined a continuum of poor to highly functional electrophysiological states of differentiated neurons. The strong correlations between action potentials, synaptic activity, dendritic complexity and gene expression highlight the importance of methods for isolating functionally comparable neurons for in vitro investigations of brain disorders. Although whole-cell electrophysiology is the gold standard for functional evaluation, it often lacks the scalability required for disease modeling studies. Here, we demonstrate a multimodal machine-learning strategy to identify new molecular features that predict the physiological states of single neurons, independently of the time spent in vitro. As further proof of concept, we selected one of the potential neurophysiological biomarkers identified in this study—GDAP1L1—to isolate highly functional live human neurons in vitro.
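A hedged sketch of the multimodal prediction idea, with toy data standing in for the real measurements: train a classifier to predict a patch-clamp-derived functional label from single-cell gene expression, then rank genes by importance to surface candidate biomarkers. The data shapes, the binary "highly functional" label, and the random data are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(200, 500)).astype(float)  # cells x genes (toy)
y = rng.integers(0, 2, size=200)                     # ephys-derived label

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
clf.fit(X, y)
top_genes = np.argsort(clf.feature_importances_)[::-1][:10]
print("candidate marker gene indices:", top_genes)
```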
A Phase-Locked-Loop-Assisted Internal Model Adjustable-Speed Controller for BLDC Motors
In this paper, a phase-locked-loop (PLL)-assisted internal model (IM) adjustable-speed controller for brushless DC (BLDC) motors is proposed. Major contributions include proposing a new adjustable-speed module structure of PLL controllers for BLDC motor drives and integrating an IM control strategy and a novel complete motor current sensing scheme. By employing the proposed PLL-assisted IM control method, the controller can be made more robust and accurate. In addition, due to properly integrating the motor current sensing scheme with the pulsewidth modulation control, the hardware implementation of the BLDC motor drive can be made rather compact and enable further integration into chips to reduce the cost and enhance the current regulation performance. Finally, a prototype of the proposed controller is also constructed and applied to an industrial blower to verify the feasibility of the proposed control. Experimental results show that both accurate steady state and fast transient speed responses can be achieved. Moreover, compared with the conventional induction motor drives for industrial blowers, the proposed PMBLDC drives can achieve much higher efficiency with less volume and weight.
Congenitally missing maxillary lateral incisors: treatment.
The 2 major treatment approaches for congenitally missing maxillary lateral incisors are space closure via orthodontic therapy, or space opening to allow prosthodontic replacements either with a fixed prosthesis or single-tooth implant. Both of these treatment approaches can potentially compromise aesthetics, periodontal health, and function. It is essential for an interdisciplinary dental specialty team to establish realistic treatment objectives, communicate the sequence of treatment, interact during treatment, evaluate dental and gingival aesthetics, and position teeth to permit proper prosthetic treatment. If this interdisciplinary approach is used, the aesthetics and long-term dental health of the patient following treatment will be greatly enhanced.
Mental Health in Lesbian, Gay, Bisexual, and Transgender (LGBT) Youth.
Today's lesbian, gay, bisexual, and transgender (LGBT) youth come out at younger ages, and public support for LGBT issues has dramatically increased, so why do LGBT youth continue to be at high risk for compromised mental health? We provide an overview of the contemporary context for LGBT youth, followed by a review of current science on LGBT youth mental health. Research in the past decade has identified risk and protective factors for mental health, which point to promising directions for prevention, intervention, and treatment. Legal and policy successes have set the stage for advances in programs and practices that may foster LGBT youth mental health. Implications for clinical care are discussed, and important areas for new research and practice are identified.
Design of a 3R Cobot Using Continuously Variable Transmissions
Cobots are capable of producing virtual surfaces of high quality, using mechanical transmission elements as their basic element in place of conventional motors. Most cobots built to date have used steerable wheels as their transmission elements. We describe how continuously variable transmissions (CVTs) can be used in this capacity for a cobot with revolute joints. The design of an “arm-like” cobot with a three-dimensional workspace is described. This cobot can implement virtual surfaces and other effects in a spherical workspace approximately 1.5 meters in diameter. Novel elements of this cobot include the use of a power disk that couples three CVTs directly.
CALIBRATION FOR INCREASED ACCURACY OF THE RANGE IMAGING CAMERA SWISSRANGER
Range imaging is a suitable new choice for measurement and modeling in many different applications. However, because the technology has appeared on the market relatively recently, with only a few different realizations, knowledge of its capabilities is still limited. In most applications, such as robotics and measurement systems, the required accuracy is on the order of a few millimeters. The raw data of range imaging cameras do not reach this level; therefore, calibration of the sensor's output is needed. In this paper some of the parameters which influence the behavior and performance of the range imaging camera SwissRanger (provided by the Swiss Center for Electronics and Microtechnology, CSEM) are described. Because of the highly systematic structure and the correlations between parameters and output data, a parameter-based calibration approach is presented. This includes a photogrammetric camera calibration and a distance system calibration with respect to reflectivity and to the distance itself.
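One simple instance of such a parameter-based distance calibration (a sketch under my own assumptions, not the paper's exact model): fit a low-order polynomial to the systematic error between measured and reference ranges, optionally per reflectivity bin. The input arrays are assumed to come from a calibration rig.

```python
import numpy as np

def fit_distance_correction(measured, reference, degree=3):
    # Returns a correction(d) to add to raw distances.
    coeffs = np.polyfit(measured, reference - measured, degree)
    return np.poly1d(coeffs)

# usage: corrected = raw + fit_distance_correction(meas, ref)(raw)
```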
A Study of Lexical Distribution in Citation Contexts through the IMRaD Standard
In this paper we present a large-scale approach for the extraction of verbs in reference contexts. We analyze citation contexts in relation with the IMRaD structure of scientific articles and use rank correlation analysis to characterize the distances between the section types. The results show strong differences in the verb frequencies around citations between the sections in the IMRaD structure. This study is a "one-more-step" towards the lexical and semantic analysis of citation contexts.
Plug-and-Work Material Handling Systems
One disadvantage of automated material handling systems is their relative inflexibility: once racks are installed and conveyors are laid, making even minor changes to a system can be cumbersome and expensive. However, recent progress in the capabilities and cost of basic system components, such as controllers, drives, and sensors, has made possible a new class of material handling systems having a much higher degree of flexibility. We propose underlying design principles for such systems and describe several prototype "plug-and-work" systems, which feature decentralized control and ease of reconfiguration.
1 Current Challenges
The companies that comprise the material handling industry are conservative by nature, and for good reason: heavy loads must be handled without injury; large volumes of different items must be handled without damage or loss; and picking, packing, and inventory tracking must be done without error. It comes as no surprise, then, that manufacturers and users tend to be careful when designing and implementing new systems. Although the industry has implemented hundreds of successful automated material handling systems in the past, there are signs that business is or will be getting more difficult. For instance, a survey in Germany showed that users of automated material handling systems have reduced their investments in automated systems or are planning to "de-automate" significantly (see Figure 1). Although this study is now somewhat dated, we believe its underlying message is still valid. The study contends that many users consider automated systems too inflexible to cope with rapidly changing demands in the structure, volume, and processes of today's material handling operations. But what is it, exactly, that makes a modern, automated material handling system inflexible? We believe the following logic is behind this judgment: (1) a system is inflexible because it is time consuming and expensive to change in response to new requirements; and (2) current systems are difficult to change because functionality is spread over several levels of information, electrical, and mechanical networks, all of which must be changed simultaneously to provide new functionality.
[Figure 1: Reduction in automation in several production areas, according to ISI [3]. Statistics indicate the percentage of respondents who were planning or who had already implemented a reduction in automation.]
1.1 Current (inflexible) designs
We illustrate the point with a small example. Suppose an automated pallet warehouse retrieves items with an AS/RS machine and moves them by conveyor to a transfer carriage, which moves the pallet to the assigned picking position. After picking, the pallet is returned to the rack. The picked parts then move to a sorting and consolidation area. Suppose the need for higher picking capacity makes an additional picking location necessary, and that (luckily) there is enough space to accommodate that position. Which tasks must be performed to make this simple extension? First, additional sensors must be installed at the new stopping position; they signal to the PLC when the transfer carriage is positioned correctly at the handover position. Second, let us assume that, in keeping with current practice, the necessary drives, rollers, sensors, and structural elements are already combined into one module; this module must be installed at the additional position. Third, the wiring between the drives and sensors of the picking position and the sensors of the transfer carriage on one end, and the control cabinet on the other end, has to be installed and connected. If necessary, new control boxes have to be added to provide the required input-output ports for the PLC. Fourth, the input and output ports must be connected to the logic of the PLC. Essentially, this requires adding another position to the scheduling logic; that is, adding the sensors of the new position to the "check for free location" logic by connecting the port numbers of the sensors with the logical view of the added position. When the new position is actually used, the code for moving the transfer carriage to the new position has to be updated to include the new position, as well as the code for the return of the pallet. Finally, the user interface of the PLC has to be updated with the status of the new position.
[Figure 2: High-bay warehouse with picking positions ("new" indicates added picking position).]
In order to do all this, the company must employ or hire personnel able to do the electrical work and to link the physical view (mechanical, electrical) with the software logic. This is very important because the function of the system is distributed over several elements which are connected by a central control logic, usually implemented in a PLC. Documentation brings together the several areas of expertise needed to perform the necessary tasks to change the mechanical design, the wiring, sensors, drives, and controlling logic. These activities take at least one day, depending on the amount of work necessary for wiring and coding, and they usually cannot be done by the system operators. Instead, contractors or qualified personnel have to be brought to the site, along with the necessary equipment and material. We hope the reader acknowledges that such effort for such a small system change is unappealing at best, and that it supports the assertion that many automated material handling systems are "inflexible."
Recent Trends in Hospitalization for Acute Myocardial Infarction in Beijing: Increasing Overall Burden and a Transition From ST-Segment Elevation to Non-ST-Segment Elevation Myocardial Infarction in a Population-Based Study.
Comparable data on trends of hospitalization rates for ST-segment elevation myocardial infarction (STEMI) and non-STEMI (NSTEMI) remain unavailable in representative Asian populations. To examine the temporal trends of hospitalization for acute myocardial infarction (AMI) and its subtypes in Beijing. Patients hospitalized for AMI in Beijing from January 1, 2007 to December 31, 2012 were identified from the validated Hospital Discharge Information System. Trends in hospitalization rates, in-hospital mortality, length of stay (LOS), and hospitalization costs were analyzed by regression models for total AMI and for STEMI and NSTEMI separately. In total, 77,943 patients were admitted for AMI in Beijing during the 6 years, among whom 67.5% were males and 62.4% had STEMI. During the period, the rate of AMI hospitalization per 100,000 population increased by 31.2% (from 55.8 to 73.3 per 100,000 population) after age standardization, with a slight decrease in STEMI but a 3-fold increase in NSTEMI. The ratio of STEMI to NSTEMI decreased dramatically from 6.5:1.0 to 1.3:1.0. The age-standardized in-hospital mortality decreased from 11.2% to 8.6%, with a significant decreasing trend evident for STEMI in males and females (P < 0.001) and for NSTEMI in males (P = 0.02). The rate of percutaneous coronary intervention increased from 28.7% to 55.6% among STEMI patients. The total cost for AMI hospitalization increased by 56.8% after adjusting for inflation, although the LOS decreased by 1 day. The hospitalization burden for AMI has been increasing in Beijing with a transition from STEMI to NSTEMI. Diverse temporal trends in AMI subtypes from the unselected "real-world" data in Beijing may help to guide the management of AMI in China and other developing countries.
EphB2 receptor controls proliferation/migration dichotomy of glioblastoma by interacting with focal adhesion kinase
Glioblastoma multiforme (GBM) is the most frequent and aggressive primary brain tumor in adults. Uncontrolled proliferation and abnormal cell migration are two prominent spatially and temporally disassociated characteristics of GBMs. In this study, we investigated the role of the receptor tyrosine kinase EphB2 in controlling the proliferation/migration dichotomy of GBM. We studied EphB2 gain of function and loss of function in glioblastoma-derived stem-like neurospheres, whose in vivo growth pattern closely replicates human GBM. EphB2 expression stimulated GBM neurosphere cell migration and invasion, and inhibited neurosphere cell proliferation in vitro. In parallel, EphB2 silencing increased tumor cell proliferation and decreased tumor cell migration. EphB2 was found to increase tumor cell invasion in vivo using an internally controlled dual-fluorescent xenograft model. Xenografts derived from EphB2-overexpressing GBM neurospheres also showed decreased cellular proliferation. The non-receptor tyrosine kinase focal adhesion kinase (FAK) was found to be co-associated with and highly activated by EphB2 expression, and FAK activation facilitated focal adhesion formation, cytoskeleton structure change and cell migration in EphB2-expressing GBM neurosphere cells. Taken together, our findings indicate that EphB2 has pro-invasive and anti-proliferative actions in GBM stem-like neurospheres mediated, in part, by interactions between EphB2 receptors and FAK. These novel findings suggest that tumor cell invasion can be therapeutically targeted by inhibiting EphB2 signaling, and that optimal antitumor responses to EphB2 targeting may require concurrent use of anti-proliferative agents.
Efficient Neural Architecture Search with Network Morphism
While neural architecture search (NAS) has drawn increasing attention for automatically tuning deep neural networks, existing search algorithms usually suffer from expensive computational cost. Network morphism, which keeps the functionality of a neural network while changing its neural architecture, could be helpful for NAS by enabling a more efficient training during the search. However, network morphism based NAS is still computationally expensive due to the inefficient process of selecting the proper morph operation for existing architectures. As we know, Bayesian optimization has been widely used to optimize functions based on a limited number of observations, motivating us to explore the possibility of making use of Bayesian optimization to accelerate the morph operation selection process. In this paper, we propose a novel framework enabling Bayesian optimization to guide the network morphism for efficient neural architecture search by introducing a neural network kernel and a tree-structured acquisition function optimization algorithm. With Bayesian optimization to select the network morphism operations, the exploration of the search space is more efficient. Moreover, we carefully wrapped our method into an open-source software, namely Auto-Keras for people without rich machine learning background to use. Intensive experiments on real-world datasets have been done to demonstrate the superior performance of the developed framework over the state-of-the-art baseline methods.
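A conceptual sketch of Bayesian-optimization-guided morphism selection, heavily simplified: candidates are generated by morph operations and scored by a cheap acquisition function under a kernel that compares architectures by edit distance. The toy encoding, the kernel-weighted surrogate, and the morph set are all stand-ins for the paper's neural-network kernel and tree-structured acquisition optimizer.

```python
import math, random

def edit_distance_kernel(a, b, beta=1.0):
    # a, b: architectures encoded as layer-type tuples (toy encoding).
    d = abs(len(a) - len(b)) + sum(x != y for x, y in zip(a, b))
    return math.exp(-beta * d)

def acquisition(candidate, observed, best):
    # Kernel-weighted mean plus a crude exploration bonus.
    ws = [edit_distance_kernel(candidate, arch) for arch, _ in observed]
    mu = sum(w * acc for w, (_, acc) in zip(ws, observed)) / (sum(ws) + 1e-9)
    var = 1.0 - max(ws, default=0.0)  # far from everything seen -> explore
    return max(mu - best, 0.0) + var

def propose(parent):
    ops = [lambda a: a + ("conv",),                  # morph: deepen
           lambda a: a[:-1] + ("wide_" + a[-1],)]    # morph: widen last layer
    return random.choice(ops)(parent)

observed = [(("conv", "conv"), 0.71)]  # (architecture, validation accuracy)
best = max(acc for _, acc in observed)
candidates = [propose(observed[-1][0]) for _ in range(10)]
next_arch = max(candidates, key=lambda c: acquisition(c, observed, best))
```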
Moving beyond the Turing Test with the Allen AI Science Challenge
Answering questions correctly from standardized eighth-grade science tests is itself a test of machine intelligence.
Psychiatric diagnoses of 901 inpatients seen by consultation-liaison psychiatrists at an academic medical center in a managed care environment.
The authors reviewed the diagnoses from all inpatient psychiatric consultations conducted by faculty psychiatrists during calendar year 2001 (N = 901) at an academic medical center. In about 25% of the consultations, multiple psychiatric diagnoses were made. The most frequent diagnosis groups were mood (40.7%), cognitive (32.0%), and substance use disorders (18.6%). Among 671 consultations in which only one diagnosis was made, the rates of these diagnosis groups were 35.4%, 20.1%, and 10.2%, respectively. The findings were compared with the findings of 19 previous studies published over the past 27 years. Mood, cognitive, and substance use disorders remain major foci of consultation-liaison practice in the managed care era, although the rate of cognitive disorder diagnoses has increased. No evidence was found of a change over time in referral rates.
Epidemiology, diagnosis, and antimicrobial treatment of acute bacterial meningitis.
The epidemiology of bacterial meningitis has changed as a result of the widespread use of conjugate vaccines and preventive antimicrobial treatment of pregnant women. Given the significant morbidity and mortality associated with bacterial meningitis, accurate information is necessary regarding the important etiological agents and populations at risk to ascertain public health measures and ensure appropriate management. In this review, we describe the changing epidemiology of bacterial meningitis in the United States and throughout the world by reviewing the global changes in etiological agents followed by specific microorganism data on the impact of the development and widespread use of conjugate vaccines. We provide recommendations for empirical antimicrobial and adjunctive treatments for clinical subgroups and review available laboratory methods in making the etiological diagnosis of bacterial meningitis. Finally, we summarize risk factors, clinical features, and microbiological diagnostics for the specific bacteria causing this disease.
The giant knot: a new one-way self-locking secured arthroscopic slip knot.
A secure slip knot is very important in the field of arthroscopy. The new giant knot, developed by the first author, has the properties of being a one-way self-locking slip knot, which is secured without additional half hitches and withstands higher forces before untying.
Wideband 3×4 Butler matrix using Wilkinson divider for MIMO applications
This paper presents the design and simulation of a broadband 3×4 Butler matrix for the IEEE 802.11 b/g/n and ISM bands. The aim of this study is to develop antenna array feeding networks for Multiple-Input Multiple-Output (MIMO) applications, based on an asymmetric Butler matrix. The asymmetric structure allows the creation of an additional beam on the array's normal axis, beyond the beams created by the symmetric version. The proposed circuit presents high isolation and wideband features. The circuit can be used in both transmission and reception systems to support Multi-User MIMO (MU-MIMO) service.
A Grassroots Remote Sensing Toolkit Using Live Coding, Smartphones, Kites and Lightweight Drones
This manuscript describes the development of an Android-based smartphone application for capturing aerial photographs and spatial metadata automatically, for use in grassroots mapping applications. The aim of the project was to exploit the plethora of on-board sensors within modern smartphones (accelerometer, GPS, compass, camera) to generate ready-to-use spatial data from lightweight aerial platforms such as drones or kites. A visual coding 'scheme blocks' framework was used to build the application ('app'), so that users could customise their own data capture tools in the field. The paper reports on the coding framework, then shows the results of test flights from kites and lightweight drones, and finally shows how open-source geospatial toolkits were used to generate geographical information system (GIS)-ready GeoTIFF images from the metadata stored by the app. Two Android smartphones were used in testing: a high-specification OnePlus One handset and a lower-cost Acer Liquid Z3 handset, to test the operational limits of the app on phones with different sensor sets. We demonstrate that the best results were obtained when the phone was attached to a stable single-line kite or to a gliding drone. Results show that engine or motor vibrations from powered aircraft required dampening to ensure the capture of high-quality images. We demonstrate how the products generated from the open-source processing workflow are easily used in GIS. The app can be downloaded freely from the Google store by searching for 'UAV toolkit' (UAV toolkit 2016), and used wherever an Android smartphone and aerial platform are available to deliver rapid spatial data (e.g. in supporting decision-making in humanitarian disaster-relief zones, in teaching or for grassroots remote sensing and democratic mapping).
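As an illustration of the final georeferencing step in such a workflow, here is a hedged sketch using rasterio, assuming a nadir-pointing camera, a pinhole ground-sample-distance estimate, and made-up sensor values; the app's actual metadata schema and projection handling may well differ.

```python
import numpy as np
import rasterio
from rasterio.transform import from_origin

def write_geotiff(image, lat, lon, altitude_m, path="frame.tif",
                  focal_mm=4.0, pixel_um=1.4):
    # Ground sample distance for a nadir shot (pinhole approximation).
    gsd_m = altitude_m * (pixel_um * 1e-6) / (focal_mm * 1e-3)
    deg = gsd_m / 111_320.0  # crude metres -> degrees near the equator
    h, w = image.shape[1], image.shape[2]
    transform = from_origin(lon - w / 2 * deg, lat + h / 2 * deg, deg, deg)
    with rasterio.open(path, "w", driver="GTiff", height=h, width=w,
                       count=image.shape[0], dtype=image.dtype.name,
                       crs="EPSG:4326", transform=transform) as dst:
        dst.write(image)

write_geotiff(np.zeros((3, 480, 640), dtype=np.uint8),
              lat=50.7, lon=-3.5, altitude_m=80.0)
```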
Natural feature tracking for extendible robust augmented realities
Vision-based tracking systems for augmented reality often require that artificial fiducials be placed in the scene. In this paper we utilize our approach for robust detection and tracking of natural features such as textures or corners. The tracked natural features are automatically calibrated to the fiducials that are used to initialize and facilitate normal tracking. Once calibrated, the natural features are used to extend the system’s tracking range and to stabilize the tracked pose against occlusions and noise. The emphasis of this paper is the integration of natural feature tracking with fiducial tracking to increase the range and robustness of vision-based augmented reality tracking.
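A minimal sketch of the auto-calibration step described above, under a planar-scene assumption that is mine, not necessarily the paper's: while fiducials give a reliable pose, natural corners are detected and registered to the fiducial coordinate frame with a RANSAC homography, so tracking can continue when fiducials are occluded. Input point arrays are assumed to be float32 NumPy arrays of matched fiducial locations.

```python
import cv2
import numpy as np

def calibrate_natural_features(gray, fiducial_pts_img, fiducial_pts_world):
    # Homography from image plane to fiducial (world) plane.
    H, _ = cv2.findHomography(fiducial_pts_img, fiducial_pts_world,
                              cv2.RANSAC, 5.0)
    # Detect natural corners to use as additional landmarks.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    pts = corners.reshape(-1, 1, 2).astype(np.float32)
    world_pts = cv2.perspectiveTransform(pts, H)  # calibrated landmarks
    return pts.reshape(-1, 2), world_pts.reshape(-1, 2)
```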
Nonoperative treatment of midportion Achilles tendinopathy: a systematic review.
OBJECTIVE The aim of this systematic review is to provide an easily accessible, clear summary of the best available evidence for nonoperative treatment of midportion Achilles tendinopathy. DATA SOURCES MEDLINE, CINAHL, and Embase through April 2007. Search terms: achilles tendon or tendo achilles or triceps surae or tendoachilles or tendo-achilles or achilles AND tendinopathy or tendinosis or tendonitis or tenosynovitis. STUDY SELECTION Of 707 abstracts reviewed, 16 randomized trials met our inclusion criteria. DATA EXTRACTION Data extracted from each paper included: patient demographics (age and sex), duration of symptoms, method of diagnosis, treatments, cohort size, length of follow-up, pain-related outcome data, and secondary outcome data. DATA SYNTHESIS The primary outcome measurement was change in numeric pain score. Focal tenderness, tendon thickness, and validated outcome scores were used secondarily. Eccentric exercises were noted to be equivalent to extracorporeal shockwave therapy (1 study) and superior to wait-and-see treatment (2 trials), traditional concentric exercise (2 of 3 trials), and night splints (1 study). Extracorporeal shockwave therapy was shown to be superior to a wait-and-see method in 1 study but not superior to placebo in another. Sclerosing injections were shown to be superior to placebo in 1 study, but local steroid treatment was beneficial in 2 of 3 studies. Injection of deproteinized hemodialysate and topical glyceryl nitrate application were beneficial in 1 trial each. CONCLUSIONS Eccentric exercises have the most evidence of effectiveness in treatment of midportion Achilles tendinopathy. More investigation is needed into the utility of extracorporeal shockwave therapy, local corticosteroid treatments, injections of sclerosing agents or deproteinized hemodialysate, and topical glyceryl nitrate application.
Auditory displays in human-machine interfaces
Auditory displays are described for several application domains: transportation, industrial processes, health care, operation theaters, and service sectors. Several types of auditory displays are compared, such as warning, state, and intent displays. Also, the importance for blind people in a visualized world is considered with suitable approaches. The service robot domain has been chosen as an example for the future use of auditory displays within multimedia process supervision and control applications in industrial, transportation, and medical systems. The design of directional sounds and of additional sounds for robot states, as well as the design of more complicated robot sound tracks, are explained. Basic musical elements and robot movement sounds have been combined. Two exploratory experimental studies, one on the understandability of the directional sounds and the robot state sounds as well as another on the auditory perception of intended robot trajectories in a simulated supermarket scenario, are described. Subjective evaluations of sound characteristics such as urgency, expressiveness, and annoyance have been carried out by nonmusicians and musicians. These experimental results are briefly compared with time-frequency analyses.
Modeling orientation fields of fingerprints with rational complex functions
In this paper, a novel model is proposed for the orientation field of fingerprints, which can be expressed as the argument of a rational complex function. It is suitable for all types of fingerprints. Experimental results show that it performs much better than previous works.
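To make the idea concrete, here is a sketch of the classical zero-pole special case of such a model: the orientation at z is half the argument of a rational function with zeros at cores and poles at deltas. The paper generalises this with higher-order rational functions; the singularity positions below are illustrative.

```python
import numpy as np

def orientation_field(shape, cores, deltas):
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    z = xs + 1j * ys
    r = np.ones_like(z)
    for c in cores:
        r *= (z - c)   # zero at each core
    for d in deltas:
        r /= (z - d)   # pole at each delta
    return 0.5 * np.angle(r)  # orientation in radians, mod pi

# Off-grid singularities avoid division by zero at pixel centres.
theta = orientation_field((256, 256),
                          cores=[127.5 + 100.5j], deltas=[127.5 + 180.5j])
```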
Virtualizing Deep Neural Networks for Memory-Efficient Neural Network Design
The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher’s flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average memory usage of AlexNet by 61% and OverFeat by 83%, a significant reduction in memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and memory hungry DNNs to date, demonstrate the memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA K40 GPU card containing 12 GB of memory, with 22% performance loss compared to a hypothetical GPU with enough memory to hold the entire DNN.
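A hedged PyTorch sketch of the activation-offloading idea (not the paper's implementation, which predates these hooks): tensors saved for the backward pass are moved to CPU memory during the forward pass and copied back on demand during backward, trading PCIe traffic for GPU memory. Requires a CUDA device and a recent PyTorch; in practice pinned CPU buffers and prefetching would be needed for performance.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(),
                      nn.Linear(4096, 10)).cuda()
x = torch.randn(64, 4096, device="cuda")

def pack(t):
    return t.to("cpu", non_blocking=True)    # offload saved activation

def unpack(t):
    return t.to("cuda", non_blocking=True)   # restore for backward

with torch.autograd.graph.saved_tensors_hooks(pack, unpack):
    loss = model(x).sum()
loss.backward()  # activations stream back from CPU as needed
```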
Design of a novel variable stiffness gripper using permanent magnets
This paper presents the design of a novel variable stiffness gripper with two parallel fingers (jaws). Compliance of the system is generated by using permanent magnets as the nonlinear springs. Based on the presented design, the position and stiffness level of the fingers can be adjusted simultaneously by changing the air gap between the magnets. The modeling of magnetic repulsion force and stiffness are presented and verified experimentally. An experiment is also conducted to demonstrate the functionality of the gripper to improve safety when a fragile object was grasped and the gripper collided with an obstacle.
Effect of the damping function in dispersion corrected density functional theory
It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard "zero-damping" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) have been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.
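For reference, the two damping variants compared above can be written compactly, as they commonly appear in the DFT-D3 literature (parameter names follow the usual conventions; this is a summary, not a formula from the abstract itself):

```latex
% Zero damping: the dispersion term is switched off smoothly at short range.
E_{\mathrm{disp}} = -\sum_{A>B} \sum_{n=6,8} s_n \,
  \frac{C_n^{AB}}{R_{AB}^{\,n}} \, f_{d,n}(R_{AB}),
\qquad
f_{d,n}(R) = \frac{1}{1 + 6\left( R / (s_{r,n} R_0^{AB}) \right)^{-\alpha_n}}

% BJ damping: the term tends to a finite value as R -> 0, so no repulsive
% interatomic forces arise at short distances.
E_{\mathrm{disp}}^{\mathrm{BJ}} = -\sum_{A>B} \sum_{n=6,8}
  s_n \, \frac{C_n^{AB}}{R_{AB}^{\,n} + \left( a_1 R_0^{AB} + a_2 \right)^n},
\qquad
R_0^{AB} = \sqrt{C_8^{AB} / C_6^{AB}}
```

The extra fit parameter mentioned in the abstract corresponds to the pair (a_1, a_2) replacing the single scaling s_{r,n}.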
The ground truth about metadata and community detection in networks
Across many scientific domains, there is a common need to automatically extract a simplified view or coarse-graining of how a complex system’s components interact. This general task is called community detection in networks and is analogous to searching for clusters in independent vector data. It is common to evaluate the performance of community detection algorithms by their ability to find so-called ground truth communities. This works well in synthetic networks with planted communities because these networks’ links are formed explicitly based on those known communities. However, there are no planted communities in real-world networks. Instead, it is standard practice to treat some observed discrete-valued node attributes, or metadata, as ground truth. We show that metadata are not the same as ground truth and that treating them as such induces severe theoretical and practical problems. We prove that no algorithm can uniquely solve community detection, and we prove a general No Free Lunch theorem for community detection, which implies that there can be no algorithm that is optimal for all possible community detection tasks. However, community detection remains a powerful tool and node metadata still have value, so a careful exploration of their relationship with network structure can yield insights of genuine worth. We illustrate this point by introducing two statistical techniques that can quantify the relationship between metadata and community structure for a broad class of models. We demonstrate these techniques using both synthetic and real-world networks, and for multiple types of metadata and community structures.
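A small, self-contained illustration of the paper's caution: compare detected communities against node metadata with normalized mutual information on Zachary's karate club, where the 'club' attribute happens to align well with structure. Agreement in this one case does not make metadata "ground truth" in general, which is exactly the paper's point.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.metrics import normalized_mutual_info_score

G = nx.karate_club_graph()
meta = [G.nodes[v]["club"] for v in G]          # metadata labels
comms = greedy_modularity_communities(G)        # detected communities
detected = [0] * len(G)
for label, comm in enumerate(comms):
    for v in comm:
        detected[v] = label
print("NMI(metadata, detected):",
      normalized_mutual_info_score(meta, detected))
```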
A Bag-of-entities Approach to Document Focus Time Estimation
Detecting the document focus time, defined as the time the content of a document refers to, is an important task to support temporal information retrieval systems. In this paper we propose a novel approach to focus time estimation based on a bag-of-entities representation. In particular, we are interested in understanding if and to what extent existing open data sources can be leveraged to achieve focus time estimation. We leverage state-of-the-art Named Entity Extraction tools and exploit links to Wikipedia and DBpedia to derive temporal information relevant to entities, namely years and intervals of years. We then estimate focus time as the point in time that is most relevant to the entity set associated with a document. Our method does not rely on explicit temporal expressions in the documents, so it is therefore applicable in a general context. We tested our methodology on two datasets of historical events and evaluated it against a state-of-the-art approach, measuring improvement in average estimation error.
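A minimal sketch of the bag-of-entities estimate (with a simple voting rule of my own choosing; the paper's relevance scoring may differ): each linked entity contributes the years it is associated with, and the focus time is the year most supported by the whole entity set. Entity linking and the DBpedia year lookup are assumed to happen upstream.

```python
from collections import Counter

def estimate_focus_time(entity_years):
    # entity_years: {entity: [(start_year, end_year), ...]}
    votes = Counter()
    for spans in entity_years.values():
        for start, end in spans:
            for year in range(start, end + 1):
                votes[year] += 1
    return votes.most_common(1)[0][0]

doc = {"Battle_of_Waterloo": [(1815, 1815)],
       "Napoleon": [(1799, 1821)],
       "Hundred_Days": [(1815, 1815)]}
print(estimate_focus_time(doc))  # -> 1815
```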
FUZZY Based PID Controller for Speed Control of D.C. Motor Using LabVIEW
This paper presents an implementation of a self-tuned PID controller (FPID) for speed control of a DC motor based on LabVIEW (Laboratory Virtual Instrument Engineering Workbench). The algorithms of the fuzzy PID controller (FPID) and the conventional PID controller (CPID) are implemented using the PID and Fuzzy Logic simulation toolkits of the LabVIEW environment. The simulation results demonstrate that the designed self-tuned PID controller achieves accurate speed tracking with shorter rise and settling times and minimal steady-state error, and gives better performance than the conventional PID controller. Key-Words: DC motor, Ziegler-Nichols tuning, speed control, fuzzy logic, fuzzy-plus-PID controller, LabVIEW.
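A minimal sketch of the self-tuning idea in plain Python (the paper's controller lives in LabVIEW; membership breakpoints and base gains here are illustrative): a fuzzy rule scales the PID output with the error magnitude, so large errors get an aggressive response and small errors a gentle one.

```python
def fuzzy_gain(error, small=0.05, large=0.5):
    e = abs(error)
    if e <= small:
        return 0.5            # "small error" rule: soften gains
    if e >= large:
        return 2.0            # "large error" rule: boost gains
    t = (e - small) / (large - small)
    return 0.5 + 1.5 * t      # interpolate between the two rules

def fuzzy_pid_step(error, state, kp=1.0, ki=0.4, kd=0.05, dt=0.001):
    g = fuzzy_gain(error)
    state["i"] += error * dt
    d = (error - state["e"]) / dt
    state["e"] = error
    return g * (kp * error + ki * state["i"] + kd * d)

state = {"i": 0.0, "e": 0.0}
u = fuzzy_pid_step(0.3, state)  # control effort for the current speed error
```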
Antenna diversity in mobile communications
The conditions for antenna diversity action are investigated. In terms of the fields, a condition is shown to be that the incident field and the far field of the diversity antenna should obey (or nearly obey) an orthogonality relationship. The role of mutual coupling is central, and it is different from that in a conventional array antenna. In terms of antenna parameters, a sufficient condition for diversity action for a certain class of high gain antennas at the mobile, which approximates most practical mobile antennas, is shown to be zero (or low) mutual resistance between elements. This is not the case at the base station, where the condition is necessary only. The mutual resistance condition offers a powerful design tool, and examples of new mobile diversity antennas are discussed along with some existing designs.
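The pattern-orthogonality condition mentioned in the abstract is often stated via the complex correlation of the far-field patterns; one common formulation (written here for a uniform incident-field distribution, which is a simplifying assumption) is:

```latex
% Envelope correlation between two diversity branches from their complex
% far-field patterns E_1, E_2 integrated over the sphere of directions:
\rho_e \;=\; \frac{\left| \oint \mathbf{E}_1(\Omega) \cdot
  \mathbf{E}_2^{*}(\Omega) \, d\Omega \right|^{2}}
 {\oint \left| \mathbf{E}_1(\Omega) \right|^{2} d\Omega \;
  \oint \left| \mathbf{E}_2(\Omega) \right|^{2} d\Omega}
\;\approx\; 0 \quad \text{for diversity action.}
```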
Effect of increasing glutathione with cysteine and glycine supplementation on mitochondrial fuel oxidation, insulin sensitivity, and body composition in older HIV-infected patients.
BACKGROUND HIV-infected patients are reported to have impaired oxidation of fatty acids despite increased availability, suggesting a mitochondrial defect. We investigated whether diminished levels of a key mitochondrial antioxidant, glutathione (GSH), were contributing to defective fatty acid oxidation in older HIV-infected patients, and if so, the metabolic mechanisms contributing to GSH deficiency in these patients. METHODS In an open-label design, 8 older GSH-deficient HIV-infected males were studied before and after 14 days of oral supplementation with the GSH precursors cysteine and glycine. A combination of stable-isotope tracers, calorimetry, hyperinsulinemic-euglycemic clamp, and dynamometry were used to measure GSH synthesis, fasted and insulin-stimulated (fed) mitochondrial fuel oxidation, insulin sensitivity, body composition, anthropometry, forearm-muscle strength, and lipid profiles. RESULTS Impaired synthesis contributed to GSH deficiency in the patients and was restored with cysteine plus glycine supplementation. GSH improvement was accompanied by marked improvements in fasted and fed mitochondrial fuel oxidation. Associated benefits included improvements in insulin sensitivity, body composition, anthropometry, muscle strength, and dyslipidemia. CONCLUSIONS This work identifies 2 novel findings in older HIV-infected patients: 1) diminished synthesis due to decreased availability of cysteine and glycine contributes to GSH deficiency and can be rapidly corrected by dietary supplementation of these precursors and 2) correction of GSH deficiency is associated with improvement of mitochondrial fat and carbohydrate oxidation in both fasted and fed states and with improvements in insulin sensitivity, body composition, and muscle strength. The role of GSH on ameliorating metabolic complications in older HIV-infected patients warrants further investigation.
Horizontal transfer of transposons between and within crustaceans and insects
Background: Horizontal transfer of transposable elements (HTT) is increasingly appreciated as an important source of genome and species evolution in eukaryotes. However, our understanding of HTT dynamics is still poor in eukaryotes because the diversity of species for which whole genome sequences are available is biased and does not reflect the global eukaryote diversity. Results: In this study we characterized two Mariner transposable elements (TEs) in the genome of several terrestrial crustacean isopods, a group of animals particularly underrepresented in genome databases. The two elements have a patchy distribution in the arthropod tree and they are highly similar (>93% over the entire length of the element) to insect TEs (Diptera and Hymenoptera), some of which were previously described in Ceratitis rosa (Crmar2) and Drosophila biarmipes (Mariner-5_Dbi). In addition, phylogenetic analyses and comparisons of TE versus orthologous gene distances at various phylogenetic levels revealed that the taxonomic distribution of the two elements is incompatible with vertical inheritance. Conclusions: We conclude that the two Mariner TEs each underwent at least three HTT events. Both elements were transferred once between isopod crustaceans and insects and at least once between isopod crustacean species. Crmar2 was also transferred between tephritid and drosophilid flies and Mariner-5 underwent HT between hymenopterans and dipterans. We demonstrate that these various HTTs took place recently (most likely within the last 3 million years), and propose iridoviruses and/or Wolbachia endosymbionts as potential vectors of these transfers.
New Measurements of the River Environment: River Ecosystem Health Assessment in China
With developing awareness of the natural attributes of rivers and of the problems of river ecosystems, the objectives of water environment management should extend from water quality protection alone to the overall management of aquatic ecosystems. River health assessment can be used as a water environment management tool, which is important to the sustainable management of watersheds and to regional ecological protection. However, the river health concept has not yet been introduced into environmental measurement systems, and a national river health assessment system is absent in China. In this paper, therefore, the significance of river health to water environment management is discussed, and methods and recommendations for building a Chinese river health assessment system are proposed.
A protocol for classifying normal- and flat-arched foot posture for research studies using clinical and radiographic measurements
BACKGROUND There are several clinical and radiological methods available to classify foot posture in research; however, there is no clear strategy for selecting the most appropriate measurements. Therefore, the aim of this study was to develop a foot screening protocol to distinguish between participants with normal- and flat-arched feet who would then subsequently be recruited into a series of laboratory-based gait studies. METHODS The foot posture of ninety-one asymptomatic young adults was assessed using two clinical measurements (normalised navicular height and arch index) and four radiological measurements taken from antero-posterior and lateral x-rays (talus-second metatarsal angle, talo-navicular coverage angle, calcaneal inclination angle and calcaneal-first metatarsal angle). Normative foot posture values were taken from the literature and used to recruit participants with normal-arched feet. Data from these participants were subsequently used to define the boundary between normal- and flat-arched feet. This information was then used to recruit participants with flat-arched feet. The relationship between the clinical and radiographic measures of foot posture was also explored. RESULTS Thirty-two participants were recruited to the normal-arched study, 31 qualified for the flat-arched study and 28 participants were classified as having neither normal- nor flat-arched feet and were not suitable for either study. The values obtained from the two clinical and four radiological measurements established two clearly defined foot posture groups. Correlations among clinical and radiological measures were significant (p < 0.05) and ranged from r = 0.24 to 0.70. Interestingly, the clinical measures were more strongly associated with the radiographic angles obtained from the lateral view. CONCLUSION This foot screening protocol provides a coherent strategy for researchers planning to recruit participants with normal- and flat-arched feet. However, further research is required to determine whether foot posture variations in the sagittal plane, the transverse plane or both planes provide the best descriptor of the flat foot.
Two-Dimensional Analytical Permanent-Magnet Eddy-Current Loss Calculations in Slotless PMSM Equipped With Surface-Inset Magnets
Two-dimensional (2-D) analytical permanent-magnet (PM) eddy-current loss calculations are presented for slotless PM synchronous machines (PMSMs) with surface-inset PMs considering the current penetration effect. In this paper, the term slotless implies that either the stator is originally slotted but the slotting effects are neglected or the stator is originally slotless. The analytical magnetic field distribution is computed in polar coordinates from the 2-D subdomain method (i.e., based on formal resolution of Maxwell's equation applied in subdomain). Based on the predicted magnetic field distribution, the eddy-currents induced in the PMs are analytically obtained and the PM eddy-current losses considering eddy-current reaction field are calculated. The analytical expressions can be used for slotless PMSMs with any number of phases and any form of current and overlapping winding distribution. The effects of stator slotting are neglected and the current density distribution is modeled by equivalent current sheets located on the slot opening. To evaluate the efficacy of the proposed technique, the 2-D PM eddy-current losses for two slotless PMSMs are analytically calculated and compared with those obtained by 2-D finite-element analysis (FEA). The effects of the rotor rotational speed and the initial rotor mechanical angular position are investigated. The analytical results are in good agreement with those obtained by the 2-D FEA.
Privacy preserving EHR system using attribute-based infrastructure
Secure management of Electronic Health Records (EHR) in a distributed computing environment such as cloud computing, where computing resources including storage are provided by a third-party service provider, is a challenging task. In this paper, we explore techniques that guarantee the security and privacy of medical data stored in the cloud. We show how new primitives in attribute-based cryptography can be used to construct a secure and privacy-preserving EHR system that enables patients to share their data among healthcare providers in a flexible, dynamic and scalable manner.
Exascale Computing and Big Data: The Next Frontier
For scientific and engineering computing, exascale (10^18 operations per second) is the next proxy in the long trajectory of exponential performance increases that has continued for over half a century. Similarly, large-scale data preservation and sustainability within and across disciplines, metadata creation and multidisciplinary fusion, and digital privacy and security define the frontiers of big data. Solving the myriad technical, political and economic challenges of exascale computing will require coordinated planning across government, industry and academia, commitment to data sharing and sustainability, collaborative research and development, and recognition that both international competition and collaboration will be necessary.
Evaluation of different approaches for lipase catalysed synthesis of citronellyl acetate
Three different approaches for the lipase-catalysed synthesis of citronellyl acetate by a commercially available immobilized lipase have been studied: a) direct esterification of citronellol with acetic acid; b) alcoholysis of butyl acetate with citronellol; and c) transesterification of citronellyl butyrate with butyl acetate. Heptane was used as the solvent in all experiments. The extent of reaction followed the order alcoholysis > transesterification > esterification. Substrate partitioning between the immobilization support and the organic medium seems to greatly influence the catalytic performance of this enzyme preparation. Production of citronellyl acetate was found to depend on the partition coefficient of the acyl donor, which had the greatest value for acetic acid.
Oxaprozin cross-reactivity in three commercial immunoassays for benzodiazepines in urine.
Immunoassay methods are commonly used to screen for drugs of abuse and some prescription drug classes as part of drug-testing programs in clinical and forensic toxicology. Oxaprozin (Daypro) is a new nonsteroidal anti-inflammatory drug that is widely prescribed in North America and has been reported to cross-react for benzodiazepines in several different immunoassay methods. The first objective of this study was to characterize the immunoreactivity of oxaprozin standards over a wide concentration range when analyzed by the EMIT dau, Abbott FPIA, and BMC CEDIA urine benzodiazepine assays. The second objective was to measure the immunoreactivity of urine specimens obtained from 12 subjects after receiving a single oral dose (1200 mg) of oxaprozin. Urine oxaprozin standards were prepared in drug-free urine at seven concentrations ranging from 500 to 100,000 ng/mL. The standards gave presumptive positive benzodiazepine results between 5000 and 10,000 ng/mL (EMIT dau) and approximately 10,000 ng/mL (FPIA, CEDIA). With a 200-ng/mL cutoff for benzodiazepines in these assays, all 36 urine specimens collected from the 12 subjects gave positive results by EMIT and CEDIA, and 35 of 36 urine specimens were positive by FPIA. It was concluded that presumptive positive benzodiazepine results by these immunoassays may be due to the presence of oxaprozin or oxaprozin metabolites. It is recommended that all positive immunoassay screening tests for benzodiazepines be confirmed by another technique based upon a different principle of analysis.
The epidermal growth factor receptor is coupled to a pertussis toxin-sensitive guanine nucleotide regulatory protein in rat hepatocytes.
Activation of epidermal growth factor (EGF) receptors stimulates inositol phosphate production in rat hepatocytes via a pertussis toxin-sensitive mechanism, suggesting the involvement of a G protein in the process. Since the first event after receptor-G protein interaction is exchange of GTP for GDP on the G protein, the effect of EGF was measured on the initial rates of guanosine 5'-O-(3-[35S]thiotriphosphate) ([35S]GTP gamma S) association and [alpha-32P]GDP dissociation in rat hepatocyte membranes. The initial rate of [35S]GTP gamma S binding was stimulated by EGF, with a maximal effect observed at 8 nM EGF. EGF also increased the initial rate of [alpha-32P]GDP dissociation. The effect of EGF on [35S]GTP gamma S association was blocked by boiling the peptide for 5 min in 5 mM dithiothreitol or by incubation of the membranes with guanosine 5'-O-(2-thiodiphosphate) (GDP beta S). EGF-stimulated [35S]GTP gamma S binding was completely abolished in hepatocyte membranes prepared from pertussis toxin-treated rats and was inhibited in hepatocyte membranes that were treated directly with the resolved A-subunit of pertussis toxin. The amount of guanine nucleotide binding affected by occupation of the EGF receptor was approximately 6 pmol/mg of membrane protein. Occupation of angiotensin II receptors, which are known to couple to G proteins in hepatic membranes, also stimulated [35S]GTP gamma S association with and [alpha-32P]GDP dissociation from the membranes. The effect of angiotensin II on [alpha-32P]GDP dissociation was blocked by the angiotensin II receptor antagonist [Sar1,Ile8]angiotensin II, demonstrating that the guanine nucleotide binding was receptor-mediated. In A431 human epidermoid carcinoma cells, EGF stimulates inositol lipid breakdown, but the effect is not blocked by treatment of the cells with pertussis toxin. In these cells, EGF had no effect on [35S]GTP gamma S binding. Occupation of the beta-adrenergic receptor in A431 cell membranes with isoproterenol did stimulate [35S]GTP gamma S binding, and the effect could be completely blocked by l-propranolol. These results support the concept that in hepatocyte membranes, EGF receptors interact with a pertussis toxin-sensitive G protein via a mechanism similar to other hormone receptor-G protein interactions, but that in A431 human epidermoid carcinoma cells, EGF may activate phospholipase C via different mechanisms.
THE HAGUE CONFERENCE ON AGRICULTURE, FOOD SECURITY AND CLIMATE CHANGE
Agriculture in developing countries must undergo a significant transformation in order to meet the related challenges of achieving food security and responding to climate change. Projections based on population growth and food consumption patterns indicate that agricultural production will need to increase by at least 70 percent to meet demands by 2050. Most estimates also indicate that climate change is likely to reduce agricultural productivity, production stability and incomes in some areas that already have high levels of food insecurity. Developing climate-smart agriculture is thus crucial to achieving future food security and climate change goals. This paper examines some of the key technical, institutional, policy and financial responses required to achieve this transformation. Building on case studies from the field, the paper outlines a range of practices, approaches and tools aimed at increasing the resilience and productivity of agricultural production systems, while also reducing and removing emissions. The second part of the paper surveys institutional and policy options available to promote the transition to climate-smart agriculture at the smallholder level. Finally, the paper considers current financing gaps and makes innovative suggestions regarding the combined use of different sources, financing mechanisms and delivery systems.
Contributing to marine pollution by washing your face: microplastics in facial cleansers.
Plastics pollution in the ocean is an area of growing concern, with research efforts focusing on both the macroplastic (>5 mm) and microplastic (<5 mm) fractions. In the 1990s it was recognized that a minor source of microplastic pollution was liquid hand-cleansers, which at the time were rarely used by the average consumer. By 2009, however, the average consumer is likely to be using microplastic-containing products on a daily basis, as the majority of facial cleansers now contain polyethylene microplastics, which are not captured by wastewater treatment plants and will enter the oceans. Four microplastic-containing facial cleansers available in New Zealand supermarkets were used to quantify the size of the polyethylene fragments. Three-quarters of the brands had a modal size of <100 microns, small enough to be immediately ingested by planktonic organisms at the base of the food chain. Over time the microplastics will be subject to UV degradation and will absorb hydrophobic materials such as PCBs, making them smaller and more toxic in the long term. Marine scientists need to educate the public about the dangers of using products that pose an immediate and long-term threat to the health of the oceans and the food we eat.
Investigating Information Systems Strategic Alignment
Although strategic alignment is an important theoretical concept, and achieving alignment or fit among organizational elements would seem to be an important business objective, empirical demonstrations of the importance of alignment have been uncommon (White 1986). This study attempted to provide an empirical assessment of the nature and importance of information systems strategic alignment. There is much discussion today about the importance of strategic alignment in general (e.g., Venkatraman and Camillus 1984; White 1986) and information systems (IS) strategic alignment in particular (e.g., Elam et al. 1988; Henderson and Thomas 1992; Henderson and Venkatraman 1992; Keen 1991; Ward, Griffiths and Whitmore 1990). The discussion generally suggests that the better the alignment, the better the expected company performance. However, there have been few empirical verifications of this proposition (White 1986). In addition, the discussion about alignment has tended to be imprecise and vague (Van de Ven and Drazin 1985). Important objectives of this study were to identify a useful, precise model for the IS strategic alignment construct and to examine empirically relationships among IS strategic alignment, IS effectiveness, and business performance. The discussion opens with a brief overview of the conceptual model underlying this study. Next, instruments that measure business strategy and IS strategy are introduced. Then instruments that measure IS effectiveness and business performance are outlined. This is followed by a general discussion of various approaches that are used to assess alignment. Specific models of alignment are evaluated and the primary model for IS strategic alignment used within this research is selected and described. The links between IS strategic alignment and performance are then discussed. Finally, the paper concludes with a description of the study's implications for management and future research. 1. THE CONCEPTUAL MODEL The conceptual model underlying the study is depicted in Figure 1. It depicts the propositions that IS strategic alignment, defined as the fit existing between business strategy and IS strategy, impacts performance at the IS level and at the overall business level. The results of the overall study, and detailed descriptions of the instruments used to measure business strategy, IS strategy, IS effectiveness and business performance, have been described in Chan et al. (1993) and Chan and Huff (1992) respectively. (Table 1 provides a brief outline of the overall study.) This paper discusses the derivation and significance of the IS strategic alignment computations and the study's findings regarding alignment. 2. MEASURING BUSINESS STRATEGY AND INFORMATION SYSTEMS STRATEGY In order to assess IS strategic alignment, there needed to be assessments of both business strategy and IS strategy. Table 2 lists the dimensions of these constructs as operationalized in this study. Multiple indicators were used to measure each dimension.
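As a purely hypothetical illustration of how such fit computations are often operationalized in this literature (the paper's own derivation is not reproduced here), a "matching" model treats alignment as profile similarity, while a "moderation" model treats it as the interaction of strategy scores:

```python
# Hypothetical sketch of two common ways to compute strategy "fit";
# the study's actual alignment computations may differ.
import numpy as np

def matching_alignment(business, is_strategy):
    """Fit as similarity: smaller profile deviation = better alignment."""
    return -np.abs(np.asarray(business) - np.asarray(is_strategy)).mean()

def moderation_alignment(business, is_strategy):
    """Fit as synergy: alignment modeled as the interaction (product)
    of business-strategy and IS-strategy scores on each dimension."""
    return (np.asarray(business) * np.asarray(is_strategy)).mean()

# Example: scores on 4 illustrative strategy dimensions, each rated 1-7.
business = [6, 5, 3, 4]
is_strategy = [5, 5, 2, 4]
print(matching_alignment(business, is_strategy))   # closer to 0 = better fit
print(moderation_alignment(business, is_strategy)) # higher = more synergy
```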
Clusters, language models, and ad hoc information retrieval
The language-modeling approach to information retrieval provides an effective statistical framework for tackling various problems and often achieves impressive empirical performance. However, most previous work on language models for information retrieval focused on document-specific characteristics, and therefore did not take into account the structure of the surrounding corpus, a potentially rich source of additional information. We propose a novel algorithmic framework in which information provided by document-based language models is enhanced by the incorporation of information drawn from clusters of similar documents. Using this framework, we develop a suite of new algorithms. Even the simplest typically outperforms the standard language-modeling approach in terms of mean average precision (MAP) and recall, and our new interpolation algorithm posts statistically significant performance improvements for both metrics over all six corpora tested. An important aspect of our work is the way we model corpus structure. In contrast to most previous work on cluster-based retrieval that partitions the corpus, we demonstrate the effectiveness of a simple strategy based on a nearest-neighbors approach that produces overlapping clusters.
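A minimal sketch of the interpolation idea the abstract describes, assuming simple maximum-likelihood unigram models and an illustrative mixing weight lam (the paper's actual estimators and smoothing will differ):

```python
# Minimal sketch of interpolating a document language model with a
# cluster language model; names and smoothing are illustrative.
from collections import Counter

def lm(texts):
    """Maximum-likelihood unigram model over a list of token lists."""
    counts = Counter(tok for text in texts for tok in text)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolated_score(query, doc, cluster, lam=0.8, eps=1e-9):
    """Score query by p(q|d) interpolated with p(q|cluster(d))."""
    p_doc, p_clu = lm([doc]), lm(cluster)
    score = 1.0
    for w in query:
        score *= lam * p_doc.get(w, eps) + (1 - lam) * p_clu.get(w, eps)
    return score

doc = "language models for retrieval".split()
cluster = [doc, "statistical language modeling".split(),
           "document retrieval with models".split()]  # overlapping cluster
print(interpolated_score("language retrieval".split(), doc, cluster))
```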
Mindfulness based cognitive therapy versus treatment as usual in adults with attention deficit hyperactivity disorder (ADHD)
BACKGROUND Adults with attention deficit hyperactivity disorder (ADHD) often present with a lifelong pattern of core symptoms that is associated with impairments of functioning in daily life. This has a substantial personal and economic impact. In clinical practice there is a high need for additional or alternative interventions to existing treatments, which usually consist of pharmacotherapy and/or psycho-education. Although previous studies show preliminary evidence for the effectiveness of mindfulness-based interventions in reducing ADHD symptoms and improving executive functioning, these studies have methodological limitations. This study will take account of these limitations and will examine the effectiveness of Mindfulness Based Cognitive Therapy (MBCT) in further detail. METHODS/DESIGN A multi-centre, parallel-group, randomised controlled trial will be conducted in N = 120 adults with ADHD. Patients will be randomised to MBCT in addition to treatment as usual (TAU) or TAU alone. Assessments will take place at baseline and at three, six and nine months after baseline. The primary outcome measure will be severity of ADHD symptoms rated by a blinded clinician. Secondary outcome measures will be self-reported ADHD symptoms, executive functioning, mindfulness skills, self-compassion, positive mental health and general functioning. In addition, a cost-effectiveness analysis will be conducted. DISCUSSION This trial will offer valuable information about the clinical and cost-effectiveness of MBCT in addition to TAU compared to TAU alone in adults with ADHD. TRIAL REGISTRATION ClinicalTrials.gov NCT02463396. Registered 8 June 2015.
Real-Time Machine Learning: The Missing Pieces
Machine learning applications are increasingly deployed not only to serve predictions using static models, but also as tightly-integrated components of feedback loops involving dynamic, real-time decision making. These applications pose a new set of requirements, none of which are difficult to achieve in isolation, but the combination of which creates a challenge for existing distributed execution frameworks: computation with millisecond latency at high throughput, adaptive construction of arbitrary task graphs, and execution of heterogeneous kernels over diverse sets of resources. We assert that a new distributed execution framework is needed for such ML applications and propose a candidate approach with a proof-of-concept architecture that achieves a 63x performance improvement over a state-of-the-art execution framework for a representative application.
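A toy sketch of the "adaptive task graph" requirement, using Python futures purely for illustration; this is not the paper's proposed framework, which targets millisecond-latency distributed execution:

```python
# Illustrative only: a tiny "dynamic task graph" built with futures,
# where new tasks are spawned based on earlier results at runtime.
from concurrent.futures import ThreadPoolExecutor

def sense():            # e.g., read a simulator or sensor
    return 0.7

def act(reading):       # downstream task chosen from the result
    return "brake" if reading > 0.5 else "coast"

with ThreadPoolExecutor() as pool:
    reading = pool.submit(sense).result()   # first node of the graph
    decision = pool.submit(act, reading)    # graph grows adaptively
    print(decision.result())
```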
Computability in the Lattice of Equivalence Relations
We investigate computability in the lattice of equivalence relations on the natural numbers. We mostly investigate whether appropriately defined sets of subrecursive equivalence relations (for example, the set of all polynomial-time decidable equivalence relations) form sublattices of the lattice.
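For reference, the lattice structure in question is the standard one on equivalence relations (textbook definitions, not notation taken from the paper): meet is intersection, and join is the transitive closure of the union.

```latex
% Standard lattice operations on equivalence relations R, S over N:
\begin{align*}
  R \wedge S &= R \cap S, \\
  R \vee S   &= \operatorname{tc}(R \cup S)
             = \bigcup_{n \ge 1} (R \cup S)^{n},
\end{align*}
% where $(R \cup S)^{n}$ is the $n$-fold relational composition;
% $R \cup S$ is already reflexive and symmetric, so its transitive
% closure is again an equivalence relation.
```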
Treatment of poststroke pain by epidural motor cortex stimulation with a new octopolar lead.
BACKGROUND Chronic, drug-resistant neuropathic pain can be treated by surgically implanted motor cortex stimulation (MCS). The leads used for MCS have not been specifically designed for this application. OBJECTIVE To study the value of a new 8-contact lead for MCS therapy in a series of 6 patients with refractory central poststroke pain. METHODS The study comprised a 1-month randomized phase, starting 1 month after implantation, during which the neurostimulator was switched on in one-half of the patients or remained off in the other half, followed by an open phase of 10 months, during which the stimulator was switched on in all patients. Clinical assessment was performed at baseline and 1, 2, 3, 6, and 12 months after implantation with the following scales: Visual Analog Scale, Verbal Rating Scale, Brief Pain Inventory, McGill Pain Questionnaire, Sickness Impact Profile, and Medication Quantification Scale. RESULTS In the randomized phase, clinical scores were found to be globally reduced in the on- vs off-stimulation condition. In the open follow-up phase, all clinical scores improved significantly over time. The ratio between affective and sensory McGill Pain Questionnaire subscores decreased, suggesting a preferential effect of MCS on the affective component of pain. Compared with preoperative baseline, 2 patients were totally relieved of central poststroke pain, 3 patients were very much relieved, and 1 patient remained unchanged at the final examination. CONCLUSION A good clinical outcome was observed in all patients except 1, suggesting that this new octopolar lead could be used for MCS therapy to treat refractory central poststroke pain.
Analysis of L-Band Digital Aeronautical Communication Systems: L-DACS1 and L-DACS2
New air-to-ground wireless datalinks are needed to supplement existing civil aviation technologies. The 960–1164 MHz portion of the IEEE L band has been identified as a candidate spectrum. EUROCONTROL, the European Organisation for the Safety of Air Navigation, has funded two parallel projects that developed two proposals, called L-DACS1 and L-DACS2. Although a significant amount of literature on each technology is available from the two teams that designed the respective proposals, there is very little independent comparison of the two. The goal of this paper is to provide this comparison. We compare the two proposals in terms of their scalability, spectral efficiency, and interference resistance. Both technologies have to coexist with several other aeronautical technologies that use the same L band.
[Effects of an exercise training program on the quality of life of women with breast cancer on chemotherapy].
BACKGROUND Exercise may reduce the anxiety and depression associated with the diagnosis and treatment of cancer. AIM To assess the effects of a physical training program during chemotherapy among women with breast cancer. PATIENTS AND METHODS Twenty-two women aged 49 +/- 7 years with breast cancer voluntarily agreed to take part in the study after surgical treatment. Functional capacity (Karnofsky Performance Status), psychological status (General Health Questionnaire, GHQ) and quality of life (EORTC QLQ-C30) were evaluated at baseline and at the end of the study. Before beginning adjuvant chemotherapy, ten women were randomly assigned to a program of physical exercise and seven to a control group. The program lasted 18 to 22 weeks, depending on the duration of chemotherapy. RESULTS Five women were lost to follow-up. Before starting chemotherapy, 41% of the women were working and all had to quit. At baseline all had a normal Karnofsky score and quality of life was compromised. At the end of the study, the intervention group showed an improvement in their quality of life, compared to the control group, which did not experience significant changes. CONCLUSIONS An exercise training program improves the quality of life of women with breast cancer on chemotherapy.
Quality of Experience of VoIP Service: A Survey of Assessment Approaches and Open Issues
This survey gives a comprehensive review of recent advances related to VoIP QoE (Quality of Experience). It starts by providing some insight into the QoE arena and outlines the principal building blocks of a VoIP application. The sources of impairment over IP data networks are identified and distinguished from the signal-oriented sources of quality degradation observed over telecom networks. An overview of existing subjective and objective methodologies for assessing the QoE of voice conversations is then presented, outlining how subjective and objective speech quality methodologies have evolved to account for time-varying QoS transport networks. A description of practical procedures for measuring VoIP QoE, with illustrative results, is then given, and the methodology for using several speech quality assessment frameworks is summarized. A survey of emerging single-ended parametric-model speech quality assessment algorithms dedicated to VoIP service follows. In particular, after presenting an early single-ended parametric-model algorithm conceived especially for the evaluation of VoIP conversations, new artificial assessors of VoIP service are detailed, including speech quality assessment algorithms that account for, among other factors, packet loss burstiness, the unequal importance of speech segments, and transient loss of connectivity. The following section concentrates on the integration of VoIP service over mobile data networks: the impact of quality-affecting phenomena such as handovers and codec changeovers is enumerated, and some preliminary subjective results are summarized. The survey concludes with a review of open issues relating to the automatic assessment of VoIP.
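As one concrete example of a parametric model in this family, a sketch in the style of the ITU-T E-model is shown below; the impairment values are illustrative defaults, not parameters from the survey:

```python
# Sketch of an E-model-style mapping from a transmission rating
# factor R to an estimated MOS (impairment values illustrative).
def r_factor(Ro=93.2, Is=1.4, Id=7.0, Ie_eff=11.0, A=0.0):
    """R = Ro - Is - Id - Ie,eff + A (simplified parametric form)."""
    return Ro - Is - Id - Ie_eff + A

def mos_from_r(R):
    """Standard E-model R-to-MOS conversion curve."""
    if R <= 0:
        return 1.0
    if R >= 100:
        return 4.5
    return 1 + 0.035 * R + R * (R - 60) * (100 - R) * 7e-6

print(mos_from_r(r_factor()))  # ~3.8 for these example impairments
```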
The distal triceps tendon footprint and a biomechanical analysis of 3 repair techniques.
BACKGROUND Anatomic repair of tendon ruptures is an important goal of surgical treatment. There are limited data on the triceps brachii insertion, footprint, and anatomic reconstruction of the distal triceps tendon. HYPOTHESIS An anatomic repair of distal triceps tendon ruptures more closely imitates the preinjury anatomy and may result in a more durable repair. STUDY DESIGN Descriptive and controlled laboratory studies. METHODS The triceps tendon footprint was measured in 27 cadaveric elbows, and a distal tendon rupture was created. Elbows were randomly assigned to 1 of 3 repair groups: cruciate repair, suture anchor, and anatomic repair. Load at yield and peak load were measured biomechanically. Cyclic loading was performed for a total of 1500 cycles and displacement was measured. RESULTS The average bony footprint of the triceps tendon was 466 mm². Cyclic loading of tendons from the 3 repair types demonstrated that the anatomic repair produced the least displacement compared with the other repair types (P < .05). Load at yield and peak load were similar for all repair types (P > .05). CONCLUSION The triceps bony footprint is a large area on the olecranon that should be considered when repairing distal triceps tendon ruptures. Anatomic repair of triceps tendon ruptures demonstrated the most anatomic restoration of the distal triceps and showed statistically significantly less repair-site motion when cyclically loaded. CLINICAL RELEVANCE Anatomic repair better restores preinjury anatomy compared with other types of repairs and demonstrates less repair-site motion, which may play a role in early postoperative management.
ON THE USE OF ZERO-CROSSING RATE FOR AN APPLICATION OF CLASSIFICATION OF PERCUSSIVE SOUNDS
We address the issue of automatically extracting rhythm descriptors from audio signals, to be used eventually in content-based musical applications such as those in the context of MPEG-7. Our aim is to approach the comprehension of auditory scenes in raw polyphonic audio signals without preliminary source separation. As a first step towards the automatic extraction of rhythmic structures from signals taken from the popular music repertoire, we propose an approach for automatically extracting the time indexes of occurrences of different percussive timbres in an audio signal. Within this framework, we found that a particular issue lies in the classification of percussive sounds. In this paper, we report on the method currently used to deal with this problem.
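The zero-crossing rate itself is a standard, cheaply computed feature; a minimal sketch follows (the paper's exact feature set and classifier are not reproduced here):

```python
# Minimal zero-crossing rate computation for a signal frame
# (standard definition; signals below are synthetic examples).
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.sign(frame)
    return np.mean(signs[:-1] != signs[1:])

t = np.linspace(0, 1, 8000, endpoint=False)
snare_like = np.random.default_rng(0).normal(size=8000)  # broadband/noisy
kick_like = np.sin(2 * np.pi * 60 * t)                    # low-frequency
print(zero_crossing_rate(snare_like))  # high ZCR -> "noisy" percussive
print(zero_crossing_rate(kick_like))   # low ZCR -> low-pitched drum
```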
Necklace and Calabash: A Chinese Detective Story
Brought back into print in the 1990s to wide acclaim, re-designed new editions of Robert van Gulik's Judge Dee Mysteries are now available. Written by a Dutch diplomat and scholar during the 1950s and 1960s, these lively and historically accurate mysteries have entertained a devoted following for decades. Set during the T'ang dynasty, they feature Judge Dee, a brilliant and cultured Confucian magistrate disdainful of personal luxury and corruption, who cleverly selects allies to help him navigate the royal courts, politics, and ethnic tensions of imperial China. Robert van Gulik modeled Judge Dee on a magistrate of that name who lived in the seventh century, and he drew on stories and literary conventions of Chinese mystery writing dating back to the Sung dynasty to construct his ingenious plots. "Necklace and Calabash" finds Judge Dee returning to his district of Poo-yang, where the peaceful town of Riverton promises a few days' fishing and relaxation. Yet a chance meeting with a Taoist recluse, a gruesome body fished out of the river, strange guests at the Kingfisher Inn, and a princess in distress thrust the judge into one of the most intricate and baffling mysteries of his career. An expert on the art and erotica as well as the literature, religion, and politics of China, van Gulik also provides charming illustrations to accompany his engaging and entertaining mysteries.
Cilostazol as an add-on therapy for patients with Alzheimer’s disease in Taiwan: a case control study
BACKGROUND Combination therapy using acetylcholinesterase inhibitors (AChEIs) and cilostazol is of unknown efficacy for patients with Alzheimer's disease (AD). METHODS We explored the therapeutic responses using a case-control study conducted in Taiwan. We enrolled 30 participants with stable AD who were receiving cilostazol (50 mg) twice per day as an add-on therapy combined with AChEIs, and 30 participants as controls who were not receiving cilostazol as an add-on therapy. The therapeutic responses were measured using neuropsychological assessments and analyzed in relation to cilostazol use, apolipoprotein E genotype, and demographic characteristics. The mini-mental state examination (MMSE) and clinical dementia rating sum of boxes (CDR-SB) were administered at the outset of the study and 12 months later. Multiple logistic regression analysis was used to estimate the association between the therapeutic response and cilostazol use. RESULTS For the therapeutic indicator of cognition, cilostazol use (adjusted odds ratio (aOR) = 0.17, 95% confidence interval (CI) = 0.03-0.80), initial CDR-SB score (aOR = 2.06, 95% CI = 1.31-3.72), and initial MMSE score (aOR = 1.41, 95% CI = 1.11-1.90), but not age, sex, education, or ApoE ε4 status, were significantly associated with poor therapeutic outcomes. For the therapeutic indicator of global status, no significant association was observed between the covariates and poor therapeutic outcomes. CONCLUSIONS Cilostazol may reduce the decline of cognitive function in stable AD patients when applied as an add-on therapy.
Printing arbitrary meshes with a 5DOF wireframe printer
Traditional 3D printers fabricate objects by depositing material to build up the model layer by layer. Instead, printing only wireframes can reduce printing time and the cost of material while producing effective depictions of shape. However, wireframe printing requires the printer to undergo arbitrary 3D motions, rather than slice-wise 2D motions, which can lead to collisions with already-printed parts of the model. Previous work has either limited itself to restricted meshes that are collision free by construction, or simply dropped unreachable parts of the model, but in this paper we present a method to print arbitrary meshes on a 5DOF wireframe printer. We formalize the collision avoidance problem using a directed graph, and propose an algorithm that finds a locally minimal set of constraints on the order of edges that guarantees there will be no collisions. Then a second algorithm orders the edges so that the printing progresses smoothly. Though meshes do exist that still cannot be printed, our method prints a wide range of models that previous methods cannot, and it provides a fundamental enabling algorithm for future development of wireframe printing.
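Once a set of (before, after) constraints on edges has been found, ordering the edges is essentially a topological sort of the constraint graph; a sketch with hypothetical edge names follows (the paper's constraint-finding and smoothing steps are separate and not reproduced here):

```python
# Sketch of ordering print edges under precedence constraints
# via Kahn's algorithm (edge names and constraints are made up).
from collections import defaultdict, deque

def order_edges(edges, constraints):
    """Return a print order satisfying all (before, after)
    constraints, or None if the constraints are cyclic."""
    indeg = {e: 0 for e in edges}
    succ = defaultdict(list)
    for before, after in constraints:
        succ[before].append(after)
        indeg[after] += 1
    queue = deque(e for e in edges if indeg[e] == 0)
    order = []
    while queue:
        e = queue.popleft()
        order.append(e)
        for nxt in succ[e]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    return order if len(order) == len(edges) else None

edges = ["AB", "BC", "AC", "CD"]
constraints = [("AB", "AC"), ("BC", "AC"), ("AC", "CD")]  # nozzle clearance
print(order_edges(edges, constraints))  # ['AB', 'BC', 'AC', 'CD']
```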
An improved box-counting method for image fractal dimension estimation
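The title refers to the classic box-counting estimator of fractal dimension; a minimal sketch of that baseline method follows (the paper's improvement is not reproduced here), using a toy binary image:

```python
# Classic box-counting estimate of fractal dimension for a
# binary image; grid sizes and test image are illustrative.
import numpy as np

def box_count(img, size):
    """Count boxes of the given size containing at least one set pixel."""
    h, w = img.shape
    return sum(img[i:i + size, j:j + size].any()
               for i in range(0, h, size)
               for j in range(0, w, size))

def fractal_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Slope of log N(s) versus log(1/s) gives the dimension estimate."""
    counts = [box_count(img, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

img = np.random.default_rng(1).random((64, 64)) > 0.5  # toy binary image
print(fractal_dimension(img))  # close to 2 for a dense random plane
```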
The Distance-Weighted k-Nearest-Neighbor Rule
Among the simplest and most intuitively appealing classes of nonprobabilistic classification procedures are those that weight the evidence of nearby sample observations most heavily. More specifically, one might wish to weight the evidence of a neighbor close to an unclassified observation more heavily than the evidence of another neighbor which is at a greater distance from the unclassified observation. One such classification rule is described which makes use of a neighbor weighting function for the purpose of assigning a class to an unclassified sample. The admissibility of such a rule is also considered.
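A sketch of the rule with inverse-distance weights, one common choice of neighbor weighting function (the specific weighting function proposed in the paper may differ):

```python
# Distance-weighted k-NN: each of the k nearest neighbors votes for
# its class with weight 1/distance, so closer neighbors count more.
import numpy as np
from collections import defaultdict

def dw_knn_predict(X, y, query, k=3, eps=1e-12):
    """Classify query by distance-weighted votes of its k nearest."""
    dists = np.linalg.norm(np.asarray(X) - np.asarray(query), axis=1)
    nearest = np.argsort(dists)[:k]
    votes = defaultdict(float)
    for i in nearest:
        votes[y[i]] += 1.0 / (dists[i] + eps)  # eps guards zero distance
    return max(votes, key=votes.get)

X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6]]
y = ["a", "a", "a", "b", "b"]
print(dw_knn_predict(X, y, [4.5, 5.0], k=3))  # 'b': near pair dominates
```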